{
"paper_id": "E14-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:39:56.803795Z"
},
"title": "About Inferences in a Crowdsourced Lexical-Semantic Network",
"authors": [
{
"first": "Manel",
"middle": [],
"last": "Zarrouk",
"suffix": "",
"affiliation": {
"laboratory": "UM2-LIRMM",
"institution": "",
"location": {
"addrLine": "161 rue Ada",
"postCode": "34095",
"settlement": "Montpellier",
"country": "FRANCE"
}
},
"email": "manel.zarrouk@lirmm.fr"
},
{
"first": "Mathieu",
"middle": [],
"last": "Lafourcade",
"suffix": "",
"affiliation": {
"laboratory": "UM2-LIRMM",
"institution": "",
"location": {
"addrLine": "161 rue Ada",
"postCode": "34095",
"settlement": "Montpellier",
"country": "FRANCE"
}
},
"email": "mathieu.lafourcade@lirmm.fr"
},
{
"first": "Alain",
"middle": [],
"last": "Joubert",
"suffix": "",
"affiliation": {
"laboratory": "UM2-LIRMM",
"institution": "",
"location": {
"addrLine": "161 rue Ada",
"postCode": "34095",
"settlement": "Montpellier",
"country": "FRANCE"
}
},
"email": "alain.joubert@lirmm.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatically inferring new relations from already existing ones is a way to improve the quality of a lexical network by relation densification and error detection. In this paper, we devise such an approach for the JeuxDeMots lexical network, which is a freely avalaible lexical network for French. We first present deduction (generic to specific) and induction (specific to generic) which are two inference schemes ontologically founded. We then propose abduction as a third form of inference scheme, which exploits examples similar to a target term.",
"pdf_parse": {
"paper_id": "E14-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatically inferring new relations from already existing ones is a way to improve the quality of a lexical network by relation densification and error detection. In this paper, we devise such an approach for the JeuxDeMots lexical network, which is a freely avalaible lexical network for French. We first present deduction (generic to specific) and induction (specific to generic) which are two inference schemes ontologically founded. We then propose abduction as a third form of inference scheme, which exploits examples similar to a target term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Building resources for Computational Linguistics (CL) is of crucial interest. Most of existing lexical-semantic networks have been built by hand (like for instance WordNet (Miller et al., 1990) ) and, despite that tools are generally designed for consistency checking, the task remains time consuming and costly. Fully automated approaches are generally limited to term co-occurrences as extracting precise semantic relations between terms from corpora remains really difficult. Meanwhile, crowdsourcing approaches are flowering in CL especially with the advent of Amazon Mechanical Turk or in a broader scope Wikipedia and Wiktionary, to cite the most well-known examples. WordNet is such a lexical network, constructed by hand at great cost, based on synsets which can be roughly considered as concepts (Fellbaum, 1988) . Eu-roWordnet (Vossen., 1998) a multilingual version of WordNet and WOLF (Sagot., 2008) a French version of WordNet, were built by automated crossing of WordNet and other lexical resources along with some manual checking. Navigli (2010) constructed automatically BabelNet a large multilingual lexical network from term cooccurrences in Wikipedia.",
"cite_spans": [
{
"start": 172,
"end": 193,
"text": "(Miller et al., 1990)",
"ref_id": "BIBREF9"
},
{
"start": 805,
"end": 821,
"text": "(Fellbaum, 1988)",
"ref_id": null
},
{
"start": 837,
"end": 852,
"text": "(Vossen., 1998)",
"ref_id": null
},
{
"start": 896,
"end": 910,
"text": "(Sagot., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A lexical-semantic network can contain lemmas, word forms and multi-word expressions as entry points (nodes) along with word meanings and concepts. The idea itself of word senses in the lexicographic tradition may be debatable in the context of resources for semantic analysis, and we generally prefer to consider word usages. A given polysemous word, as identified by locutors, has several usages that might differ substantially from word senses as classically defined. A given usage can also in turn have several deeper refinements and the whole set of usages can take the form of a decision tree. For example, frigate can be a bird or a ship. A frigate>boat can be distinguished as a modern ship with missiles and radar or an ancient vessel with sails. In the context of a collaborative construction, such a lexical resource should be considered as being constantly evolving and a general rule of thumb is to have no definite certitude about the state of an entry. For a polysemic term, some refinements might be just missing at a given time notwithstanding evolution of language which might be very fast, especially in technical domains. There is no way (unless by inspection) to know if a given entry refinements are fully completed, and even if this question is really relevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The building of a collaborative lexical network (or, in all generality, any similar resource) can be devised according to two broad strategies. First, it can be designed as a contributive system like Wikipedia where people willingly add and complete entries (like for Wiktionary). Second, contributions can be made indirectly thanks to games (better known as GWAP (vonAhn, 2008) ) and in this case players do not need to be aware that while playing they are helping building a lexical resource. In any case, the built lexical network is not free of errors which are corrected along their discovery. Thus, a large number of obvious relations are not contained in the lexical network but are indeed necessary for a high quality resources usable in various NLP applications and notably semantic analysis. For example, contributors seldom indicate that a particular bird type can fly, as it is considered as an obvious generality. Only notable facts which are not easily deductible are naturally contributed. Well known exceptions are also generally contributed and take the form of a negative weight and annotated as such (for example, fly",
"cite_spans": [
{
"start": 364,
"end": 378,
"text": "(vonAhn, 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "ag ent :\u2212100 \u2212 \u2212\u2212\u2212\u2212\u2212\u2212 \u2192 ostrich [exception: bird]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to consolidate the lexical network, we adopt a strategy based on a simple inference mechanism to propose new relations from those already existing. The approach is strictly endogenous (i.e. self-contained) as it doesn't rely on any other external resources. Inferred relations are submitted either to contributors for voting or to experts for direct validation/invalidation. A large percentage of the inferred relations has been found to be correct however, a non-negligible part of them are found to be wrong and understanding why is both interesting and useful. The explanation process can be viewed as a reconciliation between the inference engine and contributors who are guided through a dialog to explain why they found the considered relation incorrect. The possible causes for a wrong inferred relation may come from three possible origins: false premises that were used by the inference engine, exception or confusion due to some polysemy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In (Sajous et al., 2013) an endogenous enrichment of Wiktionary is done thanks to a crowdsourcing tool. A quite similar approach of using crowdsourcing has been considered by (Zeichner, 2012) for evaluating inference rules that are discovered from texts. In (Krachina, 2006) , some specific inference methods are conducted on text with the help of an ontology. Similarly, (Besnard, 2008) capture explanation with ontology-based inference. OntoLearn (Velardi, 2006) is a system that automatically build ontologies of specific domains from texts and also makes use of inferences. There have been also researchs on taxonomy induction based on WordNet (Snow, 2006) . Although extensive work on inference from texts or handcrafted resources has been done, almost none endogenously on lexical network built by the crowds. Most probably the main reason of that situation is the lack of such specific resources.",
"cite_spans": [
{
"start": 3,
"end": 24,
"text": "(Sajous et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 258,
"end": 274,
"text": "(Krachina, 2006)",
"ref_id": "BIBREF3"
},
{
"start": 372,
"end": 387,
"text": "(Besnard, 2008)",
"ref_id": "BIBREF1"
},
{
"start": 449,
"end": 464,
"text": "(Velardi, 2006)",
"ref_id": "BIBREF16"
},
{
"start": 648,
"end": 660,
"text": "(Snow, 2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this article, we first present the principles behind the lexical network construction with crowdsourcing and games with a purpose (also know as human-based computation games) and illustrated them with the JeuxDeMots (JDM) project. Then, we present the outline of an elicitation engine based on an inference engine using deduction, induction and especially abduction schemes. An experimentation is then presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For validating our approach, we used the JDM lexical network, which is constructed thanks to a set of associatory games (Lafourcade, 2007) and has been made freely available by its authors. There is an increasing trend of using online GWAPs (game with a purpose (Thaler et al., 2011) ) method for feeding such resources. Beside manual or automated strategies, contributive approaches are flowering and becoming more and more popular as they are both cheap to set up and efficient in quality.",
"cite_spans": [
{
"start": 120,
"end": 138,
"text": "(Lafourcade, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 262,
"end": 283,
"text": "(Thaler et al., 2011)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Crowdsourced Lexical Networks",
"sec_num": "2"
},
{
"text": "The network is composed of terms (as vertices) and typed relations (as links between vertices) with weight. It contains terms and possible refinements. There are more than 50 types of relations, that range from ontological (hypernym, hyponym), to lexical-semantic (synonym, antonym) and to semantic role (agent, patient, instrument). The weight of a relation is interpreted as a strength, but not directly as a probability of being valid. The JDM network is not an ontology with some clean hierarchy of concepts or terms. A given term can have a substantial set of hypernyms that covers a large part of the ontological chain to upper concepts. For example, hypernym(cat) = {feline, mammal, living being, pet, vertebrate, ...}. Heavier weights associated to relations are those felt by users as being the most relevant. The 1st January 2014, there are more than 6 700 000 relations and roughly 310 000 lexical items in the JDM lexical network (according to the figures given by the game site: http://jeuxdemots.org).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowdsourced Lexical Networks",
"sec_num": "2"
},
{
"text": "To our knowledge, there is no other existing freely available crowdsourced lexical-network, especially with weighted relations, thus enabling strongly heuristic methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowdsourced Lexical Networks",
"sec_num": "2"
},
{
"text": "Adding new relations to the JDM lexical network may rely on two components: (a) an inference engine and (b) a reconciliator. The inference engine proposes relations as a contributor to be validated by other human contributors or experts. In case of invalidation of an inferred relation, the reconciliator is invoked to try to assess why the inferred relation was found wrong. Elicitation here should be understood as the process to transform some implicit knowledge of the user into explicit relations in the lexical network. The core ideas about inferences in our engine are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring with Deduction & Induction",
"sec_num": "3"
},
{
"text": "\u2022 inferring is to derive new premises (as relations between terms) from previously known premises, which are existing relations; \u2022 candidate inferences may be logically blocked on the basis of the presence or the absence of some other relations; \u2022 candidate inferences can be filtered out on the basis of a strength evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring with Deduction & Induction",
"sec_num": "3"
},
{
"text": "Inferring by deduction is a top-down scheme based on the transitivity of the relation is-a (hypernym). If a term A is a kind of B and B holds some relation R with C, then we can expect that A holds the same relation type with C. The scheme can be formally written as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deduction Scheme",
"sec_num": "3.1"
},
{
"text": "\u2203 A i s\u2212a \u2212 \u2212\u2212 \u2192 B \u2227 \u2203 B R \u2212\u2192 C \u21d2 A R \u2212\u2192 C. For example, shark i s\u2212a \u2212 \u2212\u2212 \u2192 fish and fish has\u2212par t \u2212 \u2212\u2212\u2212\u2212\u2212\u2212 \u2192 fin, thus we can expect that shark has\u2212par t \u2212 \u2212\u2212\u2212\u2212\u2212\u2212 \u2192 fin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deduction Scheme",
"sec_num": "3.1"
},
{
"text": "The inference engine is applied on terms having at least one hypernym (the scheme could not be applied otherwise). Of course, this scheme is far too naive, especially considering the resource we are dealing with and may produce wrong relations (noise). In effect, the central term B is possibly polysemous and ways to avoid probably wrong inferences can be done through a logical blocking: if there are two distinct meanings for B that hold respectively the first and the second relation, then most probably the inferred relation R(3) is wrong (see figure 1 ) and hence should be blocked. Moreover, if one of the premises is tagged by contributors as true but irrelevant, then the inference is blocked.",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 557,
"text": "figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Deduction Scheme",
"sec_num": "3.1"
},
{
"text": "Figure 1: Triangular inference scheme where the logical blocking based on the polysemy of the central term B which has two distinct meanings B i and B j is applied. The two arrows without label are those of word meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Bi",
"sec_num": null
},
{
"text": "It is possible to evaluate a confidence level (on an open scale) for each produced inference, in a way that dubious inferences can be eliminated out through statistical filtering. The weight w of an inferred relation is the geometric mean of the weight of the premises (relations (1) and (2) in Figure 1 ). If the second premise has a negative value, the weight is not a number and the proposal is discarded. As the geometric mean is less tolerant to small values than the arithmetic mean, inferences which are not based on two rather strong relations (premises) are unlikely to pass.",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 303,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Bi",
"sec_num": null
},
{
"text": "w(A −R→ C) = ( w(A −is-a→ B) × w(B −R→ C) )^(1/2) ⇒ w3 = (w1 × w2)^(1/2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Bi",
"sec_num": null
},
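The deduction scheme and its geometric-mean filtering can be sketched as follows (a minimal illustration on toy data; the triple representation and the `deduce` helper are our own assumptions, not part of the JDM code base):

```python
from math import sqrt

# Toy network: weighted relations (source, type, target, weight).
relations = [
    ("shark", "is-a", "fish", 50),
    ("fish", "has-part", "fin", 40),
]

def deduce(relations):
    """Triangular deduction: A -is-a-> B and B -R-> C  =>  A -R-> C.
    The proposed weight is the geometric mean of the two premises;
    a negative second premise discards the proposal."""
    proposals = []
    for (a, r1, b, w1) in relations:
        if r1 != "is-a":
            continue
        for (b2, r, c, w2) in relations:
            if b2 != b or r == "is-a":
                continue
            if w2 < 0:
                continue  # negative premise: weight is not a number, discard
            proposals.append((a, r, c, sqrt(w1 * w2)))
    return proposals

print(deduce(relations))
```

With the toy data above, the single proposal is shark −has-part→ fin with weight sqrt(50 × 40) ≈ 44.7.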
{
"text": "Inducing a transitive closure over a knowledge base is not new, but doing so considering word meanings over a crowdsourced lexical network is an original approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Bi",
"sec_num": null
},
{
"text": "As for the deductive inference, induction exploits the transitivity of the relation is-a. If a term A is a kind of B and A holds a relation R with C , then we might expect that B could hold the same type of relation with C . More formally we can write: \u2212 \u2212\u2212\u2212\u2212 \u2192 jaw. This scheme is a generalization inference. The principle is similar to the one applied to the de-duction scheme and similarly some logical and statistical filtering may be undertaken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induction Scheme",
"sec_num": "3.2"
},
{
"text": "\u2203 A i s\u2212a \u2212 \u2212\u2212 \u2192 B \u2227 \u2203 A R \u2212\u2192 C \u21d2 B R \u2212\u2192 C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induction Scheme",
"sec_num": "3.2"
},
{
"text": "B C A Ai Aj ( 1 ) i s -a : w 1 ( 2 ) R : w 3 ( 5 ) i s -a ( 4 ) R (3) R ? : w 2 Figure 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induction Scheme",
"sec_num": "3.2"
},
{
"text": "(1) and (2) are the premises, and 3is the induction proposed for validation. Term A may be polysemous with meanings holding premises, thus inducing a probably wrong relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induction Scheme",
"sec_num": "3.2"
},
{
"text": "The central term here A, is possibly polysemous (as shown in Figure 2 ). In that case, we have the same polysemy issues than with the deduction, and the inference may be blocked. The estimated weight for the induced relation is:",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 69,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Induction Scheme",
"sec_num": "3.2"
},
{
"text": "w(B R \u2212\u2192 C) = (w(A R \u2212\u2192 C)) 2 / w(A i s\u2212a \u2212 \u2212\u2212 \u2192 B) \u21d2 w2 = (w 3 ) 2 /w 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induction Scheme",
"sec_num": "3.2"
},
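The induced weight w2 = w3² / w1 can be computed as in this small sketch (the helper name `induction_weight` is ours, used for illustration only):

```python
def induction_weight(w1_is_a, w3_rel):
    """Weight of the induced relation B -R-> C from the premises
    A -is-a-> B (weight w1) and A -R-> C (weight w3): w2 = w3**2 / w1.
    A specific relation much stronger than its is-a premise
    generalizes with a high weight."""
    if w1_is_a <= 0 or w3_rel < 0:
        return None  # negative or void premises induce nothing
    return w3_rel ** 2 / w1_is_a

# e.g. is-a weight 50 and specific relation weight 40 -> induced weight 32
print(induction_weight(50, 40))
```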
{
"text": "Inferred relations are presented to the validator to decide of their status. In case of invalidation, a reconciliation procedure is launched in order to diagnose the reasons: error in one of the premises (previously existing relations are false), exception or confusion due to polysemy (the inference has been made on a polysemous central term). A dialog is initiated with the user (Cohen's kappa of 0.79). To know in which order to proceed, the reconciliator checks if the weights of the premises are rather strong or weak.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performing Reconciliation",
"sec_num": "3.3"
},
{
"text": "Errors in the premises. We suppose that relation (1) (in Figure 1 and 2) has a relatively low weight. The reconciliation process asks the validator if the relation (1) is true. It sets a negative weight to this relation if not so that the engine blocks further inferences. Else, if relation (1) is true, we ask about relation (2) and proceed as above if the answer is negative. Otherwise, we check the other cases (exception, polysemy).",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 65,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performing Reconciliation",
"sec_num": "3.3"
},
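The order of the premise-checking dialog can be sketched as a small decision function (a simplification; the boolean arguments stand for the validator's yes/no answers):

```python
def reconcile_premises(premise1_true, premise2_true):
    """Premise-checking order of the reconciliator: a premise declared
    false receives a negative weight, which blocks further inferences;
    if both premises hold, move on to the exception/polysemy checks."""
    if not premise1_true:
        return ("set-negative-weight", "premise-1")
    if not premise2_true:
        return ("set-negative-weight", "premise-2")
    return ("check", "exception-or-polysemy")

print(reconcile_premises(True, False))
```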
{
"text": "Errors due to Exceptions. For the deduction, in case we have two trusted relations, the reconciliation process asks the validators if the inferred relation is a kind of exception relatively to the term B . If it is the case, the relation is stored in the lexical network with a negative weight and annotated as exception. Relations that are exceptions do not participate further as premises for deducing. For the induction, in case we have two trusted relations, the reconciliator asks the validators if the relation (A R \u2212\u2192 C) (which served as premise) is an exception relatively to the term B . If it is the case, in addition to storing the false inferred relation (B R \u2212\u2192 C) in the lexical network with a negative weight, the relation (A R \u2212\u2192 C) is annotated as exception. In the induction case, the exception is a true premise which leads to a false induced relation. In both cases of induction and deduction, the exception tag concerns always the relation (A R \u2212\u2192 C). Once this relation is annotated as an exception, it will not participate as a premise in inferring generalized relations (bottom-up model) but can still be used in inducing specified relations (top-down model).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performing Reconciliation",
"sec_num": "3.3"
},
{
"text": "Errors due to Polysemy. If the central term (B for deduction and A for induction) presenting a polysemy is mentioned as polysemous in the network, the refinement terms t er m 1 , t er m 2 , . . . t er m n are presented to the validator so she/he can choose the appropriate one. The validator can propose new terms as refinements if she/he is not satisfied with the listed ones (inducing the creation of new appropriate refinements). If there is no meta information indicating that the term is polysemous, we ask first the validator if it is indeed the case. After this procedure, new relations will be included in the network with positive values and the inference engine will use them later on as premises.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performing Reconciliation",
"sec_num": "3.3"
},
{
"text": "The last inferring scheme is built upon abduction and can be viewed as an example based strategy. Hence abduction relies on similarity between terms, which may be formalized in our context as sharing some outgoing relations between terms. The abductive inferring layout supposes that relations held by a term can be proposed to similar terms. Here, abduction first selects a set of similar terms to the target term A which are considered as proper examples. The outgoing relations from the examples which are not common with those of A are proposed as potential relations for A and then presented for validation/invalidation to users. Unlike induction and deduction, abduction can be applied on terms with missing or irrelevant ontological relations, and can generate ontological relations to be used afterward by the inference loop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abductive Inference",
"sec_num": "4"
},
{
"text": "We note an outgoing relation as a 3-uple of a type t , a weight w and a target node n:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Scheme",
"sec_num": "4.1"
},
{
"text": "R i = \u2329 t i , w i , n i \u232a.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Scheme",
"sec_num": "4.1"
},
{
"text": "For example, consider the term A having n outgoing relations. Amongst these relations, we have for example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Scheme",
"sec_num": "4.1"
},
{
"text": "\u2022 beak has\u2212par t \u2190 \u2212\u2212 \u2212 A & \u2022 nest l oc at i on \u2190 \u2212\u2212 \u2212 A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Scheme",
"sec_num": "4.1"
},
{
"text": "We found 3 examples sharing those two relations with the term A:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Scheme",
"sec_num": "4.1"
},
{
"text": "\u2022 beak has\u2212par t \u2190 \u2212\u2212 \u2212 {ex 1 , ex 2 , ex 3 } \u2022 nest l oc at i on \u2190 \u2212\u2212 \u2212 {ex 1 , ex 2 , ex 3 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Scheme",
"sec_num": "4.1"
},
{
"text": "We consider these terms as a set of examples to follow and similar to A. These examples have also other outgoing relations which are proposed as potential relations for A. For example :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Scheme",
"sec_num": "4.1"
},
{
"text": "\u2022 {ex 1 , ex 2 } ag ent \u22121 \u2212 \u2212\u2212 \u2192 fly \u2022 {ex 2 } c ar ac \u2212 \u2212\u2212 \u2192 colorful \u2022 {ex 1 , ex 2 , ex 3 } has\u2212par t \u2212 \u2212\u2212 \u2192 feather \u2022 {ex 3 } ag ent \u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Scheme",
"sec_num": "4.1"
},
{
"text": "\u2212 \u2212\u2212 \u2192 sing We infer that A can hold these relations and we propose them for validation. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Scheme",
"sec_num": "4.1"
},
{
"text": "Applying the abduction procedure crudely on the terms generates a lot of waste as a considerable amount of erroneous inferred relations. Hence, we elaborated a filtering strategy to avoid having a lot of dubious proposed candidates. For this purpose, we define two different threshold pairs. The first threshold pair (\u03b4 1 , \u03c9 1 ) is used to select proper examples x 1 ,x 2 ...x n and is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b4 1 = max(3, nbogr(A) \u00d7 0.1)",
"eq_num": "(1)"
}
],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "where nbogr(A) is the number of outgoing relations from the term A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c9 1 = max(25, mwogr(A) \u00d7 0.5)",
"eq_num": "(2)"
}
],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "where mwogr(A) is the mean of weights of outgoing relations from A. The second threshold pair (\u03b4 2 , \u03c9 2 ) is used to select proper candidate relations from outgoing relations of the examples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R 1 ,R 2 ...R q . \u03b4 2 = max(3, {x i } \u00d7 0.1)",
"eq_num": "(3)"
}
],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "where {x i } is the cardinal of the set {x i }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c9 2 = max(25, mwogr({x i }) \u00d7 0.5)",
"eq_num": "(4)"
}
],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
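The four thresholds (1)-(4) translate directly into code (a sketch; the `nbogr`/`mwogr` statistics are passed in as plain numbers rather than computed from the network):

```python
def example_thresholds(nbogr_a, mwogr_a):
    """(delta1, omega1): minimum number of shared relations and minimum
    relation weight for a term to qualify as an example similar to A."""
    delta1 = max(3, nbogr_a * 0.1)   # eq. (1)
    omega1 = max(25, mwogr_a * 0.5)  # eq. (2)
    return delta1, omega1

def candidate_thresholds(nb_examples, mwogr_examples):
    """(delta2, omega2): minimum number of examples holding a relation and
    minimum weight for that relation to become a candidate for A."""
    delta2 = max(3, nb_examples * 0.1)       # eq. (3)
    omega2 = max(25, mwogr_examples * 0.5)   # eq. (4)
    return delta2, omega2

print(example_thresholds(50, 100), candidate_thresholds(40, 30))
```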
{
"text": "where mwogr({x i }) is the mean of weights of outgoing relations from the set of examples x i . If a term A is sharing at least \u03b4 1 relations, having a weight over \u03c9 1 , of the total of the relations R 1 , R 2 , . . . R p toward terms T 1 , T 2 , . . . T p with a group of examples x 1 , x 2 , . . . x n , we admit that this term has a degree of similarity strong enough with these examples. After building up a set of examples on which we can apply our abduction engine we proceed with the second part of the strategy. If we have at least \u03b4 2 examples x i holding a specific relation R k weighting over \u03c9 2 with a term B k , more formally R k = \u2329 t , w \u2265 \u03c9 2 , B k \u232a, we can suppose that the term A may hold this same relation R k with the same target term B k (figure 3). On figure 3, we simplified thresholds to 2 for illustrative purpose. So, to be selected, the examples x 1 ,x 2 , x 3 , . . . x n must have at least 2 common relations with A. A relation R 1\u2192q must be hold by at least 2 examples to be proposed as a potential relation for A. More clearly:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "x 1 R 1 \u2212 \u2212\u2212 \u2192 B 1 and x 2 R 1 \u2212 \u2212\u2212 \u2192 B 1 \u21d2 R 1 : 2 =\u21d2 propose A R 1 ? \u2212 \u2212\u2212 \u2192 B 1 x n R 2 \u2212 \u2212\u2212 \u2192 B 2 \u21d2 R 2 : 1 =\u21d2 do not propose this relation. x 1 R q \u2212 \u2212\u2212 \u2192 B q , x 3 R q \u2212 \u2212\u2212 \u2192 B q and x n R q \u2212 \u2212\u2212 \u2192 B q \u21d2 R q : 3 =\u21d2 propose A R q ? \u2212 \u2212\u2212 \u2192 B q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
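The counting rule (propose a relation held by at least δ2 examples) can be sketched as follows, assuming the selected examples are given as lists of (relation, target) pairs; the data and the `abduce` helper are illustrative only:

```python
from collections import Counter

# Hypothetical outgoing relations of the selected examples.
example_relations = {
    "x1": [("R1", "B1"), ("Rq", "Bq")],
    "x2": [("R1", "B1")],
    "x3": [("Rq", "Bq")],
    "xn": [("R2", "B2"), ("Rq", "Bq")],
}

def abduce(example_relations, delta2=2):
    """Count how many examples hold each (relation, target) pair and
    keep those held by at least delta2 examples as candidates for A."""
    counts = Counter()
    for rels in example_relations.values():
        counts.update(rels)
    return [rel for rel, n in counts.items() if n >= delta2]

print(abduce(example_relations))
```

Here (R1, B1) is held by 2 examples and (Rq, Bq) by 3, so both are proposed; (R2, B2), held by only 1 example, is not.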
{
"text": "For statistical filtering, we can act on the threshold (\u03b4 2 , \u03c9 2 ) as the minimum number of examples x i being R related with a target term B k . It is also possible to evaluate the weight of the abducted relation as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "w(A R k \u2212\u2192 B k ) = 1 nb R cd n,p,q i =1, j =1,k=1 3 w 1 w 2 w 3 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "where nbRcd is the number of candidate relations R to be proposed, and w1 = w(A −Rj→ Tj), w2 = w(xi −Rj→ Tj), w3 = w(xi −Rk→ Bk).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "This filtering parameters are adjustable according to the user's requirements, so it can fulfil various expectations. Constant values in threshold formulas have been determined empirically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abduction Filtering",
"sec_num": "4.2"
},
{
"text": "We made an experiment with a unique run of the deduction, induction and abduction engines over the lexical network. Contributors have either accepted or rejected a subset of those candidates during the normal course of their activity. This experiment is for an evaluation purpose only, as actually the system is running iteratively along with contributors and games. The experiment has been done with the parameters given previously, which are determined emprically as those maximizing recall and precision (over a very small subset of the JDM lexical network, around 1\u2030).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimentation",
"sec_num": "5"
},
{
"text": "We applied the inference engine on around 25 000 randomly selected terms having at least one hypernym or one hyponym and thus produced by deduction more than 1 500 000 inferences and produced by induction over 360 000 relation candidates. The threshold for filtering was set to a weight of 25. This value is relevant as when a human contributor proposed relation is validated by experts, it is introduced with a default weight of 25.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appliying Deductions and Inductions",
"sec_num": "5.1"
},
{
"text": "The transitive is-a (Table1) is not very productive which might seems surprising at first glance. In fact, the is-a relation is already quite populated in the network, and as such, fewer new relations can be inferred. The figures are inverted for some other relations that are not so well populated in the lexical network but still are potentially valid. The has-parts relation and the agent semantic role (the agent-1 relation) are by far the most productive types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appliying Deductions and Inductions",
"sec_num": "5.1"
},
{
"text": "Relation type / Proposed %: is-a (x is a type of y) 6.1; has-parts (x is composed of y) 25.1; holonym (y specific of x) 7.2; typical place (of x) 7.2; charac (x has characteristic y) 13.7; agent-1 (x can do y) 13.3; instr-1 (x instrument of y) 1.7; patient-1 (x can be y) 1; place-1 (x located in the place y) 9.8; place > action (y can be done in place x) 3.4; object > mater (x is made of y) 0.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation type",
"sec_num": null
},
{
"text": "Tables 2 and 3 present evaluations of the status of the inferences proposed by the inference engine through deduction and induction, respectively. Overall, 80-90% of the inferences are valid, with around 10% valid but not relevant (for instance dog --has-parts--> proton). We observe that the number of errors in premises is quite low; nevertheless, such errors can be easily corrected. Of course, not all possible errors are detected through this process. More interestingly, reconciliation identifies polysemous terms and refinements in 5% of the cases. Globally, false negatives (inferences voted false while being true) and false positives (inferences voted true while being false) are evaluated at less than 0.5%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation type",
"sec_num": null
},
{
"text": "For the induction process, the relation is-a is not obvious (a lexical network is not reductible to an ontology and multiple inheritance is possible). Result seems about 5% better than for the deduction process: inferences are valid for an overall of 80-95%. The error number is very low. The main difference with the deduction process is on errors due to polysemy which is lower with the induction process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation type",
"sec_num": null
},
{
"text": "To try to assess a baseline for those results, we compute the full closure of the lexical network, i.e. we produce iteratively all possible candidate relations until no more could be found, each candidate being considered as correct and participating to the process. We got more than 6 000 000 relations out of which 45% were wrong (evaluation on around 1 000 candidates randomly chosen).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation type",
"sec_num": null
},
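The full-closure baseline described above can be sketched as follows, assuming (with illustrative names and toy data, not the actual JDM engine) that the deduction scheme "A is-a B and B --R--> C entails A --R--> C" is applied to a fixed point, with every produced candidate treated as correct:

```python
# Sketch of the "full closure" baseline: apply the deduction scheme
# until no new relation is produced, feeding candidates back as facts.
def full_closure(relations):
    """relations: set of (source, rel_type, target) triples."""
    closed = set(relations)
    changed = True
    while changed:
        changed = False
        isa = [(a, b) for (a, r, b) in closed if r == "is-a"]
        for a, b in isa:
            for (src, r, c) in list(closed):
                if src == b:  # B --R--> C, so deduce A --R--> C
                    cand = (a, r, c)
                    if cand not in closed:
                        closed.add(cand)
                        changed = True
    return closed

net = {("shark", "is-a", "fish"), ("fish", "has-parts", "fin")}
print(sorted(full_closure(net)))
```

Without any filtering or human validation, such a closure cascades errors through each iteration, which explains the 45% error rate observed on this baseline.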
{
"text": "We applied systematically the abduction engine on the lexical items contained in the network, and produce 629 987 abducted relations out of which 137 416 were not already existing in the network. Those 137 416 are candidate relations concerning 10 889 distinct lexical entries, hence producing a mean of around 12 new relations per entry. The distribution of the proposed relations follows a power law, which is not totally surprising as the relation distribution in the lexical network is by itself governed by such a distribution. Those figures indicate that abduction seems to be still quite productive in terms of raw candidates, even not relying on ontological existing relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unleashing the Abductive Engine",
"sec_num": "5.2"
},
{
"text": "The table 4 presents the number of relations proposed by the inference engine through abduction. The different relation types are variously productive, and this is mainly due to the number of existing relations and the distribution of their type. The most productive relation is has-part and the least one is holo (holonym/whole). Correct relations represent around 80% of the relations that have been evaluated (around 5.6% of the total number of produced relations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unleashing the Abductive Engine",
"sec_num": "5.2"
},
{
"text": "One suprising fact, is that the 80% seem to be quite constant notwithstanding the relation type, the lowest value being 77% (for instr-1 which is the relation specifying what can be done with x as an instrument) and the highest being 85% (for action-place which is the relation associating for an action the typical locations where it can occur). The abduction process is not ontologically based, and hence does not rely on the generic (is-a) or specific (hyponym) relations, but on the contrary on any set of examples that seems to be alike the target term. The apparent stability of 80% correct abducted relations may be a positive consequence of relying on a set of examples, with a potentially irreductible of 20% wrong abducted relations. Figure 4 presents two types of data: (1) the percentage of correct abducted relations according to the number of examples required to produce the inference, and (2) the proportion between the produced relations and the total of 107 416 relations according to the minimal number of examples allowed. What can clearly be seen is that when the number of required examples is increased, the ratio of correct abductions increases accordingly, but the number of proposed relations dramaticaly falls. The number of abductions is an inverse power law of the number of examples required. At 3 examples, only 40% of the proposed relations are correct, and with a minimum of 6 examples, more than 3/4 of the proposals are deemed correct. The balanced F-score is optimal at the intersection of both curves, that is to say for at least 4 examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 744,
"end": 752,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Unleashing the Abductive Engine",
"sec_num": "5.2"
},
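A minimal sketch of abduction with a minimum-example threshold (the function name, data structures and toy network below are illustrative, not the actual JDM engine): relations held by at least `min_examples` of the terms similar to the target become candidates, so raising the threshold trades candidate volume for correctness.

```python
# Sketch of the abduction scheme with a minimum-example threshold.
from collections import Counter

def abduce(target_rels, example_rels, min_examples):
    """target_rels: set of (rel_type, target) pairs already held by A.
    example_rels: dict mapping each similar term x_i to its set of
    (rel_type, target) pairs. Returns candidate pairs for A supported
    by at least `min_examples` of the examples."""
    counts = Counter(pair for rels in example_rels.values() for pair in rels)
    return {pair for pair, n in counts.items()
            if n >= min_examples and pair not in target_rels}

examples = {
    "sparrow": {("agent-1", "fly"), ("agent-1", "sing")},
    "eagle": {("agent-1", "fly"), ("has-parts", "claw")},
    "swallow": {("agent-1", "fly"), ("agent-1", "sing")},
}
print(abduce(set(), examples, min_examples=2))
```

With `min_examples=2`, only relations shared by most bird examples survive; isolated relations such as ("has-parts", "claw") are filtered out, mirroring how a higher example count in Figure 4 improves correctness while shrinking the candidate set.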
{
"text": "In figure 5 , is showed the mean number of new relations during an iteration of the inference engine on abduction. Between two runs, users and validators are invited to accept or reject abducted relations. This process is done at their discretion and users may leave some propostions unvoted. Experiments showed that users are willing to validate strongly true relations and invalidate clearly false relations. Relations whose status may be difficult are more often left aside than other easiest proposals. The third run is the most productive with a mean of almost 20 new abducted relations. After 3 runs, the abductive process begins to be less productive by attrition of new possible candidates. Notice that the abduction process may, on subsequent runs, remove some previsouly done proposals and as such is not monotonous. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Unleashing the Abductive Engine",
"sec_num": "5.2"
},
{
"text": "Reconciliation in abduction is simpler than in deduction or induction, as the potential adverse effect of polysemy is counterbalanced by the statistical approach implemented by the large number of examples (when available). The reconciliation in the case of abduction is to determine if the wrong proposal has been produced logically considering the support examples. In 97% of the cases, the wrong abducted relation has been qualified as wrong but logical by voters or validators. For examples:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figures on Reconciliation",
"sec_num": "5.3"
},
{
"text": "\u2022 Boeing \u2212 \u2212\u2212\u2212\u2212 \u2192 sing *. All those wrong abducted relations given as examples above might have been correct. Considering the examples exploited to produce the candidates, in those cases there is no possible way to guess those relations are wrong. This is even reinforced by the fact that abduction does not rely on ontological relations, which in some cases could have avoided wrong abduction. However, abduction compared to induction and deduction, can be used on terms that do not hold ontological relations, either they are missing or they are not relevant (for verbs, instances...).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figures on Reconciliation",
"sec_num": "5.3"
},
{
"text": "We presented some issues in inferring new relations from existing ones to consolidate a lexicalsemantic network built with games and user contributions. New inferred relations are stored to avoid having to infer them again and again dynamically. To be able to enhance the network quality and coverage, we proposed an elicitation engine based on inferences (induction, deduction and abduction) and reconciliation. If an inferred relation is proven wrong, a reconciliation process is conducted in order to identify the underlying cause and solve the problem. The abduction scheme does not rely on the ontological relation (is-a) but merely on examples that are similarly close to the target term. Experi-ments showed that abduction is quite productive (compared to deduction and induction), and is stable in correctness. User evaluation showed that wrong abducted relations (around 20% of all abducted relations) are still logically sound and could not have been dismissed a priori. Abduction can conclusively be considered as a usefull and efficient tool for relation inference. The main difficulty relies in setting the various parameter in order to achieve a fragile tradeoff between an overrestrictive filter (many false negatives, resulting in information losses) and the opposite (many false postive, more human effort).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The elicitation engine we presented through schemas based on deduction, induction and abduction is an efficient error detector, a polysemy identifier but also a classifier by abduction. The actions taken during the reconciliation forbid an inference proven wrong or exceptional to be inferred again. Each inference scheme is supported by the two others, and if a given inference has been produced by more than one of these three schemas, it is almost surely correct. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Designing games with a purpose",
"authors": [
{
"first": "L",
"middle": [],
"last": "Von Ahn",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Dabbish",
"suffix": ""
}
],
"year": 2008,
"venue": "Communications of the ACM",
"volume": "51",
"issue": "8",
"pages": "58--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "von Ahn, L. and Dabbish, L. 2008. Designing games with a purpose. in Communications of the ACM, number 8, volume 51. p 58-67.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ontology-based inference for causal explanation. Integrated Computer-Aided Engineering",
"authors": [
{
"first": "P",
"middle": [],
"last": "Besnard",
"suffix": ""
},
{
"first": "M.-O",
"middle": [],
"last": "Cordier",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Moinard",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "15",
"issue": "",
"pages": "351--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Besnard, P. Cordier, M.-O., and Moinard, Y. 2008. Ontology-based inference for causal explanation. Integrated Computer-Aided Engineering , IOS Press, Amsterdam, Vol. 15 , No. 4, 351-367, 2008.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ontology-Based Inference Methods. CERIAS TR 2006-76",
"authors": [
{
"first": "O",
"middle": [],
"last": "Krachina",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Raskin",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krachina, O., Raskin, V. 2006. Ontology-Based Infer- ence Methods. CERIAS TR 2006-76, 6 p.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Making people play for Lexical Acquisition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lafourcade",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. SNLP 2007, 7th Symposium on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "13--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafourcade, M. 2007. Making people play for Lex- ical Acquisition. In Proc. SNLP 2007, 7th Sym- posium on Natural Language Processing. Pattaya, Thailande, 13-15 December. 8 p.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long Tail in Weighted Lexical Networks",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lafourcade",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joubert",
"suffix": ""
}
],
"year": 2012,
"venue": "proc of Cognitive Aspects of the Lexicon",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafourcade, M., Joubert, A. 2012. Long Tail in Weighted Lexical Networks. In proc of Cogni- tive Aspects of the Lexicon (CogAlex-III), COLING, Mumbai, India, December 2012.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Common consensus: a web-based game for collecting commonsense goals",
"authors": [
{
"first": "H",
"middle": [],
"last": "Lieberman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Teeters",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of IUI",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lieberman, H, Smith, D. A and Teeters, A 2007. Common consensus: a web-based game for col- lecting commonsense goals. In Proc. of IUI, Hawaii,2007.12 p .",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SemKey: A Semantic Collaborative Tagging System",
"authors": [
{
"first": "",
"middle": [],
"last": "Marchetti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tesconi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ronzano",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mosella",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Minutoli",
"suffix": ""
}
],
"year": 2007,
"venue": "Procs of WWW2007",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marchetti, A and Tesconi, M and Ronzano, F and Mosella, M and Minutoli, S. 2007. SemKey: A Se- mantic Collaborative Tagging System. in Procs of WWW2007, Banff, Canada. 9 p.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Open MindWord Expert: Creating large annotated data collections with web users help",
"authors": [
{
"first": "",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Chklovski",
"suffix": ""
}
],
"year": 2003,
"venue": "Workshop on Linguistically Annotated Corpora (LINC)",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalcea, R and Chklovski, T. 2003. Open MindWord Expert: Creating large annotated data collections with web users help.. In Proceedings of the EACL 2003, Workshop on Linguistically Annotated Cor- pora (LINC). 10 p.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Introduction to WordNet: an on-line lexical database",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Beckwith",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "K",
"middle": [
"J"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1990,
"venue": "International Journal of Lexicography",
"volume": "3",
"issue": "",
"pages": "235--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G.A. and Beckwith, R. and Fellbaum, C. and Gross, D. and Miller, K.J. 1990. Introduction to WordNet: an on-line lexical database. Interna- tional Journal of Lexicography. Volume 3, p 235- 244.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BabelNet: Building a very large multilingual semantic network",
"authors": [
{
"first": "",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "216--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navigli, R and Ponzetto, S. 2010. BabelNet: Build- ing a very large multilingual semantic network. in Proceedings of the 48th Annual Meeting of the As- sociation for Computational Linguistics, Uppsala, Sweden, 11-16 July 2010.p 216-225.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Construction d'un wordnet libre du fran\u00e7ais \u00e0 partir de ressources multilingues",
"authors": [
{
"first": "B",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Fier",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of TALN 2008",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sagot, B. and Fier, D. 2010. Construction d'un word- net libre du fran\u00e7ais \u00e0 partir de ressources multi- lingues. in Proceedings of TALN 2008, Avignon, France, 2008.12 p.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semi-Automatic Enrichment of Crowdsourced Synonymy Networks: The WISIG-OTH system applied to Wiktionary. Language Resources & Evaluation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Sajous",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Navarro",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gaume",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chudy",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "47",
"issue": "",
"pages": "63--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sajous, F., Navarro, E., Gaume, B,. Pr\u00e9vot, L. and Chudy, Y. 2013. Semi-Automatic Enrichment of Crowdsourced Synonymy Networks: The WISIG- OTH system applied to Wiktionary. Language Re- sources & Evaluation, 47(1), pp. 63-96.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Games with a Purpose for the Semantic Web",
"authors": [
{
"first": "K",
"middle": [],
"last": "Siorpaes",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hepp",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE Intelligent Systems",
"volume": "23",
"issue": "",
"pages": "50--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siorpaes, K. and Hepp, M. 2008. Games with a Pur- pose for the Semantic Web. in IEEE Intelligent Sys- tems, number 3, volume 23.p 50-60.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semantic taxonomy induction from heterogenous evidence",
"authors": [
{
"first": "R",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING/ACL",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Snow, R. Jurafsky, D., Y. Ng., A. 2006. Semantic tax- onomy induction from heterogenous evidence. in Proceedings of COLING/ACL 2006, 8 p.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Survey on Games for Knowledge Acquisition",
"authors": [
{
"first": "",
"middle": [],
"last": "Thaler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Siorpaes",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Simperl",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hofer",
"suffix": ""
}
],
"year": 2011,
"venue": "STI Technical Report",
"volume": "19",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thaler, S and Siorpaes, K and Simperl, E. and Hofer, C. 2011. A Survey on Games for Knowledge Acqui- sition. STI Technical Report, May 2011.19 p.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evaluation of OntoLearn, a methodology for Automatic Learning of Ontologies",
"authors": [
{
"first": "P",
"middle": [],
"last": "Velardi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Cucchiarelli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Neri",
"suffix": ""
}
],
"year": 2006,
"venue": "Ontology Learning and Population, Paul Buitelaar Philipp Cimmiano and Bernardo Magnini Editors",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Velardi, P. Navigli, R. Cucchiarelli, A. Neri, F. 2006. Evaluation of OntoLearn, a methodology for Auto- matic Learning of Ontologies. in Ontology Learn- ing and Population, Paul Buitelaar Philipp Cim- miano and Bernardo Magnini Editors, IOS press 2006).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "EuroWordNet: a multilingual database with lexical semantic networks",
"authors": [
{
"first": "P",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vossen, P. 2011. EuroWordNet: a multilingual database with lexical semantic networks. Kluwer Academic Publishers.Norwell, MA, USA.200 p.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Crowdsourcing Inference-Rule Evaluation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Zeichner",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Dagan",
"middle": [
"I"
],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "proc of ACL 2012",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeichner, N., Berant J., and Dagan I. 2012. Crowd- sourcing Inference-Rule Evaluation. in proc of ACL 2012 (short papers).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "For example, shark i s\u2212a \u2212 \u2212\u2212 \u2192 fish and shark has\u2212par t \u2212 \u2212\u2212\u2212\u2212 \u2192 jaw, thus we might expect that fish has\u2212par t"
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Abduction scheme with examples x i sharing relations with A and proposing new abducted relations."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Production of abducted relations and percentage of correctness according to examples number."
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Mean number of new relations relatively to runs in iterated abduction."
},
"TABREF0": {
"type_str": "table",
"num": null,
"text": "Global percentages of relations proposed per type for deduction and induction.",
"content": "<table><tr><td>Deduction</td><td colspan=\"2\">% valid</td><td/><td>% error</td><td/></tr><tr><td>Relation type</td><td>rlvt</td><td colspan=\"3\">\u00ac rlvnt prem excep</td><td>pol</td></tr><tr><td>is-a</td><td>76%</td><td>13%</td><td>2%</td><td>0%</td><td>9%</td></tr><tr><td>has-parts</td><td>65%</td><td>8%</td><td>4%</td><td>13%</td><td>10%</td></tr><tr><td>holonym</td><td>57%</td><td>16%</td><td>2%</td><td>20%</td><td>5%</td></tr><tr><td>typical place</td><td>78%</td><td>12%</td><td>1%</td><td>4%</td><td>5%</td></tr><tr><td>charac</td><td>82%</td><td>4%</td><td>2%</td><td>8%</td><td>4%</td></tr><tr><td>agent-1</td><td>81%</td><td>11%</td><td>1%</td><td>4%</td><td>3%</td></tr><tr><td>instr-1</td><td>62%</td><td>21%</td><td>1%</td><td>10%</td><td>6%</td></tr><tr><td>patient-1</td><td>47%</td><td>32%</td><td>3%</td><td>7%</td><td>11%</td></tr><tr><td>place-1</td><td>72%</td><td>12%</td><td>2%</td><td>10%</td><td>6%</td></tr><tr><td>place &gt; action</td><td>67%</td><td>25%</td><td>1%</td><td>4%</td><td>3%</td></tr><tr><td>object &gt; mater</td><td>60%</td><td>3%</td><td>7%</td><td>18%</td><td>12%</td></tr></table>",
"html": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Number of propositions produced by deduction and ratio of relations found as true or false.",
"content": "<table/>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Number of propositions produced by induction and ratio of relations found as true or false.",
"content": "<table><tr><td colspan=\"3\">Abduction #prop #eval (%)</td><td>True (%)</td><td>False (%)</td></tr><tr><td>is-a</td><td>7141</td><td>421 (5.9)</td><td colspan=\"2\">343 (81.5) 78 (18.5)</td></tr><tr><td>has-parts</td><td>26517</td><td>720 (2.7)</td><td colspan=\"2\">578 (80.3) 142 (19.7)</td></tr><tr><td>holo</td><td>1592</td><td>153 (9.6)</td><td>124 (81)</td><td>29 (18.9)</td></tr><tr><td>agent</td><td>7739</td><td>298 (3.9)</td><td colspan=\"2\">236 (79.2) 62 (20.8)</td></tr><tr><td>place</td><td>17148</td><td>304 (1.8)</td><td colspan=\"2\">253 (83.2) 51 (16.8)</td></tr><tr><td>instr</td><td>10790</td><td>431 (4)</td><td colspan=\"2\">356 (82.6) 75 (17.4)</td></tr><tr><td>charac</td><td>7443</td><td>319 (4.3)</td><td colspan=\"2\">251 (78.7) 68 (21.3)</td></tr><tr><td>agent-1</td><td>18147</td><td>955 (5.3)</td><td colspan=\"2\">780 (81.7) 175 (18.3)</td></tr><tr><td>instr-1</td><td>11867</td><td>886 (7.5)</td><td>682 (77)</td><td>204 (23)</td></tr><tr><td>place-1</td><td>14787</td><td>1106 (7.5)</td><td>896 (81)</td><td>210 (19)</td></tr><tr><td colspan=\"2\">place&gt;act 8268</td><td>270 (3.3)</td><td colspan=\"2\">214 (79.3) 56 (20.7)</td></tr><tr><td colspan=\"2\">act&gt;place 5976</td><td>170 (2.8)</td><td colspan=\"2\">145 (85.3) 25 (14.7)</td></tr><tr><td>Total</td><td colspan=\"2\">137416 6033 (4.3)</td><td>4858 (81)</td><td>1175 (19)</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "Number of propositions produced by abduction and ratio of relations found as true or false.",
"content": "<table/>",
"html": null
}
}
}
}