{
"paper_id": "R15-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:57:43.248702Z"
},
"title": "Automatic Acquisition of Artifact Nouns in French",
"authors": [
{
"first": "Xiaoqin",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "Laboratory",
"institution": "LDI University of Paris",
"location": {}
},
"email": ""
},
{
"first": "Pierre-Andr\u00e9",
"middle": [],
"last": "Buvet",
"suffix": "",
"affiliation": {
"laboratory": "Laboratory",
"institution": "LDI University of Paris",
"location": {}
},
"email": "pierreandre.buvet@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This article describes a method which allows acquiring artifact nouns in French automatically by extracting predicateargument structures. Two strategies are presented: the supervised strategy and the semi-supervised strategy. In the supervised method, the semantic classes of artifact nouns are recognized by identifying the predicate-argument structures with the syntactic patterns of the given predicates. In the semi-supervised method, the extraction of predicate-argument structures is carried out from a semantic class of artifact nouns given in advance. The predicate candidates obtained from extracted predicate-argument structures are then intersected. Next, the syntactic patterns of predicates are automatically learned by probabilistic calculation. With the acquired predicates and the learned syntactic patterns, more artifact nouns are identified.",
"pdf_parse": {
"paper_id": "R15-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "This article describes a method which allows acquiring artifact nouns in French automatically by extracting predicateargument structures. Two strategies are presented: the supervised strategy and the semi-supervised strategy. In the supervised method, the semantic classes of artifact nouns are recognized by identifying the predicate-argument structures with the syntactic patterns of the given predicates. In the semi-supervised method, the extraction of predicate-argument structures is carried out from a semantic class of artifact nouns given in advance. The predicate candidates obtained from extracted predicate-argument structures are then intersected. Next, the syntactic patterns of predicates are automatically learned by probabilistic calculation. With the acquired predicates and the learned syntactic patterns, more artifact nouns are identified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The difficulties for automatic acquisition of terms might come from the linguistic techniques, the computational techniques or the limits of the natural language processing theory. Nowadays, many studies have been conducted for term extraction. This article presents a method for automatic acquisition of artifact nouns on the basis of syntacticsemantic analysis of predicates. Artifact nouns are the nouns of the artificial entities produced intentionally by human beings, with a view to a specific function. The automatic acquisition of artifact nouns is for completing the dictionary of semantic classes of laboratory LDI. There are two strategies for realizing this method: a supervised strategy in which the predicate-argument structures are ex-tracted by the syntactic patterns of the given predicates and a semi-supervised strategy developed on the basis of the supervised strategy. The semisupervised strategy consists of two steps. In the first step, it predicts which predicates are relevant by a probabilistic calculation. In the second step, it appeals to the supervised strategy. This article is organized as follows. Section 2 states the related work on term extraction in recent years. Section 3 presents the data model used in the proposed method. Section 4 explains in detail the proposed method including the semantic-syntactic analysis of appropriate predicates of artifact nouns. Section 5 presents the experiment results and the analysis of the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the as-built systems of term extraction, the part of linguistic model is often limited to morphosyntactic descriptions and the part of statistical model, to a large extent, depends on the statistical knowledge. TERMINO (Lauriston, 1994) and LEXTER (Bourigault, 1996) , two well-known semi-automatic systems of term extraction, are based on syntactic descriptions. The method of Hearst (1992) and the method of Snow et al. (2004) take advantage of morpho-syntactic patterns for automatically recognizing hyponyms and hypernyms. The statistical methods for term extraction can be based on Markov model (Jiang, 2012) , co-occurrence, or vector support, etc. ANA (Enguehard, 1993 ) is a statistical method which is based on co-occurrence. Morlane-Hond\u00e8re (2012) has presented a series of distributional methods realized with data mining techniques, such as mutual information, measures of association, loglikelihood or naive bays (Ibekwe-sanjuan, 2007) . The method of Meilland and Bellot (2003) which extracts terms from annotated corpora, ACABIT (Daille, 1994) and the strategy of cooperation of many term extractors of Alecu et al. (2012) are all the hybrid methods which combine linguistic model and statistical model. Furthermore, Seeker and Kuhn (2013) has proposed a method which identifies the morpho-syntactic patterns by statistical dependency Parsing, and Quiniou et al. (2012) has brought forward an approach aiming to identify the linguistic patterns via data mining techniques.",
"cite_spans": [
{
"start": 222,
"end": 239,
"text": "(Lauriston, 1994)",
"ref_id": "BIBREF10"
},
{
"start": 251,
"end": 269,
"text": "(Bourigault, 1996)",
"ref_id": "BIBREF1"
},
{
"start": 381,
"end": 394,
"text": "Hearst (1992)",
"ref_id": "BIBREF7"
},
{
"start": 413,
"end": 431,
"text": "Snow et al. (2004)",
"ref_id": "BIBREF16"
},
{
"start": 603,
"end": 616,
"text": "(Jiang, 2012)",
"ref_id": "BIBREF9"
},
{
"start": 662,
"end": 678,
"text": "(Enguehard, 1993",
"ref_id": "BIBREF4"
},
{
"start": 738,
"end": 760,
"text": "Morlane-Hond\u00e8re (2012)",
"ref_id": "BIBREF13"
},
{
"start": 929,
"end": 951,
"text": "(Ibekwe-sanjuan, 2007)",
"ref_id": null
},
{
"start": 981,
"end": 994,
"text": "Bellot (2003)",
"ref_id": "BIBREF11"
},
{
"start": 1047,
"end": 1061,
"text": "(Daille, 1994)",
"ref_id": "BIBREF3"
},
{
"start": 1121,
"end": 1140,
"text": "Alecu et al. (2012)",
"ref_id": "BIBREF0"
},
{
"start": 1235,
"end": 1257,
"text": "Seeker and Kuhn (2013)",
"ref_id": "BIBREF15"
},
{
"start": 1366,
"end": 1387,
"text": "Quiniou et al. (2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Data Model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The predicate is a linguistic unit defined as a language form of semantic relation between two entities. The entities linked by this relation are arguments. The actualizers are the linguistic elements which enable to register the predicates and arguments in grammatically correct statements. They can be grammatical units (such as prepositions, determiners...) or lexical units, such as modifying adjectives, adverbs, auxiliary verbs, support verbs, etc. The predicates semantically dominate the arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicates, Arguments and Actualizers",
"sec_num": "3.1"
},
{
"text": "The predicates can be divided into verbal predicates, nominal predicates, adjective predicates, prepositional predicates and adverbial predicates in the conception of \"uses of predicates\" of Buvet (2009) . The variations between the uses of predicate are morphosyntactic or interpretative. The interpretations of the uses result from a set of properties: type of state, type of action, processive aspect and stative aspect. A predicate can have one or more uses, for example, for the predicate n\u00e9gocier (negotiate), n\u00e9gocier (negotiate) is its verbal use, n\u00e9gociation (negotiation) is its nominal use and n\u00e9gociable (negotiable) is its adjective use.",
"cite_spans": [
{
"start": 191,
"end": 203,
"text": "Buvet (2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uses of Predicate",
"sec_num": "3.2"
},
{
"text": "The appropriate predicates have a number of relatively limited semantic classes of arguments. This character of appropriate predicates allows predicting the semantic class to which their arguments belong. An appropriate predicate in a specific sense can define a semantic class of arguments. Nevertheless, the polysemy of most of the appropriate predicates necessitates delimiting the semantic class of arguments by gathering many appropriate predicates of one semantic class. For example, for the predicate conduire (dive/take/lead), it can be used in the following senses: conduire mon enfant\u00e0 l'\u00e9cole (drive my child to school), conduire une voiture (drive a car) or conduire une entreprise (lead a company). The polysemy of conduire (drive/take/lead) prevents from isolating the semantic class of transport. However, with another appropriate predicate r\u00e9parer (repair), we can predict that the arguments which can appear after both conduire (drive/lead) and r\u00e9parer (repair) belong to the semantic class of transport. A set of appropriate predicates that allows delimiting a semantic class of arguments is defined as the definitional appropriate predicates of this semantic class of arguments (Buvet, 2009; Mejri, 2009) . The definitional appropriate predicates characterize the semantics of their class of arguments.",
"cite_spans": [
{
"start": 1197,
"end": 1210,
"text": "(Buvet, 2009;",
"ref_id": "BIBREF2"
},
{
"start": 1211,
"end": 1223,
"text": "Mejri, 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appropriate Predicates and Appropriate Relation",
"sec_num": "3.3"
},
{
"text": "The corpora used for the method are composed of texts coming from about ten French websites (e.g., http://www.forum-auto.com/marques/index.htm, http://geekandfood.fr/blog/, etc.). The websites are selected around various themes: automobile, household appliances and decoration, cooking, beauty, fashion, health, etc. The chosen texts include the comments, the discussions on forum and the articles on the blog. The volume of the corpora reaches 22,858 Ko. They comprise 3,754,334 words. The texts of different themes occupy about the same proportion in the corpora. The texts of the different genres (the comments, the discussions on the forum and the articles on the blog) also occupy about the same proportion respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "4.1"
},
{
"text": "In the proposed method, both the preprocessing and the extraction of predicate-argument structures are carried out with local grammars through Unitex. With the integrated linguistic resources (such as Dela, Delac, etc.), Unitex makes it possible to represent a local grammar in the form of finite state automaton.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Work Tool: Unitex",
"sec_num": "4.2"
},
{
"text": "In the supervised method, the predicate-argument structures are recognized automatically from a set of predicates given in advance. A series of syntactic patterns are established on the basis of the syntactic-semantic analysis of the appropriate predicates. The obtained arguments are then intersected for choosing the appropriate arguments of the given predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Method",
"sec_num": "4.3"
},
{
"text": "The constituents of multi-word expressions are often misrecognized by computer as the constituents of other syntactic structures, for example, in the sentence Une fois achet\u00e9 mon nouveau manteau, je suis rentr\u00e9\u00e0 la maison (once my new coat bought, I returned to home), fois achet\u00e9 (time bought) is often misrecognized as a noun phrase by computer with the syntactic pattern N+Adj. Nevertheless, fois (time) is the constituent of the multi-word expression une fois (once). To solve this problem, the following strategy is adopted: we appeal to the dictionary Delac in Unitex for labelling adjective multiword expressions, adverbial multi-word expressions, verbal multi-word expression and prepositional multi-word expressions ; these expressions are then replaced by the corresponding morphosyntactic label (like <ADV>, <ADJ>, <V> and <PREP>). Thus, the multiword expression une fois achet\u00e9 becomes <ADV> achet\u00e9 after the preprocessing. The given predicates are labelled by the tool Unitex considering the different uses of each predicate. The morphosyntactic disambiguation of predicates depending on context is conducted at the same time. Thus, the given predicates are labeled by identifying the corresponding verbal phrases with the syntactic patterns such as, avoir+\u00e9t\u00e9+ADV+Vpp, se+faire+V, aller+\u00eatre+ADV+Vpp, se+\u00eatre+ADV+Vpp, avoir+Vpp, Vpp+Det+N, Vpr+DET+N,.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.3.1"
},
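The pattern-labelling step above can be sketched as simple sequence matching over a POS-tagged sentence. This is a minimal illustration, not the authors' Unitex local grammars: the tag inventory, the tagged sentence and the `match_patterns` helper are all assumptions for the sketch, and the pattern names follow the paper's notation.

```python
# Sketch: matching verbal-phrase patterns such as avoir+Vpp or Vpp+Det+N
# against a tagged token sequence (tags here mix lemmas and POS labels,
# as the paper's patterns do).
PATTERNS = {
    "avoir+Vpp": ["avoir", "Vpp"],
    "se+faire+V": ["se", "faire", "V"],
    "Vpp+Det+N": ["Vpp", "Det", "N"],
}

def match_patterns(tags, patterns=PATTERNS):
    """Return (pattern_name, start_index) for every occurrence of a pattern."""
    hits = []
    for name, pat in patterns.items():
        for i in range(len(tags) - len(pat) + 1):
            if tags[i:i + len(pat)] == pat:
                hits.append((name, i))
    return hits

# e.g. "il a répar\u00e9 la voiture" tagged (illustratively) as:
tags = ["Pro", "avoir", "Vpp", "Det", "N"]
print(match_patterns(tags))  # [('avoir+Vpp', 1), ('Vpp+Det+N', 2)]
```

A real implementation would compile such patterns into finite-state automata, as Unitex does, rather than scanning naively.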
{
"text": "For some lexical units of which the parts of speech are often used as reference for the morphosyntactic disambiguation of other lexical units, if they have multiple parts of speech, their entries of the lesser-used parts of speech in Dela are eliminated. For example, to decide a lexical unit is a noun or not depends on whether the lexical unit follows an article or not while some articles in French have more than one morphosyntactic interpretations (e.g., un (a) can be an article or a noun). Thus, the entry of un (a) as noun considered less used is eliminated in the preprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.3.1"
},
{
"text": "In French, the nominal distribution of an appropriate predicate can be situated in the position of subject, object complement (direct or indirect: the indirect object is introduced by the preposition in French), circumstantial complement of location or circumstantial complement of means. The syntactic position of nominal distribution often changes with the structural transformation of sentences (e.g., from an active sentence to a passive sentence). The analysis of syntactic-semantic distribution of appropriate predicates of artifact nouns is based on the elementary sentences of active form. The elementary sentences are the sentences containing only one conjugated verb. The complex sentences containing more than one conjugated verb can be obtained from a set of elementary sentences by the linguistic technique, i.e. transformation (Harris, 1976; Gross, 1986 ).",
"cite_spans": [
{
"start": 841,
"end": 855,
"text": "(Harris, 1976;",
"ref_id": "BIBREF6"
},
{
"start": 856,
"end": 867,
"text": "Gross, 1986",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Extraction of Predicate-argument Structures",
"sec_num": "4.3.2"
},
{
"text": "According to the syntactic position of nominal distribution of appropriate predicates, the appropriate predicates can be divided into four classes: the first class contains the appropriate predicates whose object complements (the object complement corresponds to the verb complement in English) are always artifact nouns; the second class contains the appropriate predicates whose object complements can be artifact nouns but whose circumstantial complements of means (it corresponds to the prepositional complement in English) are always artifact nouns; the third class includes the appropriate predicates whose object complements have less possibilities to be artifact nouns but whose circumstantial complements of location are always artifact nouns; the last class includes the appropriate predicates whose object complements are never artifact nouns but whose circumstantial complements (means or location) are always artifact nouns. Each class can be subdivided according to the syntactic features of the appropriate predicates. Unitex. However, as the modifiers of a nominal phrase can be added without limits (especially when the modifiers are relative clauses), it is difficult to describe all types of constructions of sentence by the local grammar. In addition, an apposition often has a flexible position in one sentence. It can almost be inserted next to any noun phrase of a sentence. In the proposed method, the nominal phrases of more than five grams and the appositions are not taken into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Extraction of Predicate-argument Structures",
"sec_num": "4.3.2"
},
{
"text": "To intersect the arguments of different predicates is for finding the common arguments of the semantic class of predicates given in advance. As a semantic class of arguments is defined by a set of definitional appropriate predicates, the more an argument is shared by the given predicates, the more probably this argument belongs to the semantic class of the given predicates. The process of intersecting the arguments is shown in Figure 1 . Pred i (i=1, 2, 3 ) refers to a predicate and Arg j (j=1, 2, 3) means an argument. The grey parts are the intersection of arguments. In fact, in our method, not only the common arguments of the given predicates are selected, but also the arguments shared by most of the given predicates. The number of different predicates that co-occur with an argument is noted as the intersecting frequency of this argument. For example, in Figure 1 , the Figure 1 : Intersecting the predicates intersecting frequency of Arg2, Arg3, Arg8, and Arg17 is 4 because they are shared by all the four predicates and the intersecting frequency of Ar1, Arg4, Arg7, and Arg9 is 2 since they are shared by two predicates (Pred1 and Pred3).",
"cite_spans": [],
"ref_spans": [
{
"start": 431,
"end": 440,
"text": "Figure 1",
"ref_id": null
},
{
"start": 870,
"end": 878,
"text": "Figure 1",
"ref_id": null
},
{
"start": 885,
"end": 893,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intersecting the Arguments",
"sec_num": "4.3.3"
},
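The intersecting step above amounts to counting, for each argument, how many distinct predicates it co-occurs with, then keeping arguments above a frequency threshold. A minimal sketch, with hypothetical predicate/argument data and a `min_freq` parameter of our own naming (the paper tunes this threshold experimentally in Section 5):

```python
from collections import Counter

def intersect_arguments(pred_args, min_freq=2):
    """pred_args maps each given predicate to the set of arguments extracted
    with it; keep each argument whose intersecting frequency (number of
    distinct predicates it co-occurs with) reaches min_freq."""
    freq = Counter(arg for args in pred_args.values() for arg in set(args))
    return {arg: n for arg, n in freq.items() if n >= min_freq}

# Toy data in the spirit of Figure 1 (argument sets are illustrative)
pred_args = {
    "conduire": {"voiture", "camion", "enfant"},
    "r\u00e9parer": {"voiture", "camion", "lampe"},
    "garer": {"voiture", "camion"},
}
print(intersect_arguments(pred_args, min_freq=3))
```

Here only voiture and camion survive, since they are shared by all three predicates; enfant and lampe each co-occur with a single predicate and are discarded.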
{
"text": "In the semi-supervised method, the predicateargument structures are identified with a semantic class of artifact nouns by local grammars. Then, all the predicates in the structures are extracted. Next, the predicates are intersected for determining which predicates belong to the semantic class of the given artifact nouns. The syntactic patterns associated with each predicate are also extracted. A probabilistic calculation is conducted in order to choose the most appropriate syntactic pattern for each predicate. With the selected predicates and their syntactic patterns, the nominal distribution of appropriate predicates can be located and the artifact nouns of the semantic class are finally acquired after intersecting the artifact nouns. With the obtained artifact nouns, the processes can be iterated for getting more artifact nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Method",
"sec_num": "4.4"
},
{
"text": "In the semi-supervised method, the predicateargument relations are extracted from a set of arguments. However, with a set of appropriate predicates given in advance, the syntacticsemantic distribution can be predicted, but from the arguments, it is not certain to predict which syntactic-semantic relation is associated with the given argument. Thus, the following solution is adopted: all the possible syntactic relations between the artifact nouns and their appropriate predicates are firstly predicated; a probabilistic calculation is then carried out to the predicted syntactic relations in order to choose the appropriate syntactic pattern for each predicate. If the semantic distribution of artifact nouns can be situated in the position of noun complement without preposition introducing it (which concerns the direct complement of objet) and in the position of noun complement introduced by preposition (which concerns the indirect complement of objet, the circumstantial complement of location and the circumstantial complement of means), there should be three necessary constituents for forming a syntactic pattern allowing predicting a predicate-argument relation: verb, noun complement without preposition and noun complement introduced by preposition. With these three constituents, four combinations for forming the desired syntactic patterns can be obtained: V+NAF, V+NAF+prep+NAF, V+Nc+prep+NAF and V+prep+NAF (V means verb, prep refers to preposition and NAF indicates the artifact nouns). Other syntactic patterns (such as V+ADV+NAF, NAF+be+V+prep+NAF, NAF+V+prep+NAF+NAF or V+ADV+ADV+prep+NAF+NAF) are derived from these four basic syntactic patterns through the transformation of natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Predicate-argument Structures from a Semantic Class of Arguments",
"sec_num": "4.4.1"
},
{
"text": "With the established syntactic patterns, a series of graphs is constructed and the predicate-argument structures are labelled. In addition, the predicates, the arguments and the syntactic patterns associated with each predicate-argument structures are also labelled and extracted for the following processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Predicate-argument Structures from a Semantic Class of Arguments",
"sec_num": "4.4.1"
},
{
"text": "All the predicate-argument structures recognized by predicting the possible syntactic relations don't represent the real syntactic relation between a certain predicate and its arguments. For example, in\u00e9teindre la lampe de poche (turn off the flashlight),\u00e9teindre (turn off ) can be identified by the syntactic pattern V+NAF+prep+NAF or V+NAF with the given artifact noun lampe (lamp) or poche (pocket); however, V+NAF+prep+NAF does not represent the syntactic-semantic distribution of the predicate\u00e9teindre (turn off ). V+NAF+prep+NAF is misrecognized as the syntactic pattern of the predicate\u00e9teindre (turn off ) because of the preposition de (of ) which is a constituent of the compound noun lampe de poche (flashlight) rather than a preposition introducing a circumstantial complement. Thus, a probabilistic calculation of syntactic patterns is necessary for choosing the appropriate syntactic pattern for each predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculation of Syntactic Patterns",
"sec_num": "4.4.2"
},
{
"text": "The syntactic pattern by which a predicateargument structure is identified is recoded in the labels like s=vactif _gnaf, s=vactif _gn_de_gnaf,..., etc. The code vactif (vpassif ) indicates the active (passive) form of the verb. The code gn means nominal phrase, and gnaf refers to a nominal phrase of artifact noun. The probability of having a direct object complement, P (cod), is calculated by the formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculation of Syntactic Patterns",
"sec_num": "4.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (cod) = c(gnaf ) + c(gn) c(s)",
"eq_num": "(1)"
}
],
"section": "Calculation of Syntactic Patterns",
"sec_num": "4.4.2"
},
{
"text": "c(gnaf ) implies the frequency of occurrence of the syntactic patterns containing gnaf in the position of direct object complement. For example, s=vactif _gnaf, s=gnaf _va andgnaf _vpassif are all the syntactic patterns including gnaf in the position of direct object complement. c(gn) indicates the frequency of occurrence of the syntactic patterns containing gn in the position of direct object complement. c(s) indicates the frequency of occurrence of all the syntactic patterns associated with a predicate. The probability of having a direct object complement which is always artifact noun, P (codnaf ), is calculated according to the formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculation of Syntactic Patterns",
"sec_num": "4.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (codnaf ) = c(gnaf ) c(s)",
"eq_num": "(2)"
}
],
"section": "Calculation of Syntactic Patterns",
"sec_num": "4.4.2"
},
{
"text": "and the probability of having an object complement introduced by a preposition, P (codi), is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculation of Syntactic Patterns",
"sec_num": "4.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (codi) = c(prep) c(s)",
"eq_num": "(3)"
}
],
"section": "Calculation of Syntactic Patterns",
"sec_num": "4.4.2"
},
{
"text": "c(prep) refers to the frequency of occurrence of the syntactic patterns containing a preposition. For each predicate, if its P (codnaf ) is greater than P (cod)-P (codnaf ), the direct object complement of this predicate is considered to be always artifact nouns;if P (cod) equals to zero, this predicate is not considered to have the direct object complement; if P (prep) is greater than 0.12, this predicate is considered as a predicate having an object complement introduced by preposition which is always artifact nouns. The threshold for P (prep) is decided after several tests and it allows obtaining a more accurate syntactic information for each predicate. According to these probabilities about the syntactic positions, the most appropriate syntactic pattern is chosen for each predicate from the four basic syntactic pattern candidates. Finally, the extracted predicates are classified into four groups according to their syntactic-semantic patterns: the group of V+NAF, the group of V+NAF+prep+NAF, the group of V+Nc+prep+NAF, and the group of V+prep+NAF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculation of Syntactic Patterns",
"sec_num": "4.4.2"
},
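Formulas (1)-(3) and the decision rule can be sketched in a few lines. This is our reading of the paper, not the authors' code: each extracted structure for a predicate is reduced to three boolean flags (`cod_gnaf`: direct object is a gnaf; `cod_gn`: direct object is a plain gn; `prep`: the pattern contains a prepositional complement), and the exact mapping of the probabilities to the four basic pattern groups is an assumption.

```python
def pattern_probs(observations):
    """Formulas (1)-(3): P(cod), P(codnaf), P(codi) for one predicate,
    from the counts c(gnaf), c(gn), c(prep) over its c(s) observed patterns."""
    n = len(observations)
    c_gnaf = sum(o["cod_gnaf"] for o in observations)
    c_gn = sum(o["cod_gn"] for o in observations)
    c_prep = sum(o["prep"] for o in observations)
    return (c_gnaf + c_gn) / n, c_gnaf / n, c_prep / n

def choose_pattern(p_cod, p_codnaf, p_codi, prep_threshold=0.12):
    """Decision rule from the text: direct object always an artifact noun
    if P(codnaf) > P(cod) - P(codnaf); prepositional artifact-noun
    complement if P(codi) exceeds the 0.12 threshold."""
    cod_naf = p_cod > 0 and p_codnaf > (p_cod - p_codnaf)
    prep_naf = p_codi > prep_threshold
    if cod_naf and prep_naf:
        return "V+NAF+prep+NAF"
    if cod_naf:
        return "V+NAF"
    if p_cod > 0 and prep_naf:
        return "V+Nc+prep+NAF"
    return "V+prep+NAF"

# Toy predicate: 8 structures with an artifact-noun direct object,
# 2 with a plain NP object plus a preposition.
obs = ([{"cod_gnaf": 1, "cod_gn": 0, "prep": 0}] * 8
       + [{"cod_gnaf": 0, "cod_gn": 1, "prep": 1}] * 2)
print(choose_pattern(*pattern_probs(obs)))  # V+NAF+prep+NAF
```

With these counts P(cod)=1.0, P(codnaf)=0.8 and P(codi)=0.2, so both conditions hold and the predicate falls into the V+NAF+prep+NAF group.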
{
"text": "The aim of intersecting the predicates is to find out common predicates of the given artifact nouns. The more a predicate is shared by the given arguments, the more probably it belongs to the semantic class of the given arguments. For a predicate, the number of different artifact nouns which cooccur with this predicate is noted as the intersecting frequency of this predicate. The threshold for intersecting the predicates is set at 2 after several tests. This threshold allows giving a better result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersecting the Predicates",
"sec_num": "4.4.3"
},
{
"text": "In the result obtained after intersecting, many basic predicates occupy the top place of the list. The basic predicates have a large semantic spectrum. They are not appropriate predicates of arti-fact nouns, but their nominal distribution cover the semantic class of artifact nouns. For the appropriate predicates which belong to the semantic class of given arguments, their frequencies of occurrence in the extracted predicate-argument structures (FC) and their frequencies of occurrence in the total corpus (FT) are more or less similar. On the contrary, for the basic predicates, there is a great disparity between their frequencies of occurrence in the extracted predicate-argument structures and their frequencies of occurrence in the total corpus, since the basic predicates have a larger and more general semantic spectrum. On the basis of this occurrence disparity, some of the basic predicates can be eliminated. The occurrence disparity (Ecart) is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Elimination of Basic Predicates",
"sec_num": "4.4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Ecart = F T \u2212 F C F T",
"eq_num": "(4)"
}
],
"section": "Elimination of Basic Predicates",
"sec_num": "4.4.4"
},
{
"text": "After several tests, we decided the threshold as 0.978 which gives a better result. If the Ecart supasses the threshold, the corresponding predicate is considered as basic predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Elimination of Basic Predicates",
"sec_num": "4.4.4"
},
{
"text": "With the filtered appropriate predicates and the learned syntactic-semantic patterns, a script is developed to automatically write the graphs for identifying the predicate-argument structures and labelling the arguments. Likewise, all the predicateargument structures are extracted. The acquired arguments are then intersected. In this way, more artifact nouns are acquired from a small set of artifact nouns given in advance. The processes can be iterated for obtaining more artifact nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of Supervised Method",
"sec_num": "4.4.5"
},
{
"text": "For the supervised method experiment, about one hundred appropriate predicates of artifact nouns are chosen, and a series of syntactic patterns are established on the basis of the syntactic-semantic distribution of the appropriate predicates. The semi-supervised method is tested with three semantic classes of arguments: container, cooker and road transport. For each semantic class, a list of arguments, including about twenty artifact nouns, is manually established. The evaluation is carried out by appealing to a dictionary of artifact nouns (including 13,400 entries) developed in the laboratory. The manual annotation is added because the dictrionary is not complet. Firstly, the Figure 2 : Experiment of threshold artifact nouns in the corpus are labeled by the dictrionary and the manual annotation. The result is considered as standard. Then, our method is applied for labelling the artifact nouns and another result is obtained. The result of our method is compared with the standard in order to calculate the precision, the recall and the F-measure. For the supervised method, the threshold for intersecting the arguemnts is respectively set at 4, 5, 6, 7 and 8. Then, the precision, the recall and the F-measure are respectively calculated. The Evaluation results obtained with different thresholds are shown in Table 2 . Figure 2 shows the comparision of the different evaluation results (F-measures) obtained with different thresholds. It is seen that the highest F-measure can be obtained when the threshold equals to 6. For the semi-supervised method, the experiment is firstly carried out with the artifact nouns of semantic calss \"container\". The processes of the semi-supervised method are iterated five times. The results obtained after each iteration are respectively evaluated. The threshold for intersecting the arguments is firstly set at 3. The result obtained by the semi-supervised method includes the grain terms. 
Table 3 shows the evaluation results obtained with different number of iterations, and Figure 3 shows the comparision of the evaluation resaults. It is found that the result obtained after three iterations has the highest F-measure. After four iterations, the precision falls down rapidly Figure 3 : Experiment of iteration with semantic class \"container\" and the recall reaches a relatively stable value. The noise is brought by the nouns of other semantic classes obtained in each iteration. Then, the threshold is set at 2, 4, 5 and 6 respectively. The same experiment presented above is repeated for each threshold. Figure 4 shows a comparision of the highest F-measures that can be obtained with different thresholds. For the other two semantic classes, the same experiment and evaluation are conducted. Finally, we choose 3, 2 and 3 as the threshold for intersecting the arguments of the semantic class \"container\", \"cooker\" and \"road transport\" respectively and select 3, 3 and 4 as the number of iterations for the semantic class \"container\", \"cooker\" and \"road transport\" respectively. Table 4 shows the evaluation results of each semantic class with the defined threshold and number of iterations. The different quantity of apppropriate predicates of different semantic class in the copus makes the performance of our method different. ",
"cite_spans": [],
"ref_spans": [
{
"start": 687,
"end": 695,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1325,
"end": 1332,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1335,
"end": 1343,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1943,
"end": 1950,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 2030,
"end": 2038,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2232,
"end": 2240,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2563,
"end": 2571,
"text": "Figure 4",
"ref_id": "FIGREF0"
},
{
"start": 3038,
"end": 3045,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiment and Evaluation",
"sec_num": "5"
},
{
"text": "The method in this article is based on the analysis of the syntactic-semantic distribution of appropriate predicates of artifact nouns. Its advantage is that it allows locating not only the position of an artifact noun in each sentence but also the position of a nominal distribution composed of a semantic class of artifact nouns. A class of definitional appropriate predicates characterizes a semantic class of arguments and makes it possible to handle polysemy. In addition, the identification of the nominal distributions of appropriate predicates also permits the identification of neologisms, misspelled artifact nouns and abbreviations. Although the performance of the proposed method depends on the accuracy and completeness of the established local grammars, it yields lexical resources with a relatively high precision, and the obtained semantic-class lexical resources can contribute to dialogue systems, natural language generation and other natural language processing applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "La \"multi-extraction\" comme strat\u00e9gie d'acquisition optimis\u00e9e de ressources terminologiques et non terminologiques",
"authors": [
{
"first": "B",
"middle": [
"P"
],
"last": "Alecu",
"suffix": ""
},
{
"first": "Izabella",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Renahy",
"suffix": ""
}
],
"year": 2012,
"venue": "Actes de la 19e conf\u00e9rence sur le Traitement Automatique des Langues Naturelles",
"volume": "",
"issue": "",
"pages": "511--518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. P. Alecu, Izabella Thomas, and Julie Renahy. 2012. La \"multi-extraction\" comme strat\u00e9gie d'acquisition optimis\u00e9e de ressources terminologiques et non ter- minologiques. In Actes de la 19e conf\u00e9rence sur le Traitement Automatique des Langues Naturelles, pages 511-518. Grenoble.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Lexter: a natural language tool for terminology extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bourigault",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 7th EURALEX International Congress",
"volume": "",
"issue": "",
"pages": "771--779",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bourigault. 1996. Lexter: a natural language tool for terminology extraction. In Proceedings of the 7th EURALEX International Congress, pages 771- 779. G\u00f6teborg.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Des mots aux emplois : la repr\u00e9sentation lexicographique des pr\u00e9dicats",
"authors": [
{
"first": "P.-A",
"middle": [],
"last": "Buvet",
"suffix": ""
}
],
"year": 2009,
"venue": "Le Fran\u00e7ais Moderne",
"volume": "77",
"issue": "1",
"pages": "83--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.-A. Buvet. 2009. Des mots aux emplois : la repr\u00e9sentation lexicographique des pr\u00e9dicats. Le Fran\u00e7ais Moderne, 77(1):83-96.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Study and implementation of combined techniques for automatic extraction of terminology",
"authors": [
{
"first": "B",
"middle": [],
"last": "Daille",
"suffix": ""
}
],
"year": 1994,
"venue": "The Balancing Act: Combining Symbolic and Statistical Approaches to Language, Proceedings of the Workshop of the 32nd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "29--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Daille. 1994. Study and implementation of com- bined techniques for automatic extraction of termi- nology. In The Balancing Act: Combining Symbolic and Statistical Approaches to Language, Proceed- ings of the Workshop of the 32nd Annual Meeting of the ACL, pages 29-36. Las Cruces, New Mexico, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Acquisition de terminologie \u00e0 partir de gros corpus",
"authors": [
{
"first": "Chantal",
"middle": [],
"last": "Enguehard",
"suffix": ""
}
],
"year": 1993,
"venue": "Actes Informatique & Langue Naturelle",
"volume": "",
"issue": "",
"pages": "373--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chantal Enguehard. 1993. Acquisition de terminolo- gie\u00e0 partir de gros corpuss. In Actes Informatique & Langue Naturelle, pages 373-384. Nantes.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Grammaire transformationnelle du fran\u00e7ais : Syntaxe du verbe",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gross",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Gross. 1986. Grammaire transformationnelle du fran\u00e7ais : Syntaxe du verbe ; Syntaxe du nom. Can- til\u00e8ne.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Notes du cours de syntaxe",
"authors": [
{
"first": "Z",
"middle": [
"S"
],
"last": "Harris",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. S. Harris. 1976. Notes du cours de syntaxe. Le seuil.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th conference on Computational linguistics",
"volume": "2",
"issue": "",
"pages": "539--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the 14th conference on Computational linguistics, volume 2, pages 539-545. Stroudsburg, PA, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fouille de textes",
"authors": [
{
"first": "Fidelia",
"middle": [],
"last": "Ibekwe-Sanjuan",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fidelia Ibekwe-sanjuan. 2007. Fouille de textes. Lavoisier.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Information extraction from text",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2012,
"venue": "Mining Text Data",
"volume": "",
"issue": "",
"pages": "18--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Jiang. 2012. Information extraction from texte. In Charu C. Aggarwal and ChengXiang Zhai, editors, Mining Text Data, pages 18-22. Springer. US.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic recognition of complex terms: problems and the termino solution",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lauriston",
"suffix": ""
}
],
"year": 1994,
"venue": "Terminology",
"volume": "1",
"issue": "1",
"pages": "147--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Lauriston. 1994. Automatic recognition of complex terms: problems and the termino solution. Terminol- ogy, 1(1):147-170.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Extraction automatique de terminologie \u00e0 partir de libell\u00e9s textuels courts",
"authors": [
{
"first": "Jean-Claude",
"middle": [],
"last": "Meilland",
"suffix": ""
},
{
"first": "Patrice",
"middle": [],
"last": "Bellot",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Claude Meilland and Patrice Bellot. 2003. Ex- traction automatique de terminologie\u00e0 partir de li- bell\u00e9s textuels courts. In Geoffrey Williams, edi- tor, Linguistique de corpus. Presses Universitaires de Rennes. France.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Le mot, probl\u00e9matique th\u00e9orique. Le Fran\u00e7ais Moderne",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mejri",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "77",
"issue": "",
"pages": "68--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Mejri. 2009. Le mot, probl\u00e9matique th\u00e9orique. Le Fran\u00e7ais Moderne, 77(1):68-82.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Une approche linguistique de l'\u00e9valuation des ressources extraites par analyse distributionnelle automatique",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Morlane-Hond\u00e8re",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Morlane-Hond\u00e8re. 2012. Une approche lin- guistique de l'\u00e9valuation des ressources extraites par analyse distributionnelle automatique. Ph.D. thesis, Universit\u00e9 Toulouse le Mirail. France.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "What about sequential data mining techniques to identify linguistic patterns for stylistics?",
"authors": [
{
"first": "Solen",
"middle": [],
"last": "Quiniou",
"suffix": ""
},
{
"first": "Peggy",
"middle": [],
"last": "Cellier",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Charnois",
"suffix": ""
},
{
"first": "Dominique",
"middle": [],
"last": "Legallois",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "166--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Solen Quiniou, Peggy Cellier, Thierry Charnois, and Dominique Legallois. 2012. What about sequential data mining techniques to identify linguistic patterns for stylistics? In Computational Linguistics and In- telligent Text Processing, pages 166-177. Heidel- berg.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Morphological and syntactic case in statistical dependency parsing",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Seeker",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "1",
"pages": "23--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Seeker and Jonas Kuhn. 2013. Morphologi- cal and syntactic case in statistical dependency pars- ing. Computational Linguistics, 39(1):23-55.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning syntactic patterns for automatic hypernym discovery",
"authors": [
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1297--1305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rion Snow, Daniel Jurafsky, and Y. Ng Andrew. 2004. Learning syntactic patterns for automatic hyper- nym discovery. In Advances in Neural Informa- tion Processing Systems, pages 1297-1305. British Columbia.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Experiment of threshold",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>Classes</td><td>Appropriate Predicates</td><td>Syntactic-semantic Distribution</td></tr><tr><td>Class1</td><td/><td/></tr><tr><td colspan=\"2\">Class_1\u00e9teindre (turn off), inventer (invent), etc.</td><td>V+NAF</td></tr><tr><td>Class_1a</td><td>tirer (pull), retirer (remove), appuyer (press), etc.</td><td>V+dessus/dessous/sur...+NAF</td></tr><tr><td>Class_1b</td><td>jouer (play)</td><td>V+\u00e0/de+NAF</td></tr><tr><td>Class2</td><td/><td/></tr><tr><td>Class_2</td><td>r\u00e9curer (scrub), r\u00e9parer (repair), tracter (tow), etc.</td><td>V+de/avec/par+NAF</td></tr><tr><td>Class_2a</td><td>d\u00e9couper (cut out), fouiller (dig), d\u00e9crasser (clean up), etc.</td><td>V+NAF/Nc+de/avec/par+NAF</td></tr><tr><td colspan=\"2\">Class_2b\u00e9quiper (equip), orner (decorate), etc.</td><td>V+NAF/Nc+de+NAF</td></tr><tr><td>Class3</td><td/><td/></tr><tr><td>Class_3</td><td>ranger (arrange), installer (install), contenir (contain), etc.</td><td>V+NAF/Nc+sous/devant/sur/derri\u00e8re...+NAF</td></tr><tr><td>Class_3a</td><td>transformer (transform)</td><td>V+NAF/Nc+en+NAF</td></tr><tr><td>Class_3b</td><td>connecter (connect)</td><td>V+NAF/Nc+\u00e0+NAF</td></tr><tr><td>Class4</td><td/><td/></tr><tr><td>Class_4</td><td>verser (pour), enregistrer (record), etc.</td><td>V+Nc+dans+NAF</td></tr><tr><td>Class_4a</td><td>peigner (comb), maquiller (make up), farder (disguise), etc.</td><td>V+Nc+avec/par/de+NAF</td></tr><tr><td>Class_4b</td><td>nourrir (nourish), alimenter (feed)</td><td>V+Nc+\u00e0+NAF</td></tr><tr><td>Class_4c</td><td>afficher (put up), placarder (placard), etc.</td><td>V+Nc+sur+NAF</td></tr></table>",
"text": "lists all the classes that we made according to the syntactic-semantic distribution of appropriate predicates. For each class, some examples of predicates and the corresponding syntactic patterns are given. In the formulas of the syntactic-semantic distribution, V means verb, NAF indicates the artifact nouns and Nc refers to the nouns of other semantic classes. Many other syntactic patterns are constructed from the basic syntactic patterns presented above by taking linguistic transformations into account. A series of graphs is established on the basis of the established syntactic patterns, and the predicate-argument structures are extracted through the tool",
"type_str": "table",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"text": "Evaluation of supervised method",
"type_str": "table",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Semantic classes Precision</td><td>Recall</td><td>F-measure</td></tr><tr><td>Road transport</td><td colspan=\"3\">62.46% 58.53% 60.43%</td></tr><tr><td>Cooker</td><td colspan=\"3\">70.14% 76.87% 73.35%</td></tr><tr><td>Container</td><td colspan=\"3\">81.34% 81.02% 81.20%</td></tr></table>",
"text": "Evaluation of iteration with semantic class \"container\"",
"type_str": "table",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null
}
}
}
}