| { |
| "paper_id": "S15-1030", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:38:28.782217Z" |
| }, |
| "title": "Discovering Hypernymy Relations using Text Layout", |
| "authors": [ |
| { |
| "first": "Jean-Philippe", |
| "middle": [], |
| "last": "Fauconnier", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Mouna", |
| "middle": [], |
| "last": "Kamel", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "kamel@irit.fr" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Hypernymy relation acquisition has been widely investigated, especially because taxonomies, which often constitute the backbone structure of semantic resources are structured using this type of relations. Although lots of approaches have been dedicated to this task, most of them analyze only the written text. However relations between not necessarily contiguous textual units can be expressed, thanks to typographical or dispositional markers. Such relations, which are out of reach of standard NLP tools, have been investigated in well specified layout contexts. Our aim is to improve the relation extraction task considering both the plain text and the layout. We are proposing here a method which combines layout, discourse and terminological analyses, and performs a structured prediction. We focused on textual structures which correspond to a well defined discourse structure and which often bear hypernymy relations. This type of structure encompasses titles and subtitles , or enumerative structures. The results achieve a precision of about 60%.", |
| "pdf_parse": { |
| "paper_id": "S15-1030", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Hypernymy relation acquisition has been widely investigated, especially because taxonomies, which often constitute the backbone structure of semantic resources are structured using this type of relations. Although lots of approaches have been dedicated to this task, most of them analyze only the written text. However relations between not necessarily contiguous textual units can be expressed, thanks to typographical or dispositional markers. Such relations, which are out of reach of standard NLP tools, have been investigated in well specified layout contexts. Our aim is to improve the relation extraction task considering both the plain text and the layout. We are proposing here a method which combines layout, discourse and terminological analyses, and performs a structured prediction. We focused on textual structures which correspond to a well defined discourse structure and which often bear hypernymy relations. This type of structure encompasses titles and subtitles , or enumerative structures. The results achieve a precision of about 60%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The hypernymy relation acquisition task is a widely studied problem, especially because taxonomies, which often constitute the backbone structure of semantic resources like ontologies, are structured using this type of relations. Although this task has been addressed in literature, most of the publications report analyses based on the written text only, usually at the phrase or sentence level.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, a written text is not merely a set of words or sentences. When producing a document, a writer may use various layout means, in addition to strictly linguistics devices such as syntactic arrangement or rhetorical forms. Relations between textual units that are not necessarily contiguous can thus be expressed thanks to typographical or dispositional markers. Such relations, which are out of reach of standard NLP tools, have been studied within some specific layout contexts. Our aim is to improve the relation extraction task by considering both the plain text and the layout. This means (1) identifying hierarchical structures within the text using only layout, (2) identifying relations carried by these structures, using both lexico-syntactic and layout features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Such an approach is deemed novel for at least two reasons. It combines layout, discourse and terminological analyses to bridge the gap between the document layout and lexical resources. Moreover, it makes a structured prediction of the whole hierarchical structure according to the set of visual and discourse properties, rather than making decisions only based on parts of this structure, as usually performed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The main strength of our approach is its applicability to different document formats as well to several domains. It should be highlighted that encyclopedic, technical or scientific documents, which are often analyzed for building semantic resources, are most of the time strongly structured. Our approach has been implemented for the French language, for which only few resources are currently available. In this paper we focus on specific textual structures which share the same discourse properties and that are expected to bear hypernymy relations. They encompass for instance titles/sub-titles, or enumerative structures.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The paper is organized as follows. Some related works about hypernymy relation identification are reported in section 2. Section 3 presents the theoretical framework on which the proposed approach is based. Sections 4 and 5 respectively describe transitions from the text layout to its discourse representation and from this discourse structure to the terminological structure. Finally we draw conclusions and propose some perspectives.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The task of extracting hypernymy relations (it may also be denoted as generic/specific, taxonomic, is-a or instance-of relations) is critical for building semantic resources and for semantic content authoring. Several parameters concerning corpora may affect the methods used for this task: the natural language quality (carefully written or informal), the textual genre (scientific, technical documents, newspapers, etc.), technical properties (corpus size, format), the level of precision of the resource (thesaurus, lightweight or full-fledged ontology), the degree of structuring, etc. This task may be carried out by using the proper text and/or external pre-existing resources. Various methods for exploiting plain text exist using techniques such as regular expressions (also known as lexicosyntactic patterns) (Hearst, 1992), classification using supervised or unsupervised learning (Snow et al., 2004; Alfonseca and Manandhar, 2002) , distributional analysis (Lenci and Benotto, 2012) or Formal Concepts Analysis (Cimiano et al., 2005) . In the Information Retrieval area, the relevant terms are extracted from documents and organized into hierarchies (S\u00e1nchez and Moreno, 2005) .", |
| "cite_spans": [ |
| { |
| "start": 891, |
| "end": 910, |
| "text": "(Snow et al., 2004;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 911, |
| "end": 941, |
| "text": "Alfonseca and Manandhar, 2002)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 968, |
| "end": 993, |
| "text": "(Lenci and Benotto, 2012)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1022, |
| "end": 1044, |
| "text": "(Cimiano et al., 2005)", |
| "ref_id": null |
| }, |
| { |
| "start": 1161, |
| "end": 1187, |
| "text": "(S\u00e1nchez and Moreno, 2005)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Works on the document structure and on the discourse relations that it conveys have been carried out by the NLP community. Among these are the Document Structure Theory (Power et al., 2003) , and the DArt bio system (Bateman et al., 2001 ). These approaches offer strong theoretical frameworks, but they were only implemented from a text generation point of view.", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 189, |
| "text": "(Power et al., 2003)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 216, |
| "end": 237, |
| "text": "(Bateman et al., 2001", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "With regard to the relation extraction task using layout, two categories of approaches may be distinguished. The first one encompasses approaches exploiting documents written in a markup language. The semantics of these tags and their nested structure is used to build semantic resources. For instance, collection of XML documents have been analyzed to build ontologies (Kamel and Aussenac-Gilles, 2009) , while collection of HTML or MediaWiki documents have been exploited to build taxonomies (Sumida and Torisawa, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 370, |
| "end": 403, |
| "text": "(Kamel and Aussenac-Gilles, 2009)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 494, |
| "end": 521, |
| "text": "(Sumida and Torisawa, 2008)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The second category gathers approaches exploiting specific documents or parts of documents, for which the semantics of the layout is strictly defined. Let us mention dictionaries and thesaurus (Jannink and Wiederhold, 1999) or specific and well localized textual structures such as category field (Chernov et al., 2006; Suchanek et al., 2007) or infoboxes (Auer et al., 2007) from Wikipedia pages. In some cases, these specific textual structures are also expressed thanks to a markup language. All these works implement symbolic as well as machine learning techniques.", |
| "cite_spans": [ |
| { |
| "start": 193, |
| "end": 223, |
| "text": "(Jannink and Wiederhold, 1999)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 297, |
| "end": 319, |
| "text": "(Chernov et al., 2006;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 320, |
| "end": 342, |
| "text": "Suchanek et al., 2007)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 356, |
| "end": 375, |
| "text": "(Auer et al., 2007)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our approach is similar to the one followed by Sumida and Torisawa (2008) which analyzes a structured text according to the following steps:", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 73, |
| "text": "Sumida and Torisawa (2008)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(1) they represent the document structure from a limited set of tags (headings, bulleted lists, ordered lists and definition lists), (2) they link two tagged strings when the first one is in the scope of the second one, and (3) they use lexico-syntactic and layout features for selecting hypernymy relations, with the help of a machine learning algorithm. Some attempts have been made for improving these results (Oh et al., 2009; Yamada et al., 2009) . However our work differs in two points: we aimed to be more generic by proposing a discourse structure of layout that can be inferred from different document formats, and we propose to find out the relation arguments (hypernym-hyponym term pairs) by analyzing propositional contents. Prior to describing the implemented processes, the underlying principles of our approach will be reported in the next section.", |
| "cite_spans": [ |
| { |
| "start": 413, |
| "end": 430, |
| "text": "(Oh et al., 2009;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 431, |
| "end": 451, |
| "text": "Yamada et al., 2009)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We rely on principles of discourse theories and on knowledge models for respectively formalizing text layout and identifying hypernymy relations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Underlying principles of our approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Several discourse theories exist. Their starting point lies in the idea that a text is not just a collection of sentences, but it also includes relations between all these sentences that ensure its coherence (Mann and Thompson, 1988; Asher and Lascarides, 2003) . Discourse analysis aims at observing the discourse coherence from a rhetorical point of view (the intention of the author) or from a semantic point of view (the description of the world). A discourse analysis is a three step process: splitting the text into Discourse Units (DU), ensuring the attachment between DUs, and then labeling links between DUs with discourse relations. Discourse relations may be divided into two categories: nucleus-satellite (or subordinate) relations which link an important argument to an argument supporting background information, and multi-nuclear (or coordinate) relations which link arguments of equal importance. Most of discourse theories acknowledge that a discourse is hierarchically structured thanks to discourse relations.", |
| "cite_spans": [ |
| { |
| "start": 208, |
| "end": 233, |
| "text": "(Mann and Thompson, 1988;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 234, |
| "end": 261, |
| "text": "Asher and Lascarides, 2003)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse analysis of the layout", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Text layout supports a large part of semantics and participates to the coherence of the text; it thus contributes to the elaboration of the discourse. Therefore, we adapted the discourse analysis to treat the layout, according to the following principles:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse analysis of the layout", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "-a DU corresponds to a visual unit (a bloc); -two units sharing the same role (title, paragraph, etc.) and the same typographic and dispositional markers are linked with a multinuclear relation; otherwise, they are linked with a nuclear-satellite relation. An example 1 of document from Wikipedia and the tree which results from the discourse analysis of its layout is given (Figure 1 ). In the following figures, we represent nucleus-satellite relations with solid lines and multi-nuclear relations with dashed lines.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 375, |
| "end": 384, |
| "text": "(Figure 1", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discourse analysis of the layout", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "1 http://fr.wikipedia.org/wiki/Red\u00e9centralisation d'Internet", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse analysis of the layout", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We are currently interested in discourse structures displaying the following properties:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse analysis of the layout", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "-n DUs are linked with multi-nuclear relations; -one of these coordinated DU is linked to another DU with a nucleus-satellite relation. Figure 2 gives a representation of such a discourse structure according to the Rhetorical Structure Theory (Mann and Thompson, 1988) . Although there is only one explicit nucleussatellite relation, this kind of structure involves n implicit nucleus-satellite relations (between DU 0 and DU i (2 \u2264 i \u2264 n)). Indeed, from a discourse point of view, if a DU j is subordinated to a DU i , then all DU k coordinated to DU j , are subordinated to DU i . As mentioned above, this kind of discourse structure encompasses textual structures such as titles/sub-titles and enumerative structures which are frequent in structured documents, and which often convey hypernymy relation. In that context, the hypernym is borne by the DU 0 and each DU i (1 \u2264 i \u2264 n) bears at least one hyponym.", |
| "cite_spans": [ |
| { |
| "start": 243, |
| "end": 268, |
| "text": "(Mann and Thompson, 1988)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 136, |
| "end": 144, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discourse analysis of the layout", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Hypernymy relation identification is carried out in two stages: specifying if the relation is hypernymic and, if appropriate, identifying its arguments. The first stage relies on linguistic regularities denoting a hypernymy relation, regularities which are expressed thanks to lexical, syntactic, typographical and dispositional clues. The second stage is based on a graph representation. Rather than independently identifying links between the hypernym and each potential hyponym, we take advantage from the fact that writers use the same syntactic and visual skills (recognized by a textual parallelism) for expressing knowledge units of equal rhetorical importance. Generally, these salient units are semantically linked and belong to a same lexical field. Thus, we represent each discourse structure of interest bearing a hypernymy relation as a directed acyclic graph (DAG), where the nodes are terms and the edges are possible relations between them. This DAG is decomposed into layers, each layer i gathering nodes corresponding to terms of a given DU i (0 \u2264 i \u2264 n). Each node of a layer i (0 \u2264 i \u2264 (n \u2212 1)) is connected by directed edges to all nodes of the layer i + 1. A root node is added on the top of the DAG. We weight the edges according to the inverse similarity of terms they link. Thus, the terms in the lower-cost path starting from the root and ending at the last layer are maximally cohesive. A flatter representation does not allow this structured prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge models for hypernymy relation identification", |
| "sec_num": "3.2" |
| }, |
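The layered-DAG construction and lowest-cost path search described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the term lists and the similarity function are hypothetical stand-ins for the paper's term extraction and classifier-based weights, and a plain uniform-cost search replaces the A* variant used later in the paper.

```python
import heapq

def build_layered_dag(dus_terms, similarity):
    """Build the layered DAG: dus_terms[i] holds the candidate terms of DU_i.
    Every node of layer i is linked to every node of layer i + 1, and edges
    are weighted by inverse similarity (1 - sim), so cohesive pairs are cheap."""
    root = ("ROOT", -1)
    edges = {root: [(0.0, (t, 0)) for t in dus_terms[0]]}
    for i in range(len(dus_terms) - 1):
        for t in dus_terms[i]:
            edges[(t, i)] = [(1.0 - similarity(t, u), (u, i + 1))
                             for u in dus_terms[i + 1]]
    return root, edges, len(dus_terms) - 1

def lowest_cost_path(root, edges, last_layer):
    """Uniform-cost search from the root to any node of the last layer."""
    frontier = [(0.0, root, [])]
    done = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node[1] == last_layer:
            return path, cost
        if node in done:
            continue
        done.add(node)
        for w, nxt in edges.get(node, []):
            heapq.heappush(frontier, (cost + w, nxt, path + [nxt[0]]))
    return None, float("inf")

# Toy data: a hypothetical similarity table standing in for the real model.
pairs = {("protocols", "email"), ("email", "im")}
sim = lambda a, b: 1.0 if (a, b) in pairs else 0.1
dus = [["protocols", "title"], ["email", "foo"], ["im", "bar"]]
root, edges, last = build_layered_dag(dus, sim)
path, cost = lowest_cost_path(root, edges, last)
```

The returned path picks one term per DU, so the prediction is made over the whole structure at once rather than edge by edge.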
| { |
| "text": "To elicit discourse structures from text layout, the system detects visuals units and labels them with their role (paragraph, title, footnote, etc.) in the text. Then, it links the labeled units using discourse relations (nucleus-satellite or multi-nuclear) in order to produce a discourse tree. We are currently able to process two types of documents: documents written in a markup language and documents in PDF format. It is obvious that tags of markup languages both delimit blocs and give their role. Getting the visual structure is thus straightforward. Conversely, PDF documents do not benefit from such tags. So we used the LAPDF-Text tool (Ramakrishnan et al., 2012) which is based on a geometric analysis for detecting blocs, and we have implemented a machine learning method for labeling these blocs. The features include typographical markers (size of fonts, emphasis markers, etc.) and dispositional one (margins, position in page, etc.).", |
| "cite_spans": [ |
| { |
| "start": 647, |
| "end": 674, |
| "text": "(Ramakrishnan et al., 2012)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From text layout to its discourse representation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For labeling relations, we used an adapted version of the shift-reduce algorithm as (Marcu, 1999) did. We thus obtain a dependency tree representing the discourse structure of the text layout. We evaluate this process on a corpus of PDF documents (documents written in a markup language pose no problem). Results are good since we obtain an accuracy of 80.46% for labeling blocs, and an accuracy of 97.23% for labeling discourse relations (Fauconnier et al., 2014) . The whole process has been implemented in the LaToe 2 tool.", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 97, |
| "text": "(Marcu, 1999)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 439, |
| "end": 464, |
| "text": "(Fauconnier et al., 2014)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From text layout to its discourse representation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Finally, the extraction of discourse structures of interest may be done easily by means of tree patterns (Levy and Andrew, 2006) .", |
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 128, |
| "text": "(Levy and Andrew, 2006)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From text layout to its discourse representation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "5 From layout discourse structure to terminological structure", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From text layout to its discourse representation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We wish to elicit possible hypernymy relations from identified discourse structures of interest. This task involves a two-step process. The first step consists in specifying the nature of the relation borne by these structures. The second step aims at identifying the related terms (the relation arguments). These steps have been independently evaluated on an annotated corpus, while the whole system has been evaluated on another not annotated corpus. Corpora and evaluation protocols are described in the next section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From text layout to its discourse representation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The annotated corpus includes 166 French Wikipedia pages corresponding to urban and environmental planning. 745 discourse structures of interest were annotated by 3 annotators (2 students in Linguistics, and an expert in knowledge engineering) according to a guideline. The annotation task for each discourse structure of interest has consisted in annotating the nucleus-satellite relation as hypernymy or not, and when required, in annotating the terms involved in the relation. For the first stage, we have calculated a degree of inter-annotator agreement (Fleiss et al., 1979) and obtained a kappa of 0.54. The second stage was evaluated as a named entity recognition task (Tateisi et al., 2000) for which we have obtained an F-measure of 79.44. From this dataset, 80% of the discourse structures of interest were randomly chosen to constitute the development set, and the remaining 20% were used for the test set. The tasks described below were tuned on the development set using a k-10 cross-validation. The evaluation is done using the precision, the recall and the F-measure metrics.", |
| "cite_spans": [ |
| { |
| "start": 558, |
| "end": 579, |
| "text": "(Fleiss et al., 1979)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 676, |
| "end": 698, |
| "text": "(Tateisi et al., 2000)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora and evaluation protocols", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "A second evaluation for the entire system was led on two corpora respectively made of Wikipedia pages from two domains: Transport and Computer Science. For each domain, we have randomly selected 400 pages from a French Wikipedia Dump (2014-09-28). Since those copora are not manually annotated, we have only reported the precision.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora and evaluation protocols", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Hypernymy relations present lexical, syntactic, typographical and dispositional regularities in the text. The recognition of these relations is thus based on the analysis of these regularities within the two DUs explicitly linked by the nucleussatellite relation. We consider this problem as a binary classification one: each discourse structure is assigned to either the Hypernymy-Structure class or the nonHypernymy-Structure class. The Hypernymy-Structure class encompasses discourse structures with a nucleus-satellite relation bearing a hypernymy, whereas the nonHypernymy-Structure one gathers all others discourse structures. In the example given in figure 1, the discourse structures constituted of DUs {3,4,5} and {6,7,8,9,10} would be classified as Hypernymy-Structure, while this constituted of DUs {2,3,6,11,12} would be assigned to the nonHypernymy-Structure class.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualifying the nucleus-satellite relation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "For this purpose, we applied feature functions (summarized in table 1) in order to map the two DUs linked by the explicit nucleus-satellite relation into a numerical vector which is submitted to a classifier. The feature functions were defined according to background knowledge and were selected on the basis of a Pearson's correlation. We have compared two types of classifiers: a linear one which generalizes well, but may produce more misclassifications when data distribution presents a large spread, and a non-linear one which may lead to a model separating well the training set but with an overfitting risk. We respectively used a Maximum Entropy classifier (MaxEnt) (Berger et al., 1996) and a Support Vector Machine (SVM) with a Gaussian kernel (Cortes and Vapnik, 1995) .", |
| "cite_spans": [ |
| { |
| "start": 674, |
| "end": 695, |
| "text": "(Berger et al., 1996)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 754, |
| "end": 779, |
| "text": "(Cortes and Vapnik, 1995)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualifying the nucleus-satellite relation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The morphological and lexical information used were obtained from the French dependency parser Talismane (Urieli, 2013) . For the classifiers, we have used the OpenNLP 3 library for the MaxEnt and the LIBSVM implementation of the SVM 4 . This task has been evaluated against a majority baseline which better reflects the reality because of the asymmetry of the relation distribution. Table 2 : Results for qualifying the relation", |
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 119, |
| "text": "(Urieli, 2013)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 384, |
| "end": 391, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Qualifying the nucleus-satellite relation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Regarding the F-measure metric, the difference between the MaxEnt and the SVM is not significant. We observe that the MaxEnt achieves the best precision, while the SVM reaches the best recall. These results are not surprising since the SVM decision boundary seems to be biased by outliers, thus increasing the false positive rate on unseen data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualifying the nucleus-satellite relation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We have now to identify terms linked by the hypernymy relation. As previously mentioned we build a DAG reflecting all possible relations between terms of the DUs, to find the lower-cost path which represents the most cohesive sequence of terms. If we consider the discourse structure constituted of DUs {6,7,8,9,10} in figure 1, the retrieved path from the corresponding DAG (figure 3) would be [\"protocoles de communication interop\u00e9rables\" (interoperable communication protocols), \"courrie\u0155 electronique\" (email), \"messagerie instantan\u00e9e\" (instant messaging), \"partage de fichiers en pair\u00e0 pair\" (peer-to-peer file sharing), \"tchat en salons\" (chat room)]. Then, an example of hypernymy relation would be \"courrier\u00e9lectronique\" (email) is a kind of \"protocoles de communication interop\u00e9rables\" (interoperable communication protocols).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The cost of an edge is defined using the following function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "cost(< T j i , T k i+1 >) = 1 \u2212 p(y|T j i , T k i+1 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "where T j i is the j-th term of DU i . The probability assigned to the outcome y measures the likeliness that both terms are linked. This probability is conditioned by lexical and dispositional clues. Since it is expected that terms involved in the relation share the same lexical field, we also consider the cosine similarity between the term vectors. All those clues are mapped into a numerical vector using feature functions summarized in table 3. We built two models based on supervised probabilistic classifiers since characteristics of links between a hypernym and a hyponym are different from those between two hyponyms. The first model considers only the edges between layer 0 and layer 1 (hypernym-hyponym link), whereas the second one is dedicated to the edges of remaining layers (hyponym-hyponym link).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "For this step, we used ACABIT (Daille, 1996) and YaTeA (Aubin and Hamon, 2006) for extracting terms. The cosine similarity is based on a distributional model constructed with the word2vec tool (Mikolov et al., 2013) and the French corpus FrWac (Baroni et al., 2009) . We have learned the models using a Maximum Entropy classifier.", |
| "cite_spans": [ |
| { |
| "start": 30, |
| "end": 44, |
| "text": "(Daille, 1996)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 55, |
| "end": 78, |
| "text": "(Aubin and Hamon, 2006)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 193, |
| "end": 215, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 244, |
| "end": 265, |
| "text": "(Baroni et al., 2009)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
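The cosine component of this edge weighting can be illustrated in plain Python. This is a toy sketch: in the paper the term vectors come from a word2vec model trained on FrWac, and the cosine is only one feature of a MaxEnt model whose probability defines the cost; here the cost is reduced to 1 minus the cosine, and the vectors are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical distributional vectors; real ones come from word2vec.
vec = {
    "email":   [0.9, 0.1, 0.0],
    "im":      [0.8, 0.2, 0.1],
    "tractor": [0.0, 0.1, 0.9],
}

def edge_cost(t1, t2):
    # Inverse-similarity weighting: cohesive term pairs get cheap edges,
    # so they are favored by the lowest-cost path search.
    return 1.0 - cosine(vec[t1], vec[t2])
```

With these vectors, the edge between the two messaging terms is much cheaper than the edge toward the unrelated term, which is exactly what makes the retained path lexically cohesive.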
| { |
| "text": "For computing the lower-cost path, we use an A* search algorithm because it can handle large search space with an admissible heuristic. The estimated cost of a path P , a sequence of edges from the root to a given term, is defined by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "f(P) = g(P) + h(P)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The function g(P) calculates the real cost along the path P and is defined by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "g(P) = \u2211_{<T_i^j, T_{i+1}^k> \u2208 P} cost(<T_i^j, T_{i+1}^k>)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The heuristic h(P) is a greedy function which extends the path by picking the minimal-cost edge at each of the next d layers, and returns the cost of that extension:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "h(P) = g(l_d(P))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The function l_d(P) is defined recursively: l_0(P) is the empty path. Assume l_d(P) is defined and T_{i_d}^{j_d} is the last node reached on the path formed by the concatenation of P and l_d(P); then we define:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "l_{d+1}(P) = l_d(P) . <T_{i_d}^{j_d}, T_{i_d+1}^m>", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "where m is the index of the term in layer i_d + 1 reached by the lowest-cost edge:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "m = argmin_{k < |layer_{i_d+1}|} cost(<T_{i_d}^{j_d}, T_{i_d+1}^k>)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "This heuristic is admissible by definition. We set d = 3, a good tradeoff between the number of operations per step and the number of iterations of the A* search.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In order to evaluate this task, we compare it to a baseline and two vector-based approaches. The baseline works on the assumption that two related terms belong to the same window of words; it then takes the last term of layer 0 as the hypernym, and the first term of each layer i (1 \u2264 i \u2264 n) as a hyponym. The two other strategies use cosine similarity (computed with 200- and 500-dimensional vectors, respectively) to estimate the costs. Table 4 reports the results for term recognition. The vector-based strategies yield interesting precision, which seems to confirm a correlation between the lexical cohesion of terms and their likelihood of being involved in a relation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 439, |
| "end": 446, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "To perform additional evaluations, we define the score of a path as the mean of its edge costs, and we filter the results using a list of threshold values: only the paths with a score lower than a given threshold are returned. Figure 4 shows the Precision-Recall curves over the whole list of threshold values.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 227, |
| "end": 235, |
| "text": "Figure 4", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Identifying the terms linked by the hypernymy relation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In this section, we report the results of the whole process applied to two corpora made of Wikipedia pages from two domains: Transport and Computer Science. For each of them, we applied a discourse analysis of the layout and extracted the hypernym-hyponym pairs. This extraction was done with a Maximum Entropy classifier, which showed good precision on the two tasks described above. The retrieved pairs were ranked according to the score of the path they belong to. For the two domains, around 300 pairs were retrieved, with a precision of about 60% for the highest threshold. We identified the main sources of error. The most common arises from nested discourse structures: in this case, intermediate DUs often specify contexts, and therefore do not contain the searched hyponyms. This is the case in the last example of Table 5, where the retrieved hyponyms for \"transmission\" (transmission) are \"Courte distance\" (short distance), \"Moyenne distance\" (medium distance) and \"Longue distance\" (long distance).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of the whole system", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Another error comes from a confusion between hypernymy and meronymy relations, which are both hierarchical. The fact that these two relations share the same linguistic properties may explain this confusion (Ittoo and Bouma, 2009). Furthermore, we are still faced with classical linguistic problems which are beyond the scope of this paper: anaphora, ellipsis, coreference, etc.", |
| "cite_spans": [ |
| { |
| "start": 206, |
| "end": 228, |
| "text": "(Ittoo and Bouma, 2009", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of the whole system", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Finally, we ignore cases where the hypernymy relation is reversed, i.e. when the hyponym is located in the nucleus DU and its hypernym in a satellite DU. The clues we use are not discriminating enough at this level.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of the whole system", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In this paper, we investigate a new way of extracting hypernymy relations, exploiting the text layout, which expresses hierarchical relations that standard NLP tools cannot capture.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The system implements a two-step process: (1) a discourse analysis of the text layout, and (2) hypernymy relation identification within specific discourse structures. We first evaluate each module independently (discourse analysis of the layout, identification of the nature of the relation, and identification of the arguments of the relation), obtaining accuracies of about 80% and 97% for the discourse analysis, and F-measures of about 81% and 73% for the relation extraction. We then evaluate the whole process and obtain a precision of about 60%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "One way to improve this work is to extend the analysis to other hierarchical relations. We plan to investigate more advanced techniques offered by distributional semantic models in order to discriminate hypernymy relations from meronymy ones.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Another way is to extend the scope of the layout analysis to take new discursive structures into account. Moreover, a subsequent step to this work is its large-scale application to collections of structured web documents (such as Wikipedia pages) in order to build semantic resources and share them with the community.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "http://github.com/fauconnier/LaToe", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://opennlp.apache.org/ 4 http://www.csie.ntu.edu.tw/\u223ccjlin/libsvm/ 5 The p-values are calculated using a paired t-test.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Improving an ontology refinement method with hyponymy patterns", |
| "authors": [ |
| { |
| "first": "Enrique", |
| "middle": [], |
| "last": "Alfonseca", |
| "suffix": "" |
| }, |
| { |
| "first": "Suresh", |
| "middle": [], |
| "last": "Manandhar", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "cell", |
| "volume": "4081", |
| "issue": "", |
| "pages": "0--0087", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Enrique Alfonseca and Suresh Manandhar. 2002. Im- proving an ontology refinement method with hy- ponymy patterns. cell, 4081:0-0087.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Logics of conversation", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Asher", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Lascarides", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Asher and A. Lascarides. 2003. Logics of conversa- tion. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Improving term extraction with terminological resources", |
| "authors": [ |
| { |
| "first": "Sophie", |
| "middle": [], |
| "last": "Aubin", |
| "suffix": "" |
| }, |
| { |
| "first": "Thierry", |
| "middle": [], |
| "last": "Hamon", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "380--387", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sophie Aubin and Thierry Hamon. 2006. Improving term extraction with terminological resources. In Advances in Natural Language Processing, pages 380-387. Springer.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Dbpedia: A nucleus for a web of open data", |
| "authors": [ |
| { |
| "first": "S\u00f6ren", |
| "middle": [], |
| "last": "Auer", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Bizer", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgi", |
| "middle": [], |
| "last": "Kobilarov", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Cyganiak", |
| "suffix": "" |
| }, |
| { |
| "first": "Zachary", |
| "middle": [], |
| "last": "Ives", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "The semantic web", |
| "volume": "", |
| "issue": "", |
| "pages": "722--735", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722-735. Springer.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The wacky wide web: a collection of very large linguistically processed webcrawled corpora. Language resources and evaluation", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvia", |
| "middle": [], |
| "last": "Bernardini", |
| "suffix": "" |
| }, |
| { |
| "first": "Adriano", |
| "middle": [], |
| "last": "Ferraresi", |
| "suffix": "" |
| }, |
| { |
| "first": "Eros", |
| "middle": [], |
| "last": "Zanchetta", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "43", |
| "issue": "", |
| "pages": "209--226", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation, 43(3):209-226.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Towards constructive text, diagram, and layout generation for information presentation", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bateman", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Kamps", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Kleinz", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Reichenberger", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational Linguistics", |
| "volume": "27", |
| "issue": "3", |
| "pages": "409--449", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Bateman, Thomas Kamps, J\u00f6rg Kleinz, and Klaus Reichenberger. 2001. Towards constructive text, dia- gram, and layout generation for information presenta- tion. Computational Linguistics, 27(3):409-449.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A maximum entropy approach to natural language processing", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Adam", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent J Della", |
| "middle": [], |
| "last": "Berger", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen A Della", |
| "middle": [], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pietra", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational linguistics", |
| "volume": "22", |
| "issue": "1", |
| "pages": "39--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational lin- guistics, 22(1):39-71.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Extracting semantics relationships between wikipedia categories. SemWiki, 206. Philipp Cimiano, Andreas Hotho, and Steffen Staab", |
| "authors": [ |
| { |
| "first": "Sergey", |
| "middle": [], |
| "last": "Chernov", |
| "suffix": "" |
| }, |
| { |
| "first": "Tereza", |
| "middle": [], |
| "last": "Iofciu", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Nejdl", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuan", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "J. Artif. Intell. Res.(JAIR)", |
| "volume": "24", |
| "issue": "", |
| "pages": "305--339", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sergey Chernov, Tereza Iofciu, Wolfgang Nejdl, and Xuan Zhou. 2006. Extracting semantics relationships between wikipedia categories. SemWiki, 206. Philipp Cimiano, Andreas Hotho, and Steffen Staab. 2005. Learning concept hierarchies from text cor- pora using formal concept analysis. J. Artif. Intell. Res.(JAIR), 24:305-339.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Supportvector networks", |
| "authors": [ |
| { |
| "first": "Corinna", |
| "middle": [], |
| "last": "Cortes", |
| "suffix": "" |
| }, |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Vapnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Machine learning", |
| "volume": "20", |
| "issue": "3", |
| "pages": "273--297", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine learning, 20(3):273-297.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Study and implementation of combined techniques for automatic extraction of terminology. The balancing act: Combining symbolic and statistical approaches to language", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "B\u00e9atrice Daille", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "1", |
| "issue": "", |
| "pages": "49--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B\u00e9atrice Daille. 1996. Study and implementation of combined techniques for automatic extraction of terminology. The balancing act: Combining symbolic and statistical approaches to language, 1:49-66.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "D\u00e9tection automatique de la structure organisationnelle de documents\u00e0 partir de marqueurs visuels et lexicaux", |
| "authors": [ |
| { |
| "first": "Jean-Philippe", |
| "middle": [], |
| "last": "Fauconnier", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Sorin", |
| "suffix": "" |
| }, |
| { |
| "first": "Mouna", |
| "middle": [], |
| "last": "Kamel", |
| "suffix": "" |
| }, |
| { |
| "first": "Mustapha", |
| "middle": [], |
| "last": "Mojahid", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathalie", |
| "middle": [], |
| "last": "Aussenac-Gilles", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Actes de la 21e Conf\u00e9rence sur le Traitement Automatique des Langues Naturelles (TALN 2014)", |
| "volume": "", |
| "issue": "", |
| "pages": "340--351", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean-Philippe Fauconnier, Laurent Sorin, Mouna Kamel, Mustapha Mojahid, and Nathalie Aussenac-Gilles. 2014. D\u00e9tection automatique de la structure organ- isationnelle de documents\u00e0 partir de marqueurs vi- suels et lexicaux. In Actes de la 21e Conf\u00e9rence sur le Traitement Automatique des Langues Naturelles (TALN 2014), pages 340-351.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Large sample variance of kappa in the case of different sets of raters", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Joseph", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Fleiss", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "John", |
| "suffix": "" |
| }, |
| { |
| "first": "J Richard", |
| "middle": [], |
| "last": "Nee", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Landis", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "Psychological Bulletin", |
| "volume": "86", |
| "issue": "5", |
| "pages": "974--977", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph L Fleiss, John C Nee, and J Richard Landis. 1979. Large sample variance of kappa in the case of different sets of raters. Psychological Bulletin, 86(5):974-977.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Automatic acquisition of hyponyms from large text corpora", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Marti", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the 14th conference on Computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "539--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marti A Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the 14th conference on Computational linguistics, vol- ume 2, pages 539-545. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Semantic selectional restrictions for disambiguating meronymy relations", |
| "authors": [ |
| { |
| "first": "Ashwin", |
| "middle": [], |
| "last": "Ittoo", |
| "suffix": "" |
| }, |
| { |
| "first": "Gosse", |
| "middle": [], |
| "last": "Bouma", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "proceedings of CLIN09: The 19th Computational Linguistics in the Netherlands meeting", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashwin Ittoo and Gosse Bouma. 2009. Semantic selectional restrictions for disambiguating meronymy relations. In proceedings of CLIN09: The 19th Computational Linguistics in the Netherlands meeting, to appear.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Thesaurus entry extraction from an on-line dictionary", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Jannink", |
| "suffix": "" |
| }, |
| { |
| "first": "Gio", |
| "middle": [], |
| "last": "Wiederhold", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of Fusion", |
| "volume": "99", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan Jannink and Gio Wiederhold. 1999. Thesaurus entry extraction from an on-line dictionary. In Proceedings of Fusion, volume 99. Citeseer.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "How can document structure improve ontology learning?", |
| "authors": [ |
| { |
| "first": "Mouna", |
| "middle": [], |
| "last": "Kamel", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathalie", |
| "middle": [], |
| "last": "Aussenac-Gilles", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Workshop on Semantic Annotation and Knowledge Markup collocated with K-CAP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mouna Kamel and Nathalie Aussenac-Gilles. 2009. How can document structure improve ontology learn- ing? In Workshop on Semantic Annotation and Knowledge Markup collocated with K-CAP.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Identifying hypernyms in distributional semantic spaces", |
| "authors": [ |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Lenci", |
| "suffix": "" |
| }, |
| { |
| "first": "Giulia", |
| "middle": [], |
| "last": "Benotto", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "75--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alessandro Lenci and Giulia Benotto. 2012. Identify- ing hypernyms in distributional semantic spaces. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Vol- ume 2: Proceedings of the Sixth International Work- shop on Semantic Evaluation, pages 75-79. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Tregex and tsurgeon: tools for querying and manipulating tree data structures", |
| "authors": [ |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Galen", |
| "middle": [], |
| "last": "Andrew", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the fifth international conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "2231--2234", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roger Levy and Galen Andrew. 2006. Tregex and tsur- geon: tools for querying and manipulating tree data structures. In Proceedings of the fifth international conference on Language Resources and Evaluation, pages 2231-2234. Citeseer.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Rhetorical structure theory: Toward a functional theory of text organization", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "William", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [ |
| "A" |
| ], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Thompson", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Text", |
| "volume": "8", |
| "issue": "3", |
| "pages": "243--281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. Text, 8(3):243-281.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A decision-based approach to rhetorical parsing", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "365--372", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Marcu. 1999. A decision-based approach to rhetorical parsing. In Proceedings of the 37th annual meeting of the Association for Computational Linguis- tics on Computational Linguistics, pages 365-372. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1301.3781" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Bilingual co-training for monolingual hyponymy-relation acquisition", |
| "authors": [ |
| { |
| "first": "Jong-Hoon", |
| "middle": [], |
| "last": "Oh", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiyotaka", |
| "middle": [], |
| "last": "Uchimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Torisawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNL", |
| "volume": "", |
| "issue": "", |
| "pages": "432--440", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jong-Hoon Oh, Kiyotaka Uchimoto, and Kentaro Tori- sawa. 2009. Bilingual co-training for monolingual hyponymy-relation acquisition. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNL, pages 432-440. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Document structure. Computational Linguistics", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Power", |
| "suffix": "" |
| }, |
| { |
| "first": "Donia", |
| "middle": [], |
| "last": "Scott", |
| "suffix": "" |
| }, |
| { |
| "first": "Nadjet", |
| "middle": [], |
| "last": "Bouayad-Agha", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "29", |
| "issue": "", |
| "pages": "211--260", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Power, Donia Scott, and Nadjet Bouayad-Agha. 2003. Document structure. Computational Linguis- tics, 29(2):211-260.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Layout-aware text extraction from full-text pdf of scientific articles", |
| "authors": [ |
| { |
| "first": "Cartic", |
| "middle": [], |
| "last": "Ramakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "Abhishek", |
| "middle": [], |
| "last": "Patnia", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Eduard", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Apc", |
| "middle": [], |
| "last": "Gully", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Burns", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Source code for biology and medicine", |
| "volume": "7", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cartic Ramakrishnan, Abhishek Patnia, Eduard H Hovy, Gully APC Burns, et al. 2012. Layout-aware text ex- traction from full-text pdf of scientific articles. Source code for biology and medicine, 7(1):7.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Web-scale taxonomy learning", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "S\u00e1nchez", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Moreno", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of Workshop on Extending and Learning Lexical Ontologies using Machine Learning (ICML 2005)", |
| "volume": "", |
| "issue": "", |
| "pages": "53--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David S\u00e1nchez and Antonio Moreno. 2005. Web-scale taxonomy learning. In Proceedings of Workshop on Extending and Learning Lexical Ontologies using Ma- chine Learning (ICML 2005), pages 53-60, Bonn, Germany.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Learning syntactic patterns for automatic hypernym discovery", |
| "authors": [ |
| { |
| "first": "Rion", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew Y", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "17", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rion Snow, Daniel Jurafsky, and Andrew Y Ng. 2004. Learning syntactic patterns for automatic hypernym discovery. In Advances in Neural Information Pro- cessing Systems, volume 17.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Yago: a core of semantic knowledge", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Fabian", |
| "suffix": "" |
| }, |
| { |
| "first": "Gjergji", |
| "middle": [], |
| "last": "Suchanek", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Kasneci", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Weikum", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 16th international conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "697--706", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697-706. ACM.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Hacking Wikipedia for hyponymy relation acquisition", |
| "authors": [ |
| { |
| "first": "Asuka", |
| "middle": [], |
| "last": "Sumida", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Torisawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "IJC-NLP", |
| "volume": "8", |
| "issue": "", |
| "pages": "883--888", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Asuka Sumida and Kentaro Torisawa. 2008. Hacking wikipedia for hyponymy relation acquisition. In IJC- NLP, volume 8, pages 883-888. Citeseer.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Building an annotated corpus in the molecular-biology domain", |
| "authors": [ |
| { |
| "first": "Yuka", |
| "middle": [], |
| "last": "Tateisi", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomoko", |
| "middle": [], |
| "last": "Ohta", |
| "suffix": "" |
| }, |
| { |
| "first": "Nigel", |
| "middle": [], |
| "last": "Collier", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the COLING-2000 Workshop on Semantic Annotation and Intelligent Content", |
| "volume": "", |
| "issue": "", |
| "pages": "28--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuka Tateisi, Tomoko Ohta, Nigel Collier, Chikashi No- bata, and Jun-ichi Tsujii. 2000. Building an annotated corpus in the molecular-biology domain. In Proceed- ings of the COLING-2000 Workshop on Semantic An- notation and Intelligent Content, pages 28-36. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Robust French syntax analysis: reconciling statistical methods and linguistic knowledge in the Talismane toolkit", |
| "authors": [ |
| { |
| "first": "Assaf", |
| "middle": [], |
| "last": "Urieli", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Assaf Urieli. 2013. Robust French syntax analysis: rec- onciling statistical methods and linguistic knowledge in the Talismane toolkit. Ph.D. thesis, Universit\u00e9 de Toulouse.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Hypernym discovery based on distributional similarity and hierarchical structures", |
| "authors": [ |
| { |
| "first": "Ichiro", |
| "middle": [], |
| "last": "Yamada", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Torisawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Kow", |
| "middle": [], |
| "last": "Jun'ichi Kazama", |
| "suffix": "" |
| }, |
| { |
| "first": "Masaki", |
| "middle": [], |
| "last": "Kuroda", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Murata", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Stijn De", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Saeger", |
| "suffix": "" |
| }, |
| { |
| "first": "Asuka", |
| "middle": [], |
| "last": "Bond", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sumida", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "929--937", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ichiro Yamada, Kentaro Torisawa, Jun'ichi Kazama, Kow Kuroda, Masaki Murata, Stijn De Saeger, Francis Bond, and Asuka Sumida. 2009. Hypernym discov- ery based on distributional similarity and hierarchical structures. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2, pages 929-937. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "text": "Rhetorical representation of the discourse structure of interest", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "Example of a discourse analysis of text layout", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "Figure 3 presents an example of this DAG. Example of a DAG", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "text": "of a term (bigrams and unigrams of parts of speech); POS_t: parts of speech of a term; Role: role of a DU; Visual: Boolean indicating whether a pair of terms share the same visual properties; Position_t: value indicating a term position; Position_d: position of a DU in the whole document; Coord: for a DU, presence of coordinated DUs; Sub: for a DU, presence of subordinated DUs; Level: value indicating the level of a DU in the structure of the document; Punc: last punctuation of a DU; NbToken: number of tokens in a DU; NbSent: number of sentences in a DU; COS: cosine similarity for a pair of terms", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "uris": null, |
| "text": "Comparison between the baseline, the vector-based strategies and the MaxEnt", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "text": "Precision curves for two domains of Wikipedia. We manually checked the first 500 pairs; the curves in figure 5 indicate the precision.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "num": null, |
| "text": "d'Internet peut se faire via : ]_2 \u2022 [l'autoh\u00e9bergement de son serveur gr\u00e2ce aux projets : pair \u00e0 pair et aux protocoles de communication interop\u00e9rables et libres comme : ]_6 \u2022 [le courrier \u00e9lectronique : SMTP, IMAP ; ]_7", |
| "content": "<table><tr><td/><td>text</td><td/><td>text</td><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"2\">[1] title (level 1)</td><td/><td/></tr><tr><td/><td/><td/><td colspan=\"2\">[2] paragraph</td><td/><td/></tr><tr><td>\u2022 \u2022</td><td>\u2022 \u2022 \u2022 [aux moteurs de recherche d\u00e9centralis\u00e9s comme YaCy, Seeks ; ]_11 [la messagerie instantan\u00e9e : XMPP ; ]_8 [le partage de fichiers en pair \u00e0 pair avec par exemple le protocole BitTorrent ; ]_9 [le tchat en salons (permettant plus de deux personnes) avec des logiciels tels que RetroShare, Marabunta ; ]_10 [aux architectures distribu\u00e9es. ]_12</td><td>[4] item</td><td>[3] item [5] item [7] item [6] item</td><td>[11] item [8] item</td><td>[12] item [9] item</td><td>[10] item</td></tr></table>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "num": null, |
| "text": "Main features for qualifying the relation", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "num": null, |
| "text": "", |
| "content": "<table><tr><td>presents</td></tr></table>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "num": null, |
| "text": "Main features for the terms recognition", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "num": null, |
| "text": "", |
| "content": "<table><tr><td>presents the results.</td></tr><tr><td>The MaxEnt achieves the best F-measure and</td></tr><tr><td>outperforms the other proposed strategies. The</td></tr></table>", |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |