{
"paper_id": "Y06-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:34:15.953937Z"
},
"title": "An Approach to Automatically Constructing Domain Ontology 1",
"authors": [
{
"first": "Tingting",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Huazhong Normal University",
"location": {
"postCode": "430079",
"settlement": "Wuhan",
"country": "China"
}
},
"email": "tthe@mail.ccnu.edu.cn"
},
{
"first": "Xiaopeng",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Huazhong Normal University",
"location": {
"postCode": "430079",
"settlement": "Wuhan",
"country": "China"
}
},
"email": "zhangxiaopeng@mails.ccnu.edu.cn"
},
{
"first": "Xinghuo",
"middle": [],
"last": "Ye",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Huazhong Normal University",
"location": {
"postCode": "430079",
"settlement": "Wuhan",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we present an approach to mining domain-dependent ontologies using term extraction and relationship discovery technology. There are two main innovations in our approach. One is extracting terms using log-likelihood ratio, which is based on the contrastive probability of term occurrence in domain corpus and background corpus. The other is fusing together information from multiple knowledge sources as evidences for discovering particular semantic relationships among terms. In the experiment, we also improve the traditional k-mediods algorithm for multi-level clustering. We have applied our approach to produce an ontology for the domain of computer science and obtained promising results.",
"pdf_parse": {
"paper_id": "Y06-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we present an approach to mining domain-dependent ontologies using term extraction and relationship discovery technology. There are two main innovations in our approach. One is extracting terms using log-likelihood ratio, which is based on the contrastive probability of term occurrence in domain corpus and background corpus. The other is fusing together information from multiple knowledge sources as evidences for discovering particular semantic relationships among terms. In the experiment, we also improve the traditional k-mediods algorithm for multi-level clustering. We have applied our approach to produce an ontology for the domain of computer science and obtained promising results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Ontologies have become an important means for structuring knowledge and building knowledge-intensive systems. Ontologies have shown their usefulness in application areas such as intelligent information integration, information retrieval and natural language processing, to name but a few. For this purpose, efforts have been made to facilitate the ontology engineering process, in particular the acquisition of ontologies from domain texts [1] .",
"cite_spans": [
{
"start": 440,
"end": 443,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Constructing an ontology is an extremely laborious effort. Even with some reuse of \"core\" knowledge from an Upper Model, the task of creating an ontology for a particular domain has a high cost, incurred for each new domain. Tools that could automate, or semi-automate, the construction of ontologies for different domains could dramatically reduce the knowledge creation cost. One approach to developing such tools is to rely on information implicit in collections of texts in a particular domain. If it were possible to automatically extract terms and their semantic relations from the text corpus, domain ontology could be built conveniently. This would be more cost-effective than having a human develop the ontology from scratch [2] [3] .",
"cite_spans": [
{
"start": 734,
"end": 737,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 738,
"end": 741,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain specific ontologies are useful in systems involved with artificial reasoning, and information retrieval. Ontologies give such systems a vocabulary of terms, and concepts relating one term to another. In this paper, we present a method that automatically mining an ontology from any large corpus in a specific domain, to support data integration and information retrieval tasks. The induced ontology consists of domain concepts only related by parent-child links, not including more specialized relations. Our approach is comprised of two main phases: term extraction and relationship discovery. In the former phase, meaningful terms are extracted using LLR formula. In the latter one, parent-child relations among terms are induced through multilevel clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is as follows. In Section 2, we will discuss the related works in this field. In Section 3, we will give an overview of the overall system architecture and explain the means and progress of ontology construction. Subsequently, an example will show some promising results we have obtained when applying our mechanisms for mining ontologies from text, together with our analysis. This will be presented in Section 4. In Section 5, we conclude and mention our future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The existing approaches to ontology induction include those that start from structured data, merging ontologies or database schemas (Doan et al. 2002) . Other approaches use natural language data, sometimes just by analyzing the corpus (Sanderson and Croft 1999), (Caraballo 1999) or by learning to expand WordNet with clusters of terms from a corpus, e.g., (Girju et al. 2003) . Information extraction approaches that infer labeled relations either require substantial hand-created linguistic or domain knowledge, e.g., (Craven and Kumlien 1999) (Hull and Gomez 1993), or require human-annotated training data with relation information for each domain (Craven et al. 1998).",
"cite_spans": [
{
"start": 132,
"end": 150,
"text": "(Doan et al. 2002)",
"ref_id": "BIBREF9"
},
{
"start": 358,
"end": 377,
"text": "(Girju et al. 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "For automatic ontology construction, Govind and Chakravarthi have presented in 2001 an approach that extracts ontology from text documents by Singular Value Decomposition (SVD), which is a pure statistical analysis method , as compared to heuristic and rule based methods. They adopt Latent Semantic Indexing (LSI), which attempts to catch term-term statistical references by replacing the document space with lower dimensional concept space. Their method is convinced of its simplicity because it is based on fairly precise mathematical foundation. It is effective but limited with precision. What's more, it doesn't figure out the exact relations among terms [4] .",
"cite_spans": [
{
"start": 661,
"end": 664,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "In year of 2001, Dekang Lin and Patrick Pantel proposed a method for domain concepts discovery based on a clustering algorithm called CBC (Clustering by Committee). They generally regard a concept as a cluster of terms. It just deals with only one aspect of the whole progress of ontology induction [5] .",
"cite_spans": [
{
"start": 299,
"end": 302,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "An overall architecture for domain-independent ontology induction is shown in Figure 1 . The documents of domain corpora are preprocessed to remove segments such as pictures and specific symbols and then filter the stopwords. Next, terms are extracted based on word segmentation and syntactic tagging. Subsequently semantic relations between pairs of terms are derived using multilevel clustering with evidences from multiple knowledge sources. ",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Architecture",
"sec_num": "3.1"
},
{
"text": "In this section, our goal is to identify domain-relevant terms from a collection of domain specific texts. For term extraction, there exist several approaches. One is based on PAT-TREE, which has the advantage in extracting terms (phrases) of any length because it could avoid word segmentation for Chinese. But it needs a large amount of documents to achieve excellent precision. Another approach to term extraction is C/NC-Value method proposed by Frantzi and Ananiadou (1999), which combines linguistics and statistics methods and makes a progress. But it has limitation because its linguistics knowledge is only for English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Discovery",
"sec_num": "3.2"
},
{
"text": "In this paper, we propose a promising approach to extracting domain-relevant terms. Terms are scored for domain-relevance based on the assumption that if a term occurs significantly more in a domain corpus than in a background corpus, then the term is clearly domain relevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Discovery",
"sec_num": "3.2"
},
{
"text": "As an example, in Table 1 we compare the number of documents containing the terms \"\u7b97 \u6cd5\"(algorithm), \"\u5185\u5b58\"(memory), \"\u8def\u7531\u5668\"(Router) and \"\u6570\u636e\u5e93\"(database) in a domain-dependent corpus of computer science including 200 Chinese periodical papers, compared to a larger Modern Chinese Corpus as the background corpus. We can observe from Table 1 that the terms \"\u7b97\u6cd5\"(algorithm), \"\u5185\u5b58\"(memory), \"\u8def\u7531\u5668(Router) and \"\u6570\u636e\u5e93\"(database) occur significantly more in the domain corpus than in the Background corpus. We consider they are domain-dependent. To estimate the domain relevancy of a term, we use the log-likelihood ratio (LLR) given by -2 log2 ( Ho (p;k1,n1,k2,n2 ) / Ha ( p1,p2;n1,k1,n2,k2 ) )",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 327,
"end": 334,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Term Discovery",
"sec_num": "3.2"
},
{
"text": "LLR measures the extent to which a hypothesized model of the distribution of cell counts, Ha, differs from the null hypothesis, Ho (namely, that the percentage of documents containing this term is the same in both corpora). We use a binomial model for Ho and Ha. Here, p=(k1+k2)/(n1+n2), p1=k1/n1, p2=k2/n2, k1 is the number of documents containing the term in the domain corpus, k2 is the number of documents containing the term in the background corpus, n1 is the total number of documents in the domain corpus, n2 is the total number of documents in the background corpus [6] .",
"cite_spans": [
{
"start": 575,
"end": 578,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Term Discovery",
"sec_num": "3.2"
},
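The LLR score above follows directly from the definitions of p, p1 and p2 under the two binomial hypotheses. A minimal sketch, not the authors' code (the choice of log base only rescales the score):

```python
import math

def llr(k1, n1, k2, n2):
    """Log-likelihood ratio of Ho (one shared binomial) vs. Ha (two binomials).

    k1, n1: documents containing the term / total documents (domain corpus).
    k2, n2: the same counts for the background corpus.
    """
    p = (k1 + k2) / (n1 + n2)
    p1 = k1 / n1
    p2 = k2 / n2

    def log_l(k, n, x):
        # Binomial log-likelihood, skipping 0 * log(0) terms.
        s = k * math.log(x) if k else 0.0
        s += (n - k) * math.log(1 - x) if n - k else 0.0
        return s

    return -2 * (log_l(k1, n1, p) + log_l(k2, n2, p)
                 - log_l(k1, n1, p1) - log_l(k2, n2, p2))
```

With the Table 1 counts, a term such as router (73 of 200 domain documents vs. 4 of 12000 background documents) scores far higher than a term occurring with similar frequency in both corpora, which is what the LLR threshold in Section 4 exploits.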
{
"text": "In this section, we try to mine the parent-child links among terms using multilevel clustering based on term similarity. We fuse together information from multiple knowledge sources as evidences for particular semantic relationships among terms. For clustering we adopt improved k-medoids algorithm, which is flat and effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship Discovery",
"sec_num": "3.3"
},
{
"text": "The process of inducing relations is as follows. First the clustering algorithm is used to obtain top-level clusters. Subsequently, for each top-level cluster, we use the clustering algorithm again to gain clusters of second level. By analogy, we can find multilevel clusters that imply parent-child relationships among terms. In experiment the depth of clustering level is restricted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship Discovery",
"sec_num": "3.3"
},
{
"text": "The preprocessed documents are analyzed and the frequency matrix known as Term-Document matrix is produced as result of the analysis. For a Term-Document matrix M[m][n], each term can be represented as the following vector:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Similarity Computation",
"sec_num": "3.3.1"
},
{
"text": "M[i][n]) , M[i][k], , M[i][1], (M[i][0], \u2026 \u2026 = i T r (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Similarity Computation",
"sec_num": "3.3.1"
},
{
"text": "We can compute the cosine similarity between a pair of terms given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Similarity Computation",
"sec_num": "3.3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "j i j i j i T T T T T T r r r r r r \u22c5 \u22c5 = ) , cos(",
"eq_num": "(3)"
}
],
"section": "Term Similarity Computation",
"sec_num": "3.3.1"
},
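Equation (3) is the standard cosine over rows of the term-document matrix. A minimal sketch (function and variable names are ours):

```python
import math

def cosine(u, v):
    """Cosine similarity between two term vectors, i.e. rows M[i] of the
    term-document frequency matrix."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0  # a term absent from every document has no direction
    return dot / (norm_u * norm_v)
```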
{
"text": "The cosine similarities between several term pairs are shown in Table 2 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Term Similarity Computation",
"sec_num": "3.3.1"
},
{
"text": "It can be observed that the cosine similarity is likely to satisfy the concurrent frequency of term pairs. It reflects the context correlation of terms but might have a prejudice against semantic correlation among terms. For instance, the cosine similarity computed above between \"\u78c1\u76d8\"(disk) and \"\u6570\u636e\" (data) is just 0.0663552, which deviates from the experiential value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0.17585",
"sec_num": null
},
{
"text": "In order to exact similarity, we use HowNet as another knowledge resource, which is a Chinese thesaurus by Zhendong Dong and Qiang Dong (2001). We combine the similarity of terms in HowNet as a supplement with the cosine similarity to estimate the correlation of term pairs. The HowNet similarity that involves unknown words is assigned to 0. Table 3 shows some HowNet similarities of term pairs. Meanwhile, \u03b1 is an adjustment argument. The adjusted similarity between term pairs is given in Table 4 (\u03b1 =0.5). 0.318716",
"cite_spans": [],
"ref_spans": [
{
"start": 343,
"end": 350,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 492,
"end": 499,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "0.17585",
"sec_num": null
},
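The combination of cosine and HowNet similarity can be sketched as follows (the function name is ours; the exact displayed formula is lost in this parse, but adding alpha times the HowNet similarity to the cosine reproduces every row of Table 4 from Tables 2 and 3):

```python
def combined_sim(cos_sim, hownet_sim, alpha=0.5):
    """Cosine similarity adjusted by HowNet similarity.

    Reconstructed from Tables 2-4: e.g. 0.436248 + 0.5 * 0.149333 ~= 0.510913
    for buffer/disk, and the cosine value is unchanged when the HowNet
    similarity is 0 (unknown words).
    """
    return cos_sim + alpha * hownet_sim
```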
{
"text": "In order to make it converge faster and result in better clusters in accordance with the original distribution of terms, we improved the traditional k-medoids algorithm. When deciding the new center of a cluster, we first choose top-p terms closest to the old center in the cluster and take the term closest to the mean of the p terms as the new center. The improved algorithm is described as follows: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering Algorithm",
"sec_num": "3.3.2"
},
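A minimal sketch of the modified center-update rule, under stated assumptions: all names and the similarity callback are ours, and since an abstract similarity gives no literal "mean", the term closest to the mean of the p terms is approximated by the member maximizing average similarity to them.

```python
import random

def improved_kmedoids(terms, sim, m, p, max_iter=100, seed=0):
    """Sketch of the modified k-medoids clustering.

    sim(a, b) -> similarity score (higher = closer). Center update: take the
    p terms most similar to the old center, then pick the member term that is
    closest (on average) to those p terms as the new center.
    """
    rng = random.Random(seed)
    centers = rng.sample(terms, m)
    clusters = {}
    for _ in range(max_iter):
        # Assign each term to its most similar center.
        clusters = {c: [] for c in centers}
        for t in terms:
            best = max(centers, key=lambda c: sim(t, c))
            clusters[best].append(t)
        # Update each cluster's center via the top-p rule.
        new_centers = []
        for c, members in clusters.items():
            if not members:
                new_centers.append(c)
                continue
            top_p = sorted(members, key=lambda t: sim(t, c), reverse=True)[:p]
            new_c = max(members, key=lambda t: sum(sim(t, u) for u in top_p))
            new_centers.append(new_c)
        if set(new_centers) == set(centers):
            break  # centers are stable
        centers = new_centers
    return clusters
```

For multilevel clustering, the same function would simply be re-applied to the members of each resulting cluster down to the restricted depth.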
{
"text": "We have applied our approach to produce ontology in computer science domain. The domain corpus contains 200 Chinese theses (3.12 M) retrieved from professional publications. The background corpus we used is the Modern Chinese Corpus (1999-2001) which consists of 12000 documents (33.5 M).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "In experiment, we have compared the effect of the two scoring methods LLR and TF.IDF. Table 5 shows the scores of some terms in relative percentage. The boldfaced terms is identified by LLR to be domain-relevant. It can be inferred from the experiment results that LLR excels TF.IDF. A term list of 286 domain-relevant terms (key terms) has been manually constructed to support the subsequent evaluation. We estimate the results normally based on Recall, Precision and F-measure, which are defined as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Term Discovery",
"sec_num": "4.1"
},
{
"text": "The ratio of correct terms number to key terms number in manual list given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Recall(R)",
"sec_num": null
},
{
"text": "key correct N N R = (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Recall(R)",
"sec_num": null
},
{
"text": "The ratio of correct terms number to extracted terms number given by \u2022 F-measure(F) F-measure is defined as a combination of recall and precision:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision(P)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P R RP F + = 2",
"eq_num": "(9)"
}
],
"section": "Precision(P)",
"sec_num": null
},
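Equations (7)-(9) in code form, as a small illustrative helper we add (not part of the paper's system):

```python
def prf(extracted, key_terms):
    """Recall, precision and F-measure for term extraction (Eqs. 7-9).

    extracted: terms the system produced; key_terms: the manual key-term list.
    A term is 'correct' if it appears in the key list.
    """
    correct = len(set(extracted) & set(key_terms))
    recall = correct / len(key_terms)
    precision = correct / len(extracted)
    f = 2 * recall * precision / (recall + precision) if correct else 0.0
    return recall, precision, f
```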
{
"text": "These three values vary depending on the threshold of LLR. Figure 2 shows the variety of F-measure, recall and precision along with LLR. Fig. 2 . Parent-child relationships among terms",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 2",
"ref_id": null
},
{
"start": 137,
"end": 143,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Precision(P)",
"sec_num": null
},
{
"text": "In this paper we present an approach to automatically mining domain-dependent ontologies from corpora based on term extraction and relationship discovery technology. There are two main innovations in our approach. One is extracting terms using log-likelihood ratio, which is based on the contrastive probability of term occurrence in domain corpus and background corpus. The other is fusing together information from multiple knowledge sources as evidences for discovering particular semantic relationships among terms. In our experiment, we improve the traditional k-mediods algorithm for multi-level clustering. We have applied our approach to produce an ontology for the domain of computer science and obtained promising results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In the future work, we will consider developing heuristic methods or rules to label the exact relations among terms and finding out a more effective ontology evaluation methodology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The paper is supported by National Natural Science Foundation of China (NSFC), (Grant No.60496323; Grant No.60375016 Grant No.10071028;); Ministry of education of China, Research Project for Science and technology, (Grant No. 105117).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatically Inducing Ontologies from Corpora",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of CompuTerm 2004: 3rd International Workshop on Computational Terminology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani.Automatically Inducing Ontologies from Corpora. Proceedings of CompuTerm 2004: 3rd International Workshop on Computational Terminology, OLING'2004, Geneva.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Mining ontology from text",
"authors": [
{
"first": "A",
"middle": [],
"last": "Maedche",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Staab",
"suffix": ""
}
],
"year": 2000,
"venue": "12th International Workshop on Knowledge Engineering and Knowledge Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Maedche and S. Staab. Mining ontology from text. 12th International Workshop on Knowledge Engineering and Knowledge Management,2000 (EKAW'2000).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The TEXT-TO-ONTO Ontology Learning Environment. Software Demonstration at ICCS-2000-Eight International Conference on Conceptual Structures",
"authors": [
{
"first": "A",
"middle": [],
"last": "Maedche",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Staab",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Maedche and S. Staab. The TEXT-TO-ONTO Ontology Learning Environment. Software Demonstration at ICCS-2000-Eight International Conference on Conceptual Structures. August 14-18, 2000, Darmstadt, Germany.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Extracting an ontology from a document using Singular Value Decomposition",
"authors": [
{
"first": "Sadanand",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Gil de Lamadrid",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dr.Sadanand Srivastava, Dr.James Gil de Lamadrid. Extracting an ontology from a document using Singular Value Decomposition. ADMI 2001.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Induction of semantic classes from natural language text",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGKDD-01",
"volume": "",
"issue": "",
"pages": "317--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. and Pantel, P. 2001. Induction of semantic classes from natural language text. In Proceedings of SIGKDD-01. pp.317-322. SanFrancisco, CA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Accurate Methods for the Statistics of Surprise and Coincidence",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "",
"issue": "1",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunning.T. Accurate Methods for the Statistics of Surprise and Coincidence, Computational Linguistics, 19(1); 61-74, March 1993.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Document Ontology Extractor, Applied research in Computer science",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Velvadapu",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chakravarthi S Velvadapu, Document Ontology Extractor, Applied research in Computer science, Fall-2001.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discovering conceptual relations from text",
"authors": [
{
"first": "A",
"middle": [],
"last": "Maedche",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Staab",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ECAI-2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Maedche and S. Staab. Discovering conceptual relations from text. In Proceedings of ECAI-2000. IOS Press, Amsterdam, 2000.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A corpus-based conceptual clustering method for verb frames and ontology acquisition",
"authors": [
{
"first": "D",
"middle": [],
"last": "Faure",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Nedellec",
"suffix": ""
}
],
"year": 1998,
"venue": "LREC workshop on adapting lexical and corpus resources to sublanguages and applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Faure and C. Nedellec. A corpus-based conceptual clustering method for verb frames and ontology acquisition. In LREC workshop on adapting lexical and corpus resources to sublanguages and applications, Granada, Spain, 1998.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to Map between Ontologies on the Semantic Web",
"authors": [
{
"first": "A",
"middle": [],
"last": "Doan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Madhavan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Domings",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Halevy",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doan, A., Madhavan, J., Domings, P. and Halevy, A. 2002. Learning to Map between Ontologies on the Semantic Web. WWW'2002.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "System Architecture",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Choose m terms as the centers of clusters randomly: Compute the similarity for each term to each cluster center. Assign the term into the closest cluster.3. Determine the new center of each cluster as follows:\u2212 Compute the average similarity of terms in cluster i using the formula: the number of terms in cluster i. \u2212 Pick out p terms most closest to the cluster i center decided by the maximum of m average similarities for m clusters. \u2212 Compute the mean of the above p terms and choose the term closest to it as the new center of cluster i. Go to step 2 only if m cluster centers is not stable.5.End and result in m clusters.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "F-measure, recall and precision by varying the threshold of LLR 4.2 Relationship Discovery According to figure 2, we chose 94 as the threshold of LLR and obtained 216 domain-relevant terms. The depth of clustering level was restricted to 2. We have gained 5 top-level clusters and 15 second level clusters. The precision of the top-level clusters achieved 76.7%. The average precision of the second-level clusters achieved 70.3%. The parent-child relationship among terms and clusters is shown in figure 3. Each cluster is named by the most frequent term in it.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td/><td>\u7b97\u6cd5</td><td>\u5185\u5b58</td><td>\u8def\u7531\u5668</td><td>\u6570\u636e\u5e93</td><td>Total</td></tr><tr><td>Domain Corpus</td><td>89</td><td>78</td><td>73</td><td>69</td><td>200</td></tr><tr><td>Background Corpus</td><td>7</td><td>0</td><td>4</td><td>6</td><td>12000</td></tr><tr><td>Difference</td><td>82</td><td>78</td><td>69</td><td>63</td><td>-11800</td></tr></table>",
"type_str": "table",
"text": "Term IDF in domain and background corpora",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>Term1</td><td>Term2</td><td>Cosine Similarity</td></tr><tr><td>\u7f13\u51b2\u533a(buffer)</td><td>\u78c1\u76d8(disk)</td><td>0.436248</td></tr><tr><td>\u7f13\u51b2\u533a(buffer)</td><td>\u4e3b\u5b58(main store)</td><td>0.579771</td></tr><tr><td>\u5185\u5b58(memory)</td><td>\u78c1\u76d8(disk)</td><td>0.434059</td></tr><tr><td>\u670d\u52a1\u5668(server)</td><td>\u7f51\u7edc(network)</td><td/></tr></table>",
"type_str": "table",
"text": "Cosine similarities of term pairs",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>Term1</td><td>Term2</td><td>HowNet Similarity</td></tr><tr><td>\u7f13\u51b2\u533a(buffer)</td><td>\u78c1\u76d8(disk)</td><td>0.149333</td></tr><tr><td>\u7f13\u51b2\u533a(buffer)</td><td>\u4e3b\u5b58(main store)</td><td>0.000000</td></tr><tr><td>\u5185\u5b58(memory)</td><td>\u78c1\u76d8(disk)</td><td>0.149333</td></tr><tr><td>\u670d\u52a1\u5668(server)</td><td>\u7f51\u7edc(network)</td><td>0.285714</td></tr><tr><td colspan=\"2\">The final formula of similarity computation is given by</td><td/></tr></table>",
"type_str": "table",
"text": "HowNet similarities of term pairs",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>Term1</td><td>Term2</td><td>Similarity</td></tr><tr><td>\u7f13\u51b2\u533a(buffer)</td><td>\u78c1\u76d8(disk)</td><td>0.510913</td></tr><tr><td>\u7f13\u51b2\u533a(buffer)</td><td>\u4e3b\u5b58(main store)</td><td>0.579771</td></tr><tr><td>\u5185\u5b58(memory)</td><td>\u78c1\u76d8(disk)</td><td>0.508724</td></tr><tr><td>\u670d\u52a1\u5668(server)</td><td>\u7f51\u7edc(network)</td><td/></tr></table>",
"type_str": "table",
"text": "Adjusted similarities of term pairs",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>Term</td><td>Domain DF</td><td colspan=\"3\">Background DF LLR score TF.IDF score</td></tr><tr><td>\u8def\u7531\u5668</td><td>73</td><td>4</td><td>98.6</td><td>66.8</td></tr><tr><td>\u603b\u7ebf</td><td>56</td><td>1</td><td>99.9</td><td>70.4</td></tr><tr><td>\u63a7\u5236\u5668</td><td>35</td><td>7</td><td>96.5</td><td>94.5</td></tr><tr><td>\u7aef\u53e3</td><td>25</td><td>5</td><td>93.1</td><td>65.3</td></tr><tr><td>\u7f13\u5b58</td><td>36</td><td>2</td><td>94.6</td><td>53.5</td></tr><tr><td>\u8bf7\u6c42</td><td>17</td><td>188</td><td>67.5</td><td>72.6</td></tr><tr><td>\u7ebf\u8def</td><td>31</td><td>232</td><td>56.2</td><td>65.7</td></tr><tr><td>\u8d28\u91cf</td><td>37</td><td>3725</td><td>42.4</td><td>99.9</td></tr><tr><td>\u5c4f\u853d</td><td>7</td><td>8</td><td>15.3</td><td>48.2</td></tr></table>",
"type_str": "table",
"text": "Relative scores of both LLR and TF.IDF",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table><tr><td/><td>\u7f51\u7edc</td><td>\u8def\u7531\u5668 \u8def\u5f84 \u4e3b\u5e72\u7f51 \u901f\u7387 \u4e2d\u7ee7 \u5e7f\u57df\u7f51 \u5ef6\u8fdf \u5206\u7ec4 \u4ea4\u6362 \u4ea4\u6362\u673a \u5c40\u57df\u7f51 \u4ea4\u6362\u7f51 \u63a5\u5165\u7f51 \u7b56\u7565 \u5173</td></tr><tr><td>\u7f51</td><td/><td>\u952e\u8bcd \u7f51\u683c \u7f51\u7edc \u62d3\u6251 \u9aa8\u5e72\u7f51</td></tr><tr><td>\u7edc</td><td>\u641c\u7d22</td><td>\u641c\u7d22 \u5f15\u64ce \u7f51\u9875 \u811a\u672c \u4e3b\u673a \u4e07\u7ef4\u7f51 \u7f51\u7ad9 \u7f51\u5740 \u5e26\u5bbd \u963b\u585e \u7aef\u53e3 \u76d1\u542c \u4fe1\u606f\u7f51</td></tr><tr><td/><td>\u670d\u52a1\u5668</td><td>\u9632\u706b\u5899 \u6d41\u91cf \u8ba1\u7b97\u4e2d\u5fc3 \u57df\u540d \u4e3b\u9875 \u670d\u52a1\u5668 \u62f7\u8d1d \u7535\u5b50\u90ae\u4ef6 \u90ae\u4ef6 \u90ae\u7bb1 \u65e5\u5fd7 \u6570\u636e\u6d41 \u8bed\u97f3 \u5171\u4eab</td></tr><tr><td/><td>\u4fe1\u606f\u6e2f</td><td>\u667a\u80fd\u6027 \u4fe1\u606f\u6e2f \u56e0\u7279\u7f51 \u667a\u80fd\u5316 \u7535\u8bdd\u7f51 \u5bbd\u5e26 \u8c03\u5236 \u89e3\u8c03 \u9891\u7387\u6bb5 \u6570\u5b57\u7f51</td></tr><tr><td>\u8ba1</td><td>\u8fdb\u7a0b</td><td>\u4f53\u7cfb \u7ed3\u6784 \u603b\u7ebf \u786c\u4ef6 \u63a5\u53e3 \u4e2d\u65ad \u63a7\u5236\u5668 \u5bc4\u5b58\u5668 \u8fd0\u7b97 \u65f6\u95f4\u6bb5 \u6570\u636e \u5171\u4eab \u5b9e\u65f6 \u7cfb\u7edf \u8c03\u5ea6\u8005 \u8fdb\u7a0b</td></tr><tr><td>\u7b97</td><td/><td>\u5b50\u7cfb\u7edf \u5206\u7cfb\u7edf \u786c\u76d8</td></tr><tr><td>\u673a</td><td>\u8ba1\u7b97\u673a</td><td>\u8ba1\u7b97\u673a \u82af\u7247</td></tr></table>",
"type_str": "table",
"text": "\u5927\u578b\u673a \u5de8\u578b \u78c1\u76d8 \u8f6f\u76d8 \u5149\u76d8 \u4e3b\u5b58 \u541e\u5410 \u8f6f\u9a71 \u5916\u5b58 \u5185\u5b58 \u5b58\u50a8\u5668 \u5b58\u50a8 \u7f13\u51b2 \u7f13\u5b58 \u8f93\u51fa \u9a71\u52a8 \u9a71\u52a8\u5668 \u9002\u914d\u5668 \u7f51\u5361 \u53c2\u6570 \u77e9\u9635 \u6570\u7ec4 \u6307\u9488 \u5b57\u7b26\u4e32 \u53c2\u6570 \u9759\u6001 \u8bbf\u95ee \u7f16\u8bd1\u5668 \u53d8\u91cf \u5b57\u7b26 \u6807\u91cf \u6d41\u7a0b\u56fe \u6d41\u7a0b \u7ed3\u6784\u56fe \u903b\u8f91\u56fe \u8fd0 \u7b97\u7b26 \u7f16\u7a0b \u63a7\u5236\u53f0 \u7f16\u7a0b \u6e90\u7a0b\u5e8f \u7a0b\u5e8f\u5305 \u5b50\u7a0b\u5e8f \u7a0b\u5e8f\u6027 \u6e90\u4ee3\u7801 \u5411\u91cf \u7f16\u8bd1 \u8c03\u8bd5 \u8c03\u8bd5\u5668 \u7a0b\u5e8f\u5458 \u74f6\u9888 \u6570\u636e\u5305 \u5730\u5740 \u8f93\u5165 \u63d0\u793a\u7b26 \u952e\u76d8 \u7ec8\u6b62\u7b26 \u793a\u610f\u56fe \u4e3b\u7a0b\u5e8f \u64cd\u4f5c\u7b26 \u7a7a\u683c \u6570\u636e\u4f4d \u8bfb\u53d6 \u5b58\u53d6 \u7f13\u51b2\u533a \u5730\u5740 \u8ba1\u65f6\u5668 \u7a0b\u5e8f \u7f16\u7a0b\u8005 \u67e5\u8be2 \u5206\u6790\u5668 \u6570\u636e\u5e93 \u6570\u636e\u8868 \u5197\u4f59 \u5907\u4efd \u6807\u8bc6\u7b26 \u8ba1\u6570\u5668 \u521d\u59cb\u5316 \u7a0b\u5e8f \u8bed\u53e5 \u8868\u8fbe\u5f0f \u64cd\u4f5c\u6570 \u56fe\u5f62 \u5207\u6362 \u56fe\u5f62 \u754c\u9762 \u8bf7\u6c42 \u6587\u672c\u6846 \u56fe\u6807 \u6309\u94ae \u89c6\u56fe \u6807\u8bc6\u53f7 \u7a0b \u5e8f \u8f6f\u4ef6 \u8f6f\u4ef6 \u8f6f\u4ef6\u5305 \u6784\u5efa \u590d\u7528 \u91cd\u6784 \u7ec4\u4ef6 \u6784\u4ef6 \u91cd\u7528\u6027 \u6a21\u5757 \u5c01\u88c5\u6027 \u5c01\u88c5 \u7ee7\u627f \u79c1\u6709 \u5b9e\u4f8b \u5bf9\u8c61 \u4fe1 \u606f \u4fe1\u606f \u52a0\u5bc6 \u4ee4\u724c \u4fe1\u606f\u8bba \u7528\u6237\u540d \u53e3\u4ee4 \u767b\u5f55 \u4fe1\u606f \u4fe1\u9053 \u4e8c\u8fdb\u5236 \u7f16\u7801 \u89e3\u7801 \u5c40\u90e8 \u8bd1\u7801 \u8865\u7801 \u8bef\u7801 \u7ea0\u9519 \u7a7a\u4e32 \u7b97\u6cd5 \u961f\u5217 \u7d22\u5f15 \u5b57\u8282 \u6807\u8bb0 
\u5b57\u6bb5 \u7b97\u6cd5 \u4e8c\u5206\u6cd5 \u6570\u636e\u9879 \u56de\u6eaf \u5806\u6808 \u8282\u70b9 \u8c03\u5ea6 \u4f18\u5148\u7ea7 \u865a\u62df \u7b97 \u6cd5 \u6267\u884c \u4ea4\u6362 \u8f6e\u8f6c \u6d41\u6c34\u53f7 \u65f6\u5e8f \u7ebf\u6027 \u6267\u884c \u6807\u8bc6\u7b26",
"num": null
}
}
}
}