{
"paper_id": "W14-0123",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:53:40.165689Z"
},
"title": "Automatic Construction of Amharic Semantic Networks From Unstructured Text Using Amharic WordNet",
"authors": [
{
"first": "Alelgn",
"middle": [],
"last": "Tefera",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jigjiga University",
"location": {
"country": "Ethiopia"
}
},
"email": "alelgn.tefera@gmail.com"
},
{
"first": "Yaregal",
"middle": [],
"last": "Assabie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Addis Ababa University",
"location": {
"country": "Ethiopia"
}
},
"email": "yaregal.assabie@aau.edu.et"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semantic networks have become key components in many natural language processing applications. This paper presents the automatic construction of Amharic semantic networks using Amharic WordNet as an initial knowledge base, where intervening word patterns between pairs of concepts in the WordNet are extracted for a specific relation from a given text. For each pair of concepts whose relationship is contained in Amharic WordNet, we search the corpus for text snapshots between these concepts. Each returned text snapshot is processed to extract all the patterns having n-gram words between the two concepts. We use the WordSpace model for the extraction of semantically related concepts, and relations among these concepts are identified using the extracted text patterns. The system is designed to extract \"part-of\" and \"type-of\" relations between concepts, which are very popular and frequently found between concepts in any corpus. The system was tested in three phases with a text corpus collected from news outlets, and experimental results are reported.",
"pdf_parse": {
"paper_id": "W14-0123",
"_pdf_hash": "",
"abstract": [
{
"text": "Semantic networks have become key components in many natural language processing applications. This paper presents the automatic construction of Amharic semantic networks using Amharic WordNet as an initial knowledge base, where intervening word patterns between pairs of concepts in the WordNet are extracted for a specific relation from a given text. For each pair of concepts whose relationship is contained in Amharic WordNet, we search the corpus for text snapshots between these concepts. Each returned text snapshot is processed to extract all the patterns having n-gram words between the two concepts. We use the WordSpace model for the extraction of semantically related concepts, and relations among these concepts are identified using the extracted text patterns. The system is designed to extract \"part-of\" and \"type-of\" relations between concepts, which are very popular and frequently found between concepts in any corpus. The system was tested in three phases with a text corpus collected from news outlets, and experimental results are reported.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A semantic network is a network that represents semantic relations among concepts, and it is often used to represent knowledge. A semantic network is used when one has knowledge that is best understood as a set of concepts related to one another. Concepts are abstract representations of the meaning of terms. A term can be physically represented by a word, phrase, sentence, paragraph, or document. The relations between concepts most commonly used in semantic networks are synonym (similar concepts), antonym (opposite concepts), meronym/holonym (\"part-of\" relation between concepts), and hyponym/hypernym (\"type-of\" relation between concepts). Knowledge stored as semantic networks can be represented in the form of graphs (directed or undirected) using concepts as nodes and semantic relations as labeled edges (Fellbaum, 1998; Steyvers and Tenenbaum, 2005). Semantic networks have become increasingly popular in recent years. Even though this popularity is mostly related to the idea of the semantic web, it is also related to natural language processing (NLP) applications. Semantic networks allow search engines to search not only for the keywords given by the user but also for related concepts, and to show how this relation is made. Knowledge stored as semantic networks can be used by programs that generate text from structured data. Semantic networks are also used for document summarization by compressing the data semantically, and for document classification using the knowledge stored in them (Berners-Lee, 2001; Sahlgren, 2006; Smith, 2003). Approaches commonly used to automatically construct semantic networks are knowledge-based, corpus-based and hybrid approaches. In the knowledge-based approach, relations between two concepts are extracted using a thesaurus in a supervised manner, whereas the corpus-based approach extracts concepts from a large amount of text in a semi-supervised manner. The hybrid approach combines both the hierarchy of the thesaurus and statistical information for concepts measured in large corpora (Dominic and Trevor, 2010; George et al, 2010; Sahlgren, 2006). Over the past years, several attempts have been made to develop semantic networks. Among the widely known are ASKNet (Harrington and Clark, 2007), MindNet (Richardson et al, 1998), and Leximancer (Smith, 2003). Most of the semantic networks constructed so far assume English text as corpus. However, to the best of our knowledge, there is no system that automatically constructs semantic networks from unstructured Amharic text.",
"cite_spans": [
{
"start": 838,
"end": 854,
"text": "(Fellbaum, 1998;",
"ref_id": "BIBREF4"
},
{
"start": 855,
"end": 884,
"text": "Steyvers and Tenenbaum, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 1523,
"end": 1542,
"text": "(Berners-Lee, 2001;",
"ref_id": "BIBREF2"
},
{
"start": 1543,
"end": 1558,
"text": "Sahlgren, 2006;",
"ref_id": "BIBREF10"
},
{
"start": 1559,
"end": 1571,
"text": "Smith, 2003)",
"ref_id": "BIBREF11"
},
{
"start": 2052,
"end": 2078,
"text": "(Dominic and Trevor, 2010;",
"ref_id": "BIBREF3"
},
{
"start": 2079,
"end": 2098,
"text": "George et al, 2010;",
"ref_id": "BIBREF5"
},
{
"start": 2099,
"end": 2114,
"text": "Sahlgren, 2006)",
"ref_id": "BIBREF10"
},
{
"start": 2234,
"end": 2262,
"text": "(Harrington and Clark, 2007)",
"ref_id": "BIBREF6"
},
{
"start": 2273,
"end": 2297,
"text": "(Richardson et al, 1998)",
"ref_id": "BIBREF9"
},
{
"start": 2315,
"end": 2328,
"text": "(Smith, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
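The graph view described above (concepts as nodes, semantic relations as labeled edges) can be sketched as a small data structure. This is an illustrative sketch only; the class and method names are not from the paper:

```python
# Minimal sketch (illustrative, not the paper's implementation): a semantic
# network as a directed graph whose edges carry relation labels.
class SemanticNetwork:
    def __init__(self):
        self.edges = {}  # (source concept, target concept) -> relation label

    def add_relation(self, concept_a, relation, concept_b):
        self.edges[(concept_a, concept_b)] = relation

    def relations_of(self, concept):
        """Return all (relation, other concept) pairs touching a concept."""
        out = []
        for (a, b), rel in self.edges.items():
            if a == concept:
                out.append((rel, b))
            elif b == concept:
                out.append((rel, a))
        return out

net = SemanticNetwork()
net.add_relation("Ethiopia", "part-of", "Africa")
net.add_relation("Ethiopia", "type-of", "country")
```

Querying `net.relations_of("Ethiopia")` then retrieves both labeled edges, which is the retrieval behavior the paper attributes to semantic networks.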
{
"text": "This paper presents the automatic construction of semantic networks from unconstrained and unstructured Amharic text. The remainder of this paper is organized as follows. Section 2 presents the Amharic language with emphasis on its morphological features. The design of Amharic semantic network construction is discussed in Section 3. Experimental results are presented in Section 4, and conclusions and future work are highlighted in Section 5. References are provided at the end.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Amharic is a Semitic language spoken predominantly in Ethiopia. It is the working language of the country, which has a population of over 90 million at present. The language is spoken as a mother tongue by a large segment of the population in the northern and central regions of Ethiopia, and as a second language by many others. It is the second most widely spoken Semitic language in the world next to Arabic, and the most commonly learned second language throughout Ethiopia (Lewis et al, 2013). Amharic is written using a script known as fidel, which has 33 basic characters (consonants); from each basic character, six other characters representing combinations of the consonant with vowels are derived.",
"cite_spans": [
{
"start": 466,
"end": 485,
"text": "(Lewis et al, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Amharic Language",
"sec_num": "2"
},
{
"text": "Derivation and inflection of words in Amharic is a very complex process (Amare, 2010; Yimam, 2000). Amharic nouns and adjectives are inflected for number, gender, definiteness, and case. Amharic nouns can be derived from various lexical categories. Adjectives are also derived from:",
"cite_spans": [
{
"start": 72,
"end": 85,
"text": "(Amare, 2010;",
"ref_id": "BIBREF1"
},
{
"start": 86,
"end": 98,
"text": "Yimam, 2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Amharic Language",
"sec_num": "2"
},
{
"text": "\u2022 verbal roots by infixing vowels between consonants, e.g. \u1325\u1241\u122d (\u0143\u012dqur/black) from \u1325\u1245\u122d (\u0143qr); \u2022 nouns by suffixing bound morphemes, e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amharic Language",
"sec_num": "2"
},
{
"text": "\u1325\u1241\u122d (\u0143\u012dqur/black) from \u1325\u1245\u122d (\u0143qr); and \u2022 stems by suffixing bound morphemes, e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amharic Language",
"sec_num": "2"
},
{
"text": "\u12f0\u12ab\u121b (d\u00e4kama/weak) from \u12f0\u12ab\u121d-(dekam-). In addition, nouns and adjectives can be derived from compound words of various lexical categories. Amharic verb inflection is even more complex than that of nouns and adjectives as verbs are marked for any combination of person, gender, number, case, tense/aspect, and mood resulting in the synthesis of thousands of words from a single verbal root. With respect to the derivation process, several verbs in their surface forms are derived from a single verbal stem, and several stems are derived from a single verbal root. For example, from the verbal root \u1235\u1265\u122d (sbr/to break), we can derive verbal stems such as \u1230\u1265\u122d (s\u00e4br), \u1230\u1260\u122d (s\u00e4b\u00e4r), \u1233\u1265\u122d (sabr), \u1230\u1263\u1265\u122d (s\u00e4babr), \u1270\u1230\u1263\u1265\u122d (t\u00e4s\u00e4babr), etc. and we can derive words such as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amharic Language",
"sec_num": "2"
},
{
"text": "\u1230\u1260\u1228\u12cd (s\u00e4b\u00e4r\u00e4w), \u1230\u1260\u122d\u12a9 (s\u00e4b\u00e4rku), \u1230\u1260\u1228\u127d (s\u00e4b\u00e4r\u00e4\u010d), \u1230\u1260\u122d\u1295 (s\u00e4b\u00e4rn), \u12a0\u1230\u1260\u1228 (ass\u00e4b\u00e4r\u00e4), \u1270\u1230\u1260\u1228 (t\u00e4s\u00e4b\u00e4r\u00e4), \u12a0\u120d\u1230\u1260\u1228\u121d (als\u00e4b\u00e4r\u00e4m), \u1232\u1230\u1260\u122d (sis\u00e4b\u00e4r), \u1233\u12ed\u1230\u1260\u122d (says\u00e4b\u00e4r), \u12ab\u120d\u1270\u1230\u1260\u1228 (kalt\u00e4s\u00e4b\u00e4r\u00e4), \u12e8\u121a\u1230\u1260\u122d (y\u00e4mis\u00e4b\u00e4r), etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amharic Language",
"sec_num": "2"
},
{
"text": "This allows a single word to represent a complete sentence constructed with subject, verb and object. Because of such morphological complexities, many Amharic natural language processing applications require a stemmer or morphological analyser as a key component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amharic Language",
"sec_num": "2"
},
{
"text": "The model proposed to construct Amharic semantic networks has the following major components: Amharic WordNet, text analysis and indexing, computing term vectors, concept extraction, and relation extraction. First, index terms representing the text corpus are extracted. Term vectors are then computed from the index file and stored using the WordSpace model. By searching the WordSpace, semantically related concepts are extracted for a given synset in the Amharic WordNet. Finally, relations between those concepts are extracted from the intervening word patterns in the corpus using pairs of concepts from Amharic WordNet. Process relationships between these components are shown in Figure 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 677,
"end": 685,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Proposed Semantic Network Model",
"sec_num": "3"
},
{
"text": "To automatically construct semantic networks from a free text corpus, we need some initial knowledge for the system so that other unknown relation instances can be extracted. Accordingly, we manually constructed Amharic WordNet as a small knowledge base in which the basic relation between terms is \"synonymy\". Amharic WordNet is composed of 890 single-word terms (all nouns) grouped into 296 synsets (synonym groups), and these synsets are representations of the concepts of the terms in each group. We chose noun concepts because most relation types are detected between nouns; verbs and adverbs are relation indicators which are used to show relations between nouns. Synsets are further related with each other by three other relations called \"type-of\", \"part-of\" and \"antonym\". The Amharic WordNet is then used to set different seeds for a specific relation. Once we prepare sets of seeds from the WordNet, we can extract the patterns which indicate how these pairs of seeds occur in the corpus. The way these pairs of concepts occur in the corpus can tell us more about other concept pairs in the corpus. For example, the way the pair of terms {\u12a2\u1275\u12ee\u1335\u12eb/Ethiopia, \u12a0\u134d\u122a\u12ab/Africa} occurs in the corpus can tell us that the pair of terms {\u12ac\u1295\u12eb/Kenya, \u12a0\u134d\u122a\u12ab/Africa} can occur in the same way as the former pair. The patterns extracted between the pair of terms {\u12a2\u1275\u12ee\u1335\u12eb/Ethiopia, \u12a0\u134d\u122a\u12ab/Africa} can be used to extract the relation between other countries like \u12ac\u1295\u12eb/Kenya and \u12a0\u134d\u122a\u12ab/Africa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amharic WordNet",
"sec_num": "3.1"
},
{
"text": "The process of text analysis starts with the removal of non-letter tokens and stopwords from the corpus. This is followed by stemming of words, where several words derived from the same morpheme are considered in further steps as the same token. Since Amharic is a morphologically complex language, the process of finding the stem, which is the last unchangeable morpheme of the word, is a difficult task. We used a modified version of the stemmer algorithm developed by Alemayehu and Willet (2002) which removes suffixes and prefixes iteratively by employing minimum stem length and context-sensitive rules. The stem is used as a term for indexing, which is performed by applying the term frequency-inverse document frequency weighting algorithm.",
"cite_spans": [
{
"start": 462,
"end": 489,
"text": "Alemayehu and Willet (2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Analysis and Indexing",
"sec_num": "3.2"
},
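As a rough illustration of the indexing step, the sketch below computes term frequency-inverse document frequency weights over a toy corpus. It assumes whitespace tokens stand in for stemmed Amharic terms (the paper uses a rule-based stemmer and Apache Lucene, not this code):

```python
import math

# Toy tf-idf indexer (illustrative; tokens stand in for stemmed terms).
def tf_idf_index(documents):
    """Return, per document, a dict mapping each term to its tf-idf weight."""
    n_docs = len(documents)
    df = {}  # document frequency of each term
    for doc in documents:
        for term in set(doc.split()):
            df[term] = df.get(term, 0) + 1
    index = []
    for doc in documents:
        tokens = doc.split()
        weights = {}
        for term in set(tokens):
            tf = tokens.count(term) / len(tokens)      # normalized term freq.
            idf = math.log(n_docs / df[term])          # inverse document freq.
            weights[term] = tf * idf
        index.append(weights)
    return index

docs = ["ethiopia africa news", "kenya africa news", "ethiopia economy"]
index = tf_idf_index(docs)
```

Terms that occur in every document receive weight zero, while rarer terms are weighted up, which is the usual motivation for tf-idf term selection.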
{
"text": "A term vector is a sequence of term-weight pairs. The weight of a term in our case is the co-occurrence frequency of the term with other terms in a document. From the index file, it is possible to map the index to a term-context (term-document) matrix where the values of the cells of the matrix are the weighted frequencies of terms in the context (document). The WordSpace model is used to create term vectors semantically from this matrix by reducing the dimension of the matrix using the random projection algorithm (Fern and Brodley, 2003). At the end, the WordSpace contains the list of term vectors found from the corpus along with the co-occurrence frequencies of each term. The algorithm used to compute term vectors is shown in Figure 2.",
"cite_spans": [
{
"start": 485,
"end": 509,
"text": "(Fern and Brodley, 2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 700,
"end": 708,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Computing Term Vectors",
"sec_num": "3.3"
},
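The dimension-reduction idea can be sketched as follows: each row of the term-document matrix is multiplied by a random +1/-1 matrix to obtain a shorter vector. This is a minimal sketch of random projection in general; the Semantic Vectors package the authors used implements its own (sparse) variant:

```python
import random

# Sketch of random projection (assumption: a small dense term-document
# matrix; real systems use sparse random index vectors).
def random_projection(matrix, target_dim, seed=0):
    """Project each row of `matrix` into `target_dim` dimensions."""
    rng = random.Random(seed)
    source_dim = len(matrix[0])
    # Random +1/-1 projection matrix: source_dim rows x target_dim columns.
    proj = [[rng.choice((-1, 1)) for _ in range(target_dim)]
            for _ in range(source_dim)]
    reduced = []
    for row in matrix:
        reduced.append([sum(row[i] * proj[i][j] for i in range(source_dim))
                        for j in range(target_dim)])
    return reduced

term_doc = [[2, 0, 1], [0, 3, 1], [1, 1, 0]]   # 3 terms x 3 documents
vectors = random_projection(term_doc, target_dim=2)
```

The projection roughly preserves relative distances between rows while shrinking the document dimension, which is what makes the reduced vectors usable for similarity search.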
{
"text": "Semantically related concepts for a seed term of Amharic WordNet are extracted from the WordSpace model, which is used to create a collection of term vectors. Each term vector contains different related words along with their co-occurrence frequencies. For a concept from Amharic WordNet given as input to the WordSpace, related concepts are extracted by computing the cosine similarity between the term vector containing this concept and the remaining term vectors of the WordSpace model. For each term vector TVi in the WordSpace model and a term vector TVx that corresponds to the synset, the cosine similarity C is computed as the dot product of the two vectors divided by the product of their magnitudes, i.e., C = (TVx \u00b7 TVi) / (|TVx| |TVi|), for i = 1, ..., n,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept Extraction",
"sec_num": "3.4"
},
{
"text": "where n is the number of term vectors in the WordSpace model. Since the WordSpace contains a large number of term vectors, we rank related terms by their cosine values in decreasing order and select the top-k related concepts for the given synset, where k is a threshold used to determine the number of related concepts to be extracted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept Extraction",
"sec_num": "3.4"
},
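The similarity-and-ranking step can be sketched directly: compute the cosine of the seed's vector against every stored term vector and keep the top-k. The terms and vector values below are made up for illustration:

```python
import math

# Sketch of cosine ranking over term vectors (illustrative names/values).
def cosine(u, v):
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k_related(seed_vector, term_vectors, k):
    """Rank terms by cosine with the seed vector and return the top k."""
    ranked = sorted(term_vectors.items(),
                    key=lambda item: cosine(seed_vector, item[1]),
                    reverse=True)
    return [term for term, _ in ranked[:k]]

term_vectors = {"kenya": [1.0, 0.9], "africa": [1.0, 1.0], "economy": [0.1, -0.5]}
related = top_k_related([1.0, 1.0], term_vectors, k=2)
```

Here k plays the role of the threshold mentioned above: it caps how many related concepts are passed on to relation extraction.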
{
"text": "The relations among concepts considered in this work are \"part-of\" and \"type-of\". We use a semi-supervised approach to extract relations, where a very small number of seed instances or patterns from Amharic WordNet are used for bootstrap learning. These seeds are used with a large corpus to extract a new set of patterns, which in turn are used to extract more instances in an iterative fashion. In general, using Amharic WordNet entries, intervening word patterns for a specific relation are extracted from the corpus. For each pair of concepts (C1, C2) whose relationship is contained in Amharic WordNet, we send the query \"C1\" + \"C2\" to the corpus. The returned text snapshot is processed to extract all n-grams (where n is set empirically to be 2 \u2264 n \u2264 7) that match the pattern \"C1 X* C2\", where X can be any combination of up to five space-separated word or punctuation tokens. Thus, \"C1 X* C2\" is a pattern extracted from the corpus using a concept pair (C1, C2) of a specific relation from Amharic WordNet. For instance, assume the Amharic WordNet contains the concepts \"\u12a2\u1275\u12ee\u1335\u12eb (ityoPya/Ethiopia)\" and \"\u12a0\u121b\u122b (amara/Amhara)\" with \"\u12a2\u1275\u12ee\u1335\u12eb/Ethiopia\" being a hypernym of \"\u12a0\u121b\u122b/Amhara\". The method would query the corpus with the string \"\u12a2\u1275\u12ee\u1335\u12eb/Ethiopia\" + \"\u12a0\u121b\u122b/Amhara\". Let us assume that one of the returned text snapshots is \"\u2026\u1260\u12a2\u1275\u12ee\u1335\u12eb \u12a8\u121a\u1308\u1299 \u12ad\u120d\u120e\u127d \u1218\u12ab\u12a8\u120d \u12a0\u121b\u122b \u12a0\u1295\u12f1 \u1232\u1206\u1295\u2026 (\u2026b\u00e4'ityoPya k\u00e4mig\u00e4\u00f1u k\u012dl\u012dlo\u010d m\u00e4kak\u00e4l amara andu sihon...)\". In this case, the method would extract the pattern \"...\u1260\u12a2\u1275\u12ee\u1335\u12eb \u12a8\u121a\u1308\u1299 \u12ad\u120d\u120e\u127d \u1218\u12ab\u12a8\u120d \u12a0\u121b\u122b... (...b\u00e4'ityoPya k\u00e4mig\u00e4\u00f1u k\u012dl\u012dlo\u010d m\u00e4kak\u00e4l amara...)\". This pattern would be added to the list of potential hypernymy patterns with \"\u12a2\u1275\u12ee\u1335\u12eb/Ethiopia\" and \"\u12a0\u121b\u122b/Amhara\" substituted with matching placeholders, like \"var1 \u12a8\u121a\u1308\u1299 \u12ad\u120d\u120e\u127d \u1218\u12ab\u12a8\u120d (k\u00e4mig\u00e4\u00f1u k\u012dl\u012dlo\u010d m\u00e4kak\u00e4l) var2\". Once the patterns are extracted, the final step is to detect whether there is a relation between every pair of concepts extracted from the WordSpace. If a relation between a pair of concepts is detected, the concept pair is added to the network, in which each concept is a node and the link is the relation between the concepts. Figure 3 shows the algorithm used to extract relations between concepts.",
"cite_spans": [],
"ref_spans": [
{
"start": 2128,
"end": 2136,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "3.5"
},
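The pattern-extraction step just described can be sketched as a token scan: find occurrences of the two seed concepts with at most five intervening tokens, and generalize the intervening span with var1/var2 placeholders. English stand-ins replace the Amharic seeds here, and the function name is illustrative:

```python
# Sketch of "C1 X* C2" pattern extraction (illustrative; English stand-ins
# for the Amharic seed concepts, simple whitespace tokenization).
def extract_patterns(c1, c2, sentences, max_gap=5):
    """Return generalized patterns for snippets where c1 precedes c2
    with at most max_gap intervening tokens."""
    patterns = []
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok != c1:
                continue
            # Look ahead at most max_gap tokens for the second concept.
            for j in range(i + 1, min(i + 1 + max_gap + 1, len(tokens))):
                if tokens[j] == c2:
                    gap = " ".join(tokens[i + 1:j])
                    patterns.append(f"var1 {gap} var2")
    return patterns

corpus = ["Ethiopia is a country in Africa", "Ethiopia borders Kenya"]
patterns = extract_patterns("Ethiopia", "Africa", corpus)
```

Each returned pattern can then be re-instantiated with other concept pairs (e.g. Kenya/Africa) and counted in the corpus, which is the bootstrapping loop the section describes.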
{
"text": "The corpus is composed of domain-independent, unconstrained and unstructured text data. It contains two groups of text. The first group is a collection of news text documents gathered by Walta Information Center (1064 news items), and all news items are tagged with part-of-speech categories. This group of the dataset was used for the extraction of concepts in the corpus. The second group was collected from the Ethiopian National News Agency (3261 news items). This dataset group was used for computing the frequency of concepts that are extracted from the first tagged dataset. Thus, a total of 4325 Amharic news documents were collected to build the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Collection",
"sec_num": "4.1"
},
{
"text": "The proposed model was implemented by creating the WordSpace from the index file which is mapped to term-document matrix. We used Apache Lucene and Semantic Vectors APIs for indexing and development of the WordSpace model, respectively. Concept and relation extraction processes were also implemented using Java.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.2"
},
{
"text": "We coined the name AMSNet for the semantic networks automatically constructed by our system from Amharic text. AMSNet consists of a set of concepts and a set of important relationships called \"synonym\", \"part-of\" and \"type-of\". It holds entries in the form of first-order predicate calculus, in which the predicate is the relation and the arguments are concepts. AMSNet acquires new concepts over time and connects each new concept to a subset of the concepts within an existing neighborhood whenever a new text document is processed by the system. The growing network is not intended to be a complete model of semantic development, but contains specific relations that can be extracted and connected between concepts of the given corpus. Semantic networks not only represent information but also facilitate the retrieval of relevant facts. For instance, all the facts related to the concept \"\u12a2\u1275\u12ee\u1335\u12eb/Ethiopia\" are stored with pointers directed to the node representing \"\u12a2\u1275\u12ee\u1335\u12eb/Ethiopia\". Another example concerns the inheritance of properties. Given a fact such as \"\u12a0\u1308\u122d \u1201\u1209 \u1218\u1295\u130d\u1235\u1275 \u12a0\u1208\u12cd (ag\u00e4r hulu m\u00e4ng\u012dst al\u00e4w/each country has a government)\", the system would automatically conclude that \"\u12a2\u1275\u12ee\u1335\u12eb \u1218\u1295\u130d\u1235\u1275 \u12a0\u120b\u1275 (ityoPya m\u00e4ng\u012dst alat/Ethiopia has a government)\", given that \u12a2\u1275\u12ee\u1335\u12eb \u12a0\u1308\u122d \u1293\u1275 (ityoPya ag\u00e4r nat/Ethiopia is a country).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AMSNet",
"sec_num": "4.3"
},
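The inheritance example above amounts to one inference rule over predicate-style facts. A minimal sketch, with facts as tuples and a single hard-coded rule (all names illustrative, not AMSNet's actual storage format):

```python
# Sketch of property inheritance over predicate-calculus-style facts:
# if X is a type of Y and Y has Z, conclude that X has Z.
facts = {
    ("type-of", "Ethiopia", "country"),   # Ethiopia is a country
    ("has", "country", "government"),     # every country has a government
}

def infer_has(entity, facts):
    """Derive ("has", entity, Z) facts via the type-of edge."""
    derived = set()
    for (pred1, a, b) in facts:
        if pred1 == "type-of" and a == entity:
            for (pred2, c, d) in facts:
                if pred2 == "has" and c == b:
                    derived.add(("has", entity, d))
    return derived

new_facts = infer_has("Ethiopia", facts)
```

This mirrors the paper's example: from "Ethiopia is a country" and "each country has a government", the derived fact is "Ethiopia has a government".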
{
"text": "There is no gold standard to evaluate the result of semantic network construction. Our result was validated manually by linguists, and based on their evaluations the average accuracy of the system in extracting the \"type-of\" and \"part-of\" relations between concepts (synsets) from the free text corpus is 68.5% and 71.7%, respectively. A sample result generated by our system is shown in Figure 4. Figure 4. Part of the Amharic semantic network automatically constructed by the proposed system.",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 392,
"text": "Figure 4",
"ref_id": null
},
{
"start": 395,
"end": 403,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test Results",
"sec_num": "4.4"
},
{
"text": "A major effort was made in identifying and defining a formal set of steps for the automatic construction of a semantic network of Amharic noun concepts from a free text corpus. The construction model of our semantic network involves creating an index file for the collected news text corpus, developing a WordSpace based on the index file, searching the WordSpace to generate semantically related concepts for a given Amharic WordNet term, generating patterns for a specific relation using entries of Amharic WordNet, and detecting relations between each pair of concepts among the related concepts using those patterns. The availability of Amharic semantic networks helps other Amharic NLP applications such as information retrieval, document classification, and machine translation improve their performance. Future work includes deep morphological analysis of Amharic and the use of hybrid approaches to improve the performance of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Works",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Stemming of Amharic Words for Information Retrieval",
"authors": [
{
"first": "Nega",
"middle": [],
"last": "Alemayehu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Willet",
"suffix": ""
}
],
"year": 2002,
"venue": "Literary and Linguistic Computing",
"volume": "17",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nega Alemayehu and Peter Willet. 2002. Stemming of Amharic Words for Information Retrieval, In Literary and Linguistic Computing, Vol 17, Issue 1, pp. 1-17.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "\u12d8\u1218\u1293\u12ca \u12e8\u12a0\u121b\u122d\u129b \u1230\u12cb\u1235\u12cd \u1260\u1240\u120b\u120d \u12a0\u1240\u122b\u1228\u1265 (Modern Amharic Grammar in a Simple Approach)",
"authors": [
{
"first": "Getahun",
"middle": [],
"last": "Amare",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Getahun Amare. 2010. \u12d8\u1218\u1293\u12ca \u12e8\u12a0\u121b\u122d\u129b \u1230\u12cb\u1235\u12cd \u1260\u1240\u120b\u120d \u12a0\u1240\u122b\u1228\u1265 (Modern Amharic Grammar in a Simple Approach). Addis Ababa, Ethiopia.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Semantic Web",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Berners-Lee",
"suffix": ""
}
],
"year": 2001,
"venue": "Scientific American",
"volume": "284",
"issue": "5",
"pages": "34--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Berners-Lee. 2001. The Semantic Web, Scientific American , Vol 284, Issue 5, pp. 34-43.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Semantic Vectors Package: New Algorithms and Public Tools for Distributional Semantics",
"authors": [
{
"first": "Widdows",
"middle": [],
"last": "Dominic",
"suffix": ""
},
{
"first": "Cohen",
"middle": [],
"last": "Trevor",
"suffix": ""
}
],
"year": 2010,
"venue": "Fourth IEEE International Conference on Semantic Computing (IEEE ICSC2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Widdows Dominic and Cohen Trevor. 2010. The Semantic Vectors Package: New Algorithms and Public Tools for Distributional Semantics, In Fourth IEEE International Conference on Semantic Computing (IEEE ICSC2010). Carnegie Mellon University, Pittsburgh, PA, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Cambridge, MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Text Relatedness Based on a Word Thesaurus",
"authors": [
{
"first": "George",
"middle": [],
"last": "Tsatsaronis",
"suffix": ""
},
{
"first": "Iraklis",
"middle": [],
"last": "Varlamis",
"suffix": ""
},
{
"first": "Michalis",
"middle": [],
"last": "Vazirgiannis",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "1--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsatsaronis George, Iraklis Varlamis and Michalis Vazirgiannis. 2010. Text Relatedness Based on a Word Thesaurus, Journal of Artificial Intelligence Research, vol. 37, pp. 1-39.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "ASKNet: Automated Semantic Knowledge Network",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Harrington",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. 22nd National Conf. on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "889--884",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Harrington and Stephen Clark. 2007. ASKNet: Automated Semantic Knowledge Network, In Proc. 22nd National Conf. on Artificial Intelligence, Vancouver, Canada. pp. 889-884.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "MindNet: Acquiring and structuring semantic information from text",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th COLING",
"volume": "",
"issue": "",
"pages": "1098--1102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Richardson, William Dolan and Lucy Vanderwende. 1998. MindNet: Acquiring and structuring semantic information from text, In Proceedings of the 17th COLING, Montreal, Canada. pp. 1098-1102.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector space",
"authors": [
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magnus Sahlgren. 2006. The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector space. PhD Thesis, Stockholm University, Sweden.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic Extraction of Semantic Networks from Text using Leximancer",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Smith. 2003. Automatic Extraction of Semantic Networks from Text using Leximancer, In Proceedings of HLT-NAACL, Edmonton.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2005,
"venue": "Cognitive Science",
"volume": "29",
"issue": "1",
"pages": "41--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steyvers and Joshua Tenenbaum. 2005. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth, Cognitive Science, Vol 29, Issue 1, pp. 41-78.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Random Projection for High Dimensional Data Clustering: A Cluster Ensemble Approach",
"authors": [
{
"first": "Xiaoli",
"middle": [],
"last": "Fern",
"suffix": ""
},
{
"first": "Carla",
"middle": [],
"last": "Brodley",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the 20th Int. Conf. on Machine Learning (ICML-2003), Washington",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoli Fern and Carla Brodley. 2003. Random Projection for High Dimensional Data Clustering: A Cluster Ensemble Approach, In Proc. of the 20th Int. Conf. on Machine Learning (ICML-2003), Washington, DC.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "\u12e8\u12a0\u121b\u122d\u129b \u1230\u12cb\u1235\u12cd (Amharic Grammar)",
"authors": [
{
"first": "Baye",
"middle": [],
"last": "Yimam",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baye Yimam. 2000. \u12e8\u12a0\u121b\u122d\u129b \u1230\u12cb\u1235\u12cd (Amharic Grammar). Addis Ababa, Ethiopia.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "System architecture of the proposed Amharic semantic network.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Algorithm for computing term vectors.",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "Algorithm for Relation Extraction.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "1. For each pair (A,B) in CONPAIR:\n    For each pattern PAT in PATLIST:\n        NEWPAT = A + PAT + B\n        MODPAT = MODPAT + NEWPAT\n2. For each phrase PHRASE in MODPAT:\n    COUNT = 0\n    A = PHRASE[0]\n    B = PHRASE[size(PHRASE)-1]\n    For each file FILE in CORPUS:\n        For each sentence SENTENCE in FILE:\n            If PHRASE exists in SENTENCE:\n                COUNT = COUNT + 1\n    If COUNT >= THRESHOLD:\n        Add pair (A,B) to SEMNET\n        Break\n3. Return SEMNET",
"uris": null,
"type_str": "figure"
}
}
}
}