{
"paper_id": "2019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:30:07.202459Z"
},
"title": "Language Modeling with NMT Query Translation for Amharic-Arabic Cross-Language Information Retrieval",
"authors": [
{
"first": "Ibrahim",
"middle": [],
"last": "Gashaw",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mangalore University Mangalagangotri",
"location": {
"settlement": "Mangalore-574199"
}
},
"email": "ibrahimug1@gmail.com"
},
{
"first": "H",
"middle": [
"L"
],
"last": "Shashirekha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mangalore University Mangalagangotri",
"location": {
"settlement": "Mangalore-574199"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our first experiment on Neural Machine Translation (NMT) based query translation for Amharic-Arabic Cross-Language Information Retrieval (CLIR) task to retrieve relevant documents from Amharic and Arabic text collections in response to a query expressed in the Amharic language. We used a pretrained NMT model to map a query in the source language into an equivalent query in the target language. The relevant documents are then retrieved using a Language Modeling (LM) based retrieval algorithm. Experiments are conducted on four conventional IR models, namely Uni-gram and Bi-gram LM, Probabilistic model, and Vector Space Model (VSM). The results obtained illustrate that the proposed Unigram LM outperforms all other models for both Amharic and Arabic language document collections.",
"pdf_parse": {
"paper_id": "2019",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our first experiment on Neural Machine Translation (NMT) based query translation for Amharic-Arabic Cross-Language Information Retrieval (CLIR) task to retrieve relevant documents from Amharic and Arabic text collections in response to a query expressed in the Amharic language. We used a pretrained NMT model to map a query in the source language into an equivalent query in the target language. The relevant documents are then retrieved using a Language Modeling (LM) based retrieval algorithm. Experiments are conducted on four conventional IR models, namely Uni-gram and Bi-gram LM, Probabilistic model, and Vector Space Model (VSM). The results obtained illustrate that the proposed Unigram LM outperforms all other models for both Amharic and Arabic language document collections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information Retrieval (IR) is the activity of retrieving documents relevant to information seekers from a collection of information resources such as text, images, videos, scanned documents, audio, and music. These resources can be structured, indexed, and navigated through Language Technology (LT), which includes computational methods specialized for analyzing, producing, modifying, and translating text and speech (Madankar et al., 2016) . The increasing need to retrieve multilingual documents in response to a query in any language has opened up a new branch of IR called Cross-Language Information Retrieval (CLIR). Its goal is to accept a query in one language, transform it into a searchable format, and provide an interface that allows users to search and retrieve information in different languages according to their information need (Sourabh, 2013) .",
"cite_spans": [
{
"start": 436,
"end": 459,
"text": "(Madankar et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 862,
"end": 877,
"text": "(Sourabh, 2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Amharic language is the official language of Ethiopia, spoken as a mother tongue by 26.9% of Ethiopia's population and also spoken by many people in Israel, Egypt, and Sweden. Arabic is a natural language spoken by 250 million people in 21 countries as a first language and serving as a second language in some Islamic countries. Ethiopia is one of the nations in which more than 33.3% of the population follows Islam, and these communities use the Arabic language for religious teaching and for communication. The Amharic and Arabic languages belong to the Semitic family of languages, in which words are formed by modifying the root itself internally and not simply by concatenating affixes to word roots (Shashirekha and Gashaw, 2016) .",
"cite_spans": [
{
"start": 724,
"end": 754,
"text": "(Shashirekha and Gashaw, 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nowadays, NMT is widely used to solve CLIR problems for many language pairs. However, much of the research in this area has focused on European languages, which are already very rich in resources. This study therefore aims to develop an NMT query translation based Amharic-Arabic CLIR system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An essential part of CLIR is mapping between the query and the document collection, either by translating the query into the language of the documents or by translating the documents into the language of the query. We follow the first approach and translate the query words using a pre-trained NMT model. For this translation, we constructed a small parallel text corpus by modifying the existing monolingual Arabic text corpus and its equivalent Amharic translation available in Tanzil (Tiedemann, 2012) , as Amharic-Arabic parallel text corpora are not available for the MT task.",
"cite_spans": [
{
"start": 495,
"end": 512,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. CLIR approaches are discussed in Section 2. Related works are reviewed in Section 3. The proposed CLIR approach based on LM is described in Section 4. Resources and configurations of the experiments for evaluating the system, and the results, are detailed in Section 5, followed by the conclusion in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In CLIR, the query and the document collection need to be mapped into a common representation to enable users to search and retrieve relevant documents across language boundaries (Tune, 2015) . Based on the resources used to map the query and the documents in different languages, CLIR approaches can be categorized as the Dictionary-based approach, Latent Semantic Indexing (LSI), the Machine Translation (MT) approach, and the Probabilistic-based approach (Raju et al., 2014).",
"cite_spans": [
{
"start": 184,
"end": 196,
"text": "(Tune, 2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR Approaches",
"sec_num": "2"
},
{
"text": "Dictionary-based approaches use automatically constructed bilingual Machine Readable Dictionaries (MRD), bilingual word lists, or other lexical resources to translate the query terms into their target language equivalents. This approach offers a relatively cheap and easily applicable solution for large-scale document collections. However, due to Out-of-Vocabulary (OOV) terms, some words in a query may not be translated. Further, linguistic phenomena such as polysemy and homonymy may introduce ambiguity in the translation of words (Shashirekha and Gashaw, 2016) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary-based approaches",
"sec_num": "2.1"
},
{
"text": "In the LSI approach, the documents of the source language are represented in a language-independent LSI space. Similarly, a user query can be treated as a pseudo-document and represented as a vector in the same LSI space. Even though the performance of the LSI model is on par with the traditional vector space model, the cost of computing the Singular Value Decomposition (SVD) of very large collections is high, and the model struggles to separate the different meanings of ambiguous terms according to their contexts of use (Nie, 2010) .",
"cite_spans": [
{
"start": 526,
"end": 537,
"text": "(Nie, 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSI approach",
"sec_num": "2.2"
},
{
"text": "MT is the process of obtaining a target language text for a given source language text using automatic techniques. MT can be used to translate the query, the documents, or both into the same language, and the retrieval process can then be treated like a conventional IR task. However, MT systems require time and resources to develop and are still not widely or readily available for many language pairs (Madankar et al., 2016) .",
"cite_spans": [
{
"start": 414,
"end": 437,
"text": "(Madankar et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation approach",
"sec_num": "2.3"
},
{
"text": "Probabilistic-based approaches include corpus-based methods, which translate queries, and language modeling, which avoids query translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic-based approaches",
"sec_num": "2.4"
},
{
"text": "Corpus-based approaches use multilingual corpora, which can be parallel corpora or comparable corpora. In this approach, queries are translated on the basis of multilingual terms extracted from parallel or comparable document collections. While parallel corpora contain translation-equivalent texts that are direct translations of the same documents in different languages, comparable corpora contain texts on the same subject that are neither aligned nor direct translations of each other but are composed independently in their respective languages (Tesfaye, 2010) . Such corpora are available only for a few languages and are expensive to construct.",
"cite_spans": [
{
"start": 552,
"end": 567,
"text": "(Tesfaye, 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus-based methods",
"sec_num": "2.4.1"
},
{
"text": "A language model is a probability distribution over all possible sentences or other linguistic units in a language. While the classification of LMs is not exhaustive, and a specific language model may belong to several types, LMs can be categorized as uniform, finite-state, grammar-based, n-gram, and Neural Language Models (NLM) (or continuous-space LMs), which may be feed-forward or recurrent (SWLG, 1997) . A uniform LM uses the same probability for all words of the vocabulary if the number of sentences is limited. In a finite-state LM, the set of legal word sequences is represented as a finite-state network (or regular grammar) whose edges stand for the words, which are assigned probabilities. Grammar-based LMs are based on variants of stochastic context-free grammars or other phrase structure grammars.",
"cite_spans": [
{
"start": 392,
"end": 404,
"text": "(SWLG, 1997)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language modeling approaches",
"sec_num": "2.4.2"
},
{
"text": "Data scarcity is a significant problem in building language models, as most possible word sequences will not be observed in training. One solution to this problem is continuous representations, or embeddings of words, which help to alleviate the curse of dimensionality in LM. The main application of LM is to estimate the distribution of various natural language phenomena for language technologies such as speech recognition, machine translation, document classification and routing, optical character recognition, information retrieval, handwriting recognition, and spelling correction (Kim et al., 2016) . Over-fitting (modeling random error or noise instead of the underlying relationship, indicated by a test error larger than the training error) is the main limitation of current LMs on small datasets (Jozefowicz et al., 2016) .",
"cite_spans": [
{
"start": 600,
"end": 618,
"text": "(Kim et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 812,
"end": 837,
"text": "(Jozefowicz et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language modeling approaches",
"sec_num": "2.4.2"
},
{
"text": "CLIR has been studied for many different language pairs. However, the only work reported on the Amharic-Arabic language pair is \"Dictionary Based Amharic-Arabic Cross-Language Information Retrieval System\" (Shashirekha and Gashaw, 2016). Its performance was affected by incorrect translations due to out-of-dictionary words and unnormalized Arabic words; specifically, words with diacritics were not mapped to the dictionary entries, and the queries had to be formulated by selecting words available in the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "3"
},
{
"text": "Some of the prominent works on Amharic and Arabic paired with other languages are discussed below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "3"
},
{
"text": "The bilingual Amharic-English Search Engine (Munye and Atnafu, 2012) suffers from limited word coverage, despite using a large commercial bilingual dictionary and an on-line bilingual dictionary for query translation, as well as from its small data size. The system performs well only on query terms that are available in the dictionary. The lack of electronic resources such as morphological analyzers and large MRDs forced A. Argaw (2005) to spend considerable time developing those resources themselves.",
"cite_spans": [
{
"start": 43,
"end": 67,
"text": "(Munye and Atnafu, 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "3"
},
{
"text": "Solving the problem of word sense disambiguation will enhance the effectiveness of CLIR systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "3"
},
{
"text": "Andres Duque et al. (2015) studied how to choose the best dictionary for Cross-Lingual Word Sense Disambiguation (CLWSD); their work focuses only on English-Spanish cross-lingual disambiguation, and the disambiguation task depends on the coverage of the dictionary and the corpus size. A query suggestion approach that exploits query logs and document collections by mapping French input queries to English queries in the query log of a search engine by W. Gao et al. (2007) showed a strong correspondence between the French input queries and the English queries in the log, but other language pairs, for example English and Amharic, may be more loosely correlated. M. Al-shuaili and M. Garvalho (2016) proposed a technique to map characters automatically from different languages into English, without human interference or prior knowledge of the language. While this mapping helps transliterations of OOV names to have the same or at least very similar pronunciations in any language, word structure and writing direction add complexity to character mapping, and the originality of the names also affects the result of the character mapping.",
"cite_spans": [
{
"start": 7,
"end": 26,
"text": "Duque et al. (2015)",
"ref_id": "BIBREF3"
},
{
"start": 464,
"end": 481,
"text": "Gao et al. (2007)",
"ref_id": "BIBREF5"
},
{
"start": 663,
"end": 695,
"text": "Al-shuaili and M.Garvalho (2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "3"
},
{
"text": "In the corpus-based CLIR system for the Amharic-English language pair (Tesfaye and Scannell, 2012) , the size and quality of the constructed document collection highly affected the performance of the system. Nigussie Eyob (2013) developed a corpus-based Afaan Oromo-Amharic CLIR system to enable Afaan Oromo speakers to retrieve Amharic information using Afaan Oromo queries. The scarcity of aligned corpora creates a translation disambiguation problem, and the dictionary is limited to translating words only.",
"cite_spans": [
{
"start": 71,
"end": 99,
"text": "(Tesfaye and Scannell, 2012)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "3"
},
{
"text": "F. T\u00fcre et al. (2012) explore combination-of-evidence techniques for CLIR using three types of statistical translation models: context-independent token translation, token translation using phrase-dependent contexts, and token translation using sentence-dependent contexts. Experiments on the retrieval of Arabic, Chinese, and French documents using English queries show that no one technique is optimal for all queries, but statistically significant improvements in Mean Average Precision (MAP) over strong baselines can be achieved by combining translation evidence from all three techniques.",
"cite_spans": [
{
"start": 3,
"end": 21,
"text": "T\u00fcre et al. (2012)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "3"
},
{
"text": "In all the above-mentioned cases, the key element is the mechanism to map between languages, which can be encoded in different forms: as a data structure of query- and document-language term correspondences in an MRD, or as an algorithm such as an MT or machine transliteration system. Nowadays, the direction of CLIR is towards neural approaches. Qing Liu (2018) proposed a neural approach to English-Chinese CLIR that consists of two parts: bilingual training data and a Kernel-based Neural Ranking Model (K-NRM). External sources of translation knowledge are used to generate bilingual training data, which is then fed into the kernel-based neural ranking model. The bilingual training approach outperforms traditional CLIR techniques given the same external translation knowledge sources. K-NRM learns translation relationships from the bilingual training data by capturing soft-matches from bilingual term pairs and combines the soft-matches into final scores with a set of bins. Kazuhiro Seki (2018) explores a neural network-based approach to compute similarities between English and Japanese texts, focusing on NMT models and examining the utility of an intermediate state. The intermediate state of the input texts is indeed beneficial for computing cross-lingual similarity, outperforming other approaches, including a strong machine translation baseline.",
"cite_spans": [
{
"start": 358,
"end": 368,
"text": "Liu (2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "3"
},
{
"text": "Many CLIR works related to neural approaches focus on neural ranking methods rather than directly using NMT for query translation. In this work, NMT based query translation is employed to map between the Amharic and Arabic languages using traditional IR ranking methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "3"
},
{
"text": "Traditional IR in a cross-language setting mainly involves measuring the similarity between the information need (query) in the source language and the collection of documents in both languages. In a CLIR environment, queries and documents are written in two different languages. In order to match terms between the two languages, a retrieval system needs to establish a mapping between words in the query vocabulary and words in the document vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Amharic-Arabic CLIR System",
"sec_num": "4"
},
{
"text": "Deep learning NMT is a recent approach to MT that produces high-quality translation results based on a massive amount of aligned parallel text corpora in both the source and target languages. Deep learning is part of a broader family of ML methods based on artificial neural networks (MemoQ, 2019). It allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have advanced the state of the art in language translation (LeCun et al., 2015) . NMT is an end-to-end deep learning approach to MT that uses a large artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model. The advantage of this approach is that a single system can be trained directly on the source and target text, no longer requiring the pipeline of specialized systems used in statistical MT. Many companies such as Google, Facebook, and Microsoft are already using NMT technology. NMT has recently shown promising results on multiple language pairs and is now widely used to solve translation problems for many languages. However, much of the research in this area has focused on European languages, which are already very rich in resources.",
"cite_spans": [
{
"start": 530,
"end": 550,
"text": "(LeCun et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Amharic-Arabic CLIR System",
"sec_num": "4"
},
{
"text": "Our research has focused on resolving query translation ambiguity. OpenNMT (Klein et al., 2017) , an open-source toolkit for NMT, is used to construct the Amharic-Arabic NMT model. The pre-trained model is used to translate Amharic text into the Arabic language. Once the query is translated into Arabic, standard IR algorithms can be used to retrieve the relevant documents from the Amharic and Arabic document collections. As shown in Figure 1 , preprocessing (tokenization, punctuation, and stop-word removal) is first applied to the Amharic and Arabic document collections. Then language models are built for both languages, which are used to estimate the query likelihood of a given query.",
"cite_spans": [
{
"start": 114,
"end": 134,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 486,
"end": 494,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Proposed Amharic-Arabic CLIR System",
"sec_num": "4"
},
{
"text": "The search module is used to input Amharic language queries and retrieve relevant documents in both languages. A sample screenshot of the proposed system displaying relevant documents as a list of hyperlinks for a sample user query is shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 253,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Amharic-Arabic CLIR System",
"sec_num": "4"
},
{
"text": "A sample Amharic text preprocessed by sentence splitting, word tokenization, and punctuation and stop-word removal is shown in Table 1 ; the same procedure is followed for the Arabic text.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Proposed Amharic-Arabic CLIR System",
"sec_num": "4"
},
{
"text": "A language model, which assigns a probability p(w|d) to the words w of each document d in the collection, is used to rank the documents according to the probability of generating the query. The query likelihood is given by P(q|d) = \u220f_{i=1}^{m} p(q_i|d). But this assigns zero probability to query words that do not occur in a given document. Therefore, the following technique, LM with Jelinek-Mercer smoothing (Zhai and Lafferty, 2017) , is used to optimize the likelihood of a given query, as shown in Equation 1.",
"cite_spans": [
{
"start": 432,
"end": 457,
"text": "(Zhai and Lafferty, 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Amharic-Arabic CLIR System",
"sec_num": "4"
},
{
"text": "prob(q_{t_i}) = \u220f_{i=1}^{n} [\u03bb \u00b7 p(q_{t_i}|m_d) + (1 \u2212 \u03bb) \u00b7 p(q_{t_i}|m_c)] (1) where, prob(q_{t_i})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Amharic-Arabic CLIR System",
"sec_num": "4"
},
{
"text": "is the probability of the query term at position i, m_d is the document language model, m_c is the collection language model, \u03bb is the smoothing parameter, and n is the length of the given query. After extensive experiments, \u03bb was set to 0.9999. A document that is more likely to generate the user query is considered more relevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Amharic-Arabic CLIR System",
"sec_num": "4"
},
{
"text": "To design, develop, and maintain an effective IR system, evaluation is crucial, as it allows measuring how successfully the system meets its goal of helping users fulfill their information needs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "There are two approaches for evaluating the effectiveness of IR systems: (i) user-based evaluation and (ii) system-based evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "In the system-based evaluation method, several human experts evaluate the system to prepare a set of data that can be reused in later experiments. The user-based evaluation method quantifies the satisfaction of users by monitoring the user's interactions with the system (Samimi and Ravana, 2014) . In this work, the focus is on system-oriented evaluation that focuses on measuring how well an IR system can rank the most relevant documents at the top for a given user query.",
"cite_spans": [
{
"start": 271,
"end": 296,
"text": "(Samimi and Ravana, 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "To evaluate the proposed Amharic-Arabic CLIR system, test collections (document corpus, search queries, and relevance judgments) have been prepared, as benchmark datasets are not available. Amharic is used as the source language to retrieve target language documents in Arabic as well as in Amharic. Experiments are conducted on four conventional IR models, namely Uni-gram and Bi-gram LM, the Probabilistic model, and VSM. The Unigram LM is the bag-of-words model, where the probability of each word depends only on that word's own probability in the document. The Bigram LM is the n-gram model with n = 2, where the probability of observing the i-th word w_i given its full context history is approximated by the probability of observing it given only the preceding word. The Probabilistic model estimates the probability that a document d_j is relevant to a query q, assuming that the probability of relevance depends on the query and document representations. VSM is an algebraic model for representing queries and documents as vectors of identifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Relevance judgments can be created using crowdsourcing (Maddalena et al., 2016; Efron, 2009; Ravana et al., 2015) , which is a time-consuming and expensive task. Therefore, we considered the topmost ranked documents and took the union of all intersections between Unigram and Bigram, Unigram and VSM, Bigram and Probability, and Probability and VSM. If the number of documents in this set is less than 10, it is augmented with the top-ranked documents of the Uni-gram model (Equation 3). As shown in Figure 3 , the documents (Amtext1.txt, Amtext43.txt, Amtext27.txt, Amtext39.txt, Amtext26.txt, Amtext41.txt, Amtext81.txt, Amtext67.txt, Amtext28.txt, Amtext34.txt) are selected as the top-ranked documents relevant for the query \"\u121d\u1235\u130b\u1293 \u1208\u12a0\u120b\u1205 \u12ed\u1308\u1263\u12cd \u12e8\u12d3\u1208\u121b\u1275 \u130c\u1273 \u1208\u12be\u1290\u12cd\" (All praise is due to Allah, Lord of the worlds). For evaluation, we configure our test collection as 75 Amharic search queries, 114 Arabic documents and their equivalent Amharic translations (each verse of the Quran is organized as a single document), and relevance judgments extracted using Equation 2. The description of this test collection is shown in Table 2 . The test collection and the parallel Amharic-Arabic text corpora used for translation will be provided on request.",
"cite_spans": [
{
"start": 54,
"end": 78,
"text": "(Maddalena et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 79,
"end": 91,
"text": "Efron, 2009)",
"ref_id": "BIBREF4"
},
{
"start": 94,
"end": 115,
"text": "(Ravana et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 482,
"end": 490,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 1101,
"end": 1108,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "X_{q_i} = (A \u2229 B) \u222a (A \u2229 D) \u222a (B \u2229 C) \u222a (C \u2229 D) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "R_D = { X_{q_i}, if |X_{q_i}| \u2265 10; X_{q_i} \u222a A, if |X_{q_i}| < 10 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "where, R_D is the list of relevant documents, X_{q_i} is the set of top-ranked relevant documents for query i computed using Equation 2, and A, B, C, and D are the sets of top-ranked documents retrieved by the Unigram, Bigram, Probability, and VSM model runs, respectively. We adopted the Text Retrieval Conference (TREC, https://trec.nist.gov/) style of evaluation. The most frequently used and still dominant measures for evaluating the performance of information retrieval systems are precision and recall. Precision is defined as the proportion of retrieved documents that are actually relevant, and recall is defined as the proportion of relevant documents that are actually retrieved. Both can be expressed as Precision = (\u2211_{i=1}^{n} d_i) / n and Recall = (\u2211_{i=1}^{n} d_i) / R,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "where, d_i is the relevance level of the i-th document in the ranked output for a given query, R is the number of relevant documents for the query, and n denotes the number of documents in the ranked output (Zhou and Yao, 2010) .",
"cite_spans": [
{
"start": 201,
"end": 221,
"text": "(Zhou and Yao, 2010)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Mean Average Precision (MAP) values are considered to give the best judgment in the presence of multiple queries. The evaluation metrics used in this work are MAP and Recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "MAP and recall are computed as the sum of the Average Precision (AP) of each query divided by the number of queries, and the sum of the average recall of each query divided by the number of queries, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Another measure used for evaluation is Discounted Cumulative Gain (DCG), which measures the usefulness, or gain, of a document based on its position in the result list. The gain is accumulated from the top of the result list to the bottom, with the gain of each result discounted at lower ranks. DCG, adopted from Moffat and Zobel (2008) , accumulated at a particular rank position p is given in Equation 4.",
"cite_spans": [
{
"start": 323,
"end": 346,
"text": "Moffat and Zobel (2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "DCG p = p \u2211 i=1 rel i log 2 (i + 1) = rel 1 + p \u2211 i=2 rel i log 2 (i + 1)",
"eq_num": "(4)"
}
],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Comparing search algorithms performance from one query to the next cannot be consistently achieved using DCG alone. So the cumulative gain at each position for a chosen value of p should be normalized across queries. This is done by sorting all relevant documents in the corpus by their relative relevance, producing the maximum possible DCG through position p, also called Ideal DCG (IDCG) through that position (Chapelle and Wu, 2010) as shown in Equation 5.",
"cite_spans": [
{
"start": 413,
"end": 436,
"text": "(Chapelle and Wu, 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "nDCG p = DCG p IDCG p",
"eq_num": "(5)"
}
],
"section": "Experiments and Results",
"sec_num": "5"
}
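,
{
"text": "As a worked example with hypothetical graded relevances (rel_1, rel_2, rel_3) = (3, 2, 3) at ranks 1-3: DCG_3 = 3/\\log_2(2) + 2/\\log_2(3) + 3/\\log_2(4) = 3 + 1.26 + 1.5 = 5.76; the ideal ordering (3, 3, 2) gives IDCG_3 = 3 + 1.89 + 1 = 5.89, so nDCG_3 = 5.76/5.89 \\approx 0.98.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
}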
],
"back_matter": [
{
"text": "where,rel i is the graded relevance of result at position i and |REL| is the list of documents ordered by relevance in the corpus up to position p.We also used Normalized NDCG to measure the usefulness of documents at first, fifth, and tenth position of ranked lists.Evaluation Results of all models are presented in Table 3 . In general, the proposed Unigram LM shows better performance than all others for both Amharic and Arabic language document collections. The unigram model makes a strong assumption that each word occurs independently, and consequently, the probability of a word sequence becomes the product of the probabilities of the individual words. Bigram model is better to identify the most relevant document at the top. As it is shown in Table 3 , NDCG@1 has a higher value, which means it has a high cumulative gain in the first position. The bigram model considers the local context, which is the probability of a new word depending on the probability of the previous word. This Bigram model feature allows us to retrieve the most relevant document at the top. Still, it decreases the recall highly because it misses a strong assumption that each word occurs independently. Probability and VSM models perform almost the same. The length of the query influenced the final retrieval to a great extent both in Unigram and Bigram LM. ",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 3",
"ref_id": null
},
{
"start": 755,
"end": 762,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
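{
"text": "Concretely, under the unigram LM a document D is scored by the query likelihood P(Q|D) = \\prod_{i=1}^{m} P(q_i|D), while the bigram LM scores P(Q|D) = P(q_1|D) \\prod_{i=2}^{m} P(q_i|q_{i-1}, D), where m is the query length. This is the standard query-likelihood formulation, sketched here for clarity rather than as a verbatim reproduction of our implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},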
{
"text": "CLIR systems are very demanding and are directly connected with language-specific issues. The retrieval of relevant documents intended for further analysis is the first important step, which significantly influences the retrieval performance. We prepared Test collections (document corpus, search queries, and relevance judgments) as bench-marked data-sets are not available. Experiments are carried out on four conventional IR models, namely Unigram and Bigram LM, Probabilistic model, and VSM. The result illustrates that LM based CLIR performs better compared to others. Furthermore, we discovered that the length of the query influenced the final retrieval to a great extent. Our future directions towards achieving better results include experimenting on large data-sets with different domains because the document collection in this work is taken only from Quran, and explore recently introduced neural IR approaches Mitra et al. (2017) .",
"cite_spans": [
{
"start": 923,
"end": 942,
"text": "Mitra et al. (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Character Mapping for Cross-Language",
"authors": [
{
"first": "Mazin",
"middle": [],
"last": "Al-Shuaili",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Carvalho",
"suffix": ""
}
],
"year": 2016,
"venue": "International Journal of Future Computer and Communication",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mazin Al-Shuaili and Marco Carvalho. 2016. Char- acter Mapping for Cross-Language. Interna- tional Journal of Future Computer and Com- munication, 5(1):18.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dictionary-based amharic-french information retrieval",
"authors": [
{
"first": "Atelach Alemu",
"middle": [],
"last": "Argaw",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Asker",
"suffix": ""
},
{
"first": "Rickard",
"middle": [],
"last": "Coster",
"suffix": ""
},
{
"first": "Jussi",
"middle": [],
"last": "Karlgren",
"suffix": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": ""
}
],
"year": 2005,
"venue": "Workshop of the Cross-Language Evaluation Forum for European Languages",
"volume": "",
"issue": "",
"pages": "83--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Atelach Alemu Argaw, Lars Asker, Rickard Coster, Jussi Karlgren, and Magnus Sahlgren. 2005. Dictionary-based amharic-french infor- mation retrieval. In Workshop of the Cross- Language Evaluation Forum for European Lan- guages, pages 83-92. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Gradient descent optimization of smoothed information retrieval metrics",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Chapelle",
"suffix": ""
},
{
"first": "Mingrui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2010,
"venue": "Information retrieval",
"volume": "13",
"issue": "3",
"pages": "216--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Chapelle and Mingrui Wu. 2010. Gra- dient descent optimization of smoothed infor- mation retrieval metrics. Information retrieval, 13(3):216-235.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Choosing the best dictionary for Cross-Lingual Word Sense Disambiguation. Knowledge-Based Systems",
"authors": [
{
"first": "Andres",
"middle": [],
"last": "Duque",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Martinez-Romo",
"suffix": ""
},
{
"first": "Lourdes",
"middle": [],
"last": "Araujo",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "81",
"issue": "",
"pages": "65--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andres Duque, Juan Martinez-Romo, and Lour- des Araujo. 2015. Choosing the best dictionary for Cross-Lingual Word Sense Disambiguation. Knowledge-Based Systems, 81:65-75.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using multiple query aspects to build test collections without human relevance judgments",
"authors": [
{
"first": "Miles",
"middle": [],
"last": "Efron",
"suffix": ""
}
],
"year": 2009,
"venue": "European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "276--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miles Efron. 2009. Using multiple query aspects to build test collections without human relevance judgments. In European Conference on Infor- mation Retrieval, pages 276-287. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cross-lingual query suggestion using query logs of different languages",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Hsiao-Wuen",
"middle": [],
"last": "Hon",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 30th annual international ACM SI-GIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "463--470",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Gao, Cheng Niu, Jian-Yun Nie, Ming Zhou, Jian Hu, Kam-Fai Wong, and Hsiao-Wuen Hon. 2007. Cross-lingual query suggestion using query logs of different languages. In Proceed- ings of the 30th annual international ACM SI- GIR conference on Research and development in information retrieval, pages 463-470. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Exploring the limits of language modeling",
"authors": [
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.02410"
]
},
"num": null,
"urls": [],
"raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Ex- ploring the limits of language modeling. arXiv preprint arXiv:1602.02410.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Character-Aware Neural Language Models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "2741--2749",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-Aware Neural Language Models. In AAAI, pages 2741- 2749.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.02810"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. Open- nmt: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep learning. nature",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "521",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. nature, 521(7553):436.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Neural Approach to Cross-Lingual Information Retrieval",
"authors": [
{
"first": "Qing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qing Liu. 2018. A Neural Approach to Cross- Lingual Information Retrieval. Ph.D. thesis, figshare.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Information retrieval system and machine translation: a review",
"authors": [
{
"first": "Mangala",
"middle": [],
"last": "Madankar",
"suffix": ""
},
{
"first": "MB",
"middle": [],
"last": "Chandak",
"suffix": ""
},
{
"first": "Nekita",
"middle": [],
"last": "Chavhan",
"suffix": ""
}
],
"year": 2016,
"venue": "Procedia Computer Science",
"volume": "78",
"issue": "",
"pages": "845--850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mangala Madankar, MB Chandak, and Nekita Chavhan. 2016. Information retrieval system and machine translation: a review. Procedia Computer Science, 78:845-850.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Crowdsourcing relevance assessments: The unexpected benefits of limiting the time to judge",
"authors": [
{
"first": "Eddy",
"middle": [],
"last": "Maddalena",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Basaldella",
"suffix": ""
},
{
"first": "Dario",
"middle": [
"De"
],
"last": "Nart",
"suffix": ""
},
{
"first": "Dante",
"middle": [],
"last": "Degl'innocenti",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Mizzaro",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Demartini",
"suffix": ""
}
],
"year": 2016,
"venue": "Fourth AAAI Conference on Human Computation and Crowdsourcing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eddy Maddalena, Marco Basaldella, Dario De Nart, Dante Degl'Innocenti, Stefano Mizzaro, and Gianluca Demartini. 2016. Crowdsourcing relevance assessments: The unexpected benefits of limiting the time to judge. In Fourth AAAI Conference on Human Computation and Crowdsourcing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "5 Translation Technology Trends to Watch Out",
"authors": [
{
"first": "",
"middle": [],
"last": "Memoq",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MemoQ. 2019. 5 Translation Technology Trends to Watch Out for in 2019.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural models for information retrieval",
"authors": [
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.01509"
]
},
"num": null,
"urls": [],
"raw_text": "Bhaskar Mitra and Nick Craswell. 2017. Neural models for information retrieval. arXiv preprint arXiv:1705.01509.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Rankbiased precision for measurement of retrieval effectiveness",
"authors": [
{
"first": "Alistair",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM Transactions on Information Systems (TOIS)",
"volume": "27",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair Moffat and Justin Zobel. 2008. Rank- biased precision for measurement of retrieval ef- fectiveness. ACM Transactions on Information Systems (TOIS), 27(1):2.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Amharic-English bilingual web search engine",
"authors": [
{
"first": "Mequannint",
"middle": [],
"last": "Munye",
"suffix": ""
},
{
"first": "Solomon",
"middle": [],
"last": "Atnafu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the International Conference on Management of Emergent Digital EcoSystems",
"volume": "",
"issue": "",
"pages": "32--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mequannint Munye and Solomon Atnafu. 2012. Amharic-English bilingual web search engine. In Proceedings of the International Conference on Management of Emergent Digital EcoSystems, pages 32-39. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Cross-language information retrieval",
"authors": [
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2010,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "3",
"issue": "1",
"pages": "1--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian-Yun Nie. 2010. Cross-language information retrieval. Synthesis Lectures on Human Lan- guage Technologies, 3(1):1-125.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Cross Lingual Information Retrieval",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cross Lingual Information Retrieval. Ph.D. the- sis, AAU.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Translation approaches in cross language information retrieval",
"authors": [
{
"first": "",
"middle": [],
"last": "Bnv Narasimha Raju",
"suffix": ""
},
{
"first": "Kvv",
"middle": [],
"last": "Msvs Bhadri Raju",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Satyanarayana",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Computing and Communication Technologies",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BNV Narasimha Raju, MSVS Bhadri Raju, and KVV Satyanarayana. 2014. Translation ap- proaches in cross language information retrieval. In International Conference on Computing and Communication Technologies, pages 1-4. IEEE.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Ranking retrieval systems using pseudo relevance judgments",
"authors": [
{
"first": "Sri Devi",
"middle": [],
"last": "Ravana",
"suffix": ""
},
{
"first": "Prabha",
"middle": [],
"last": "Rajagopal",
"suffix": ""
},
{
"first": "Vimala",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
}
],
"year": 2015,
"venue": "Aslib Journal of Information Management",
"volume": "67",
"issue": "6",
"pages": "700--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sri Devi Ravana, Prabha Rajagopal, and Vimala Balakrishnan. 2015. Ranking retrieval systems using pseudo relevance judgments. Aslib Jour- nal of Information Management, 67(6):700-714.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Creation of reliable relevance judgments in information retrieval systems evaluation experimentation through crowdsourcing: a review",
"authors": [
{
"first": "Parnia",
"middle": [],
"last": "Samimi",
"suffix": ""
},
{
"first": "Sri Devi",
"middle": [],
"last": "Ravana",
"suffix": ""
}
],
"year": 2014,
"venue": "The Scientific World Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parnia Samimi and Sri Devi Ravana. 2014. Cre- ation of reliable relevance judgments in infor- mation retrieval systems evaluation experimen- tation through crowdsourcing: a review. The Scientific World Journal, 2014.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Exploring neural translation models for cross-lingual text similarity",
"authors": [
{
"first": "Kazuhiro",
"middle": [],
"last": "Seki",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1591--1594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuhiro Seki. 2018. Exploring neural translation models for cross-lingual text similarity. In Pro- ceedings of the 27th ACM International Con- ference on Information and Knowledge Manage- ment, pages 1591-1594. ACM.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Dictionary based amharic-arabic cross language information retrieval",
"authors": [
{
"first": "HL",
"middle": [],
"last": "Shashirekha",
"suffix": ""
},
{
"first": "Ibrahim",
"middle": [],
"last": "Gashaw",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Advances in Computer Science and Information Technology",
"volume": "",
"issue": "",
"pages": "49--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "HL Shashirekha and Ibrahim Gashaw. 2016. Dic- tionary based amharic-arabic cross language in- formation retrieval. In International Conference on Advances in Computer Science and Informa- tion Technology, pages 49-60.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An extensive literature review on clir and mt activities in india",
"authors": [
{
"first": "Kumar",
"middle": [],
"last": "Sourabh",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal of Scientific & Engineering Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar Sourabh. 2013. An extensive literature re- view on clir and mt activities in india. Inter- national Journal of Scientific & Engineering Re- search.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Types of language models",
"authors": [
{
"first": "",
"middle": [],
"last": "Eagles Swlg",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "EAGLES SWLG. 1997. Types of language models.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Amharic-English Cross-lingual Information Retrieval: A Corpus Based Approach",
"authors": [
{
"first": "Aynalem",
"middle": [],
"last": "Tesfaye",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Scannell",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aynalem Tesfaye and Kevin Scannell. 2012. Amharic-English Cross-lingual Information Re- trieval: A Corpus Based Approach. Haramaya: Haramaya University.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Phrasal Translation for Amharic English Cross Language Information Retrieval (Clir)",
"authors": [
{
"first": "Fasika",
"middle": [],
"last": "Tesfaye",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fasika Tesfaye. 2010. Phrasal Translation for Amharic English Cross Language Information Retrieval (Clir). Ph.D. thesis, AAU.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Parallel Data, Tools and Interfaces in OPUS",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Lrec",
"volume": "2012",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel Data, Tools and Interfaces in OPUS. In Lrec, volume 2012, pages 2214-2218.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Development of Cross-Language Information Retrieval for Resource-Scarce African Languages",
"authors": [
{
"first": "",
"middle": [],
"last": "Kula Kekeba Tune",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kula Kekeba Tune. 2015. Development of Cross- Language Information Retrieval for Resource- Scarce African Languages. Ph.D. thesis, In- ternational Institute of Information Technology, Hyderabad.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Combining Statistical Translation Techniques for Cross-Language Information Retrieval",
"authors": [
{
"first": "Ferhan",
"middle": [],
"last": "T\u00fcre",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"J"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"W"
],
"last": "Oard",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "2685--2702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ferhan T\u00fcre, Jimmy J Lin, and Douglas W Oard. 2012. Combining Statistical Translation Techniques for Cross-Language Information Re- trieval. In COLING, pages 2685-2702.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A study of smoothing methods for language models applied to ad hoc information retrieval",
"authors": [
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM SIGIR Forum",
"volume": "51",
"issue": "",
"pages": "268--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chengxiang Zhai and John Lafferty. 2017. A study of smoothing methods for language models ap- plied to ad hoc information retrieval. In ACM SIGIR Forum, volume 51, pages 268-276. ACM.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Evaluating information retrieval system performance based on user preference",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yiyu",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Intelligent Information Systems",
"volume": "34",
"issue": "3",
"pages": "227--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Zhou and Yiyu Yao. 2010. Evaluating infor- mation retrieval system performance based on user preference. Journal of Intelligent Informa- tion Systems, 34(3):227-248.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Amahric-Arabic CLIR Architecture",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Four combinations of relevant judgment identification (TREC) 1 test collection format where each query-document pair has a 5-level relevance scale, 0 to 4, with 4 meaning document d is most relevant to query Q and 0 meaning d is not relevant to Q.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Sample Amharic Text from Tanzile (Chapter 1)</td><td>Preprocessed Text</td></tr><tr><td/><td>\u1260\u12a0\u120b\u1205 \u1229\u1285\u1229\u1205 \u12a0\u12db\u129d</td></tr><tr><td/><td>\u121d\u1235\u130b\u1293 \u1208\u12a0\u120b\u1205 \u12e8\u12d3\u1208\u121b\u1275 \u130c\u1273</td></tr><tr><td/><td>\u122d\u1285\u1229\u1205 \u12a0\u12db\u129d</td></tr><tr><td/><td>\u12e8\u134d\u122d\u12f1</td></tr><tr><td/><td>\u12a5\u1295\u130d\u1308\u12db\u1208\u1295 \u12a5\u122d\u12f3\u1273\u1295 \u12a5\u1295\u1208\u121d\u1293\u1208\u1295</td></tr><tr><td/><td>\u1240\u1325\u1270\u129b\u12cd\u1295 \u1218\u1295\u1308\u12f5 \u121d\u122b\u1295</td></tr><tr><td/><td>\u1260\u130e \u12e8\u12cb\u120d\u12ad\u120b\u1278\u12cd\u1295 \u12eb\u120d\u1270\u1246\u1323\u1205\u1263\u1278\u12cd\u1295\u1293 \u12eb\u120d\u1270\u1233\u1233\u1271\u1275\u1295\u121d</td></tr><tr><td/><td>\u1230\u12ce\u127d \u1218\u1295\u1308\u12f5 \u121d\u122b\u1295</td></tr></table>",
"html": null,
"text": "Sample Amharic Text Preprocesing"
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">: Description of Test Collection for</td></tr><tr><td colspan=\"2\">Amharic-Arabic CLIR Evaluation</td></tr><tr><td>#Query</td><td>#Documents (228)</td></tr><tr><td>75 Amharic Queries</td><td>114 separated Chapters of Quran in Arabic language 114 separated Chapters of</td></tr><tr><td/><td>Quran in Amharic language</td></tr><tr><td colspan=\"2\">Top 10 documents judged as relevant for each</td></tr><tr><td colspan=\"2\">query is computed as;</td></tr></table>",
"html": null,
"text": ""
}
}
}
}