{
"paper_id": "Y17-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:33:27.725825Z"
},
"title": "Semantic Similarity Analysis for Paraphrase Identification in Arabic Texts",
"authors": [
{
"first": "Adnen",
"middle": [],
"last": "Mahmoud",
"suffix": "",
"affiliation": {
"laboratory": "LATICE Laboratory Research",
"institution": "University of Monastir",
"location": {
"country": "Tunisia"
}
},
"email": "mahmoud.adnen@gmail.com"
},
{
"first": "Mounir",
"middle": [],
"last": "Zrigui",
"suffix": "",
"affiliation": {
"laboratory": "LATICE Laboratory Research",
"institution": "University of Monastir",
"location": {
"country": "Tunisia"
}
},
"email": "mounir.zrigui@fsm.rnu.tn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Arabic plagiarism detection is a difficult task because of the great richness of Arabic language characteristics of which it is a productive, derivational and inflectional language, on the one hand, and a word can has more than one lexical category in different contexts allows us to have different meanings of the word what changes the meaning of the sentence, on the other hand. In this context, Arabic paraphrase identification allows quantifying how much a suspect Arabic text and source Arabic text are similar based on their contexts. In this paper, we proposed a semantic similarity approach for paraphrase identification in Arabic texts by combining different techniques of Natural Language Processing NLP, such as: Term Frequency-Inverse Document Frequency TF-IDF technique to improve the identification of words that are highly descriptive in each sentence; and distributed word vector representations using word2vec algorithm to reduce computational complexity and to optimize the probability of predicting words in the context given the current center word, which they would be subsequently used to generate a sentence vector representations and after applying a similarity measurement operation based on different metrics of comparison, such as: Cosine Similarity and Euclidean Distance. Finally, our proposed approach was evaluated on the Open Source Arabic Corpus OSAC and obtained a promising rate.",
"pdf_parse": {
"paper_id": "Y17-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "Arabic plagiarism detection is a difficult task because of the great richness of Arabic language characteristics of which it is a productive, derivational and inflectional language, on the one hand, and a word can has more than one lexical category in different contexts allows us to have different meanings of the word what changes the meaning of the sentence, on the other hand. In this context, Arabic paraphrase identification allows quantifying how much a suspect Arabic text and source Arabic text are similar based on their contexts. In this paper, we proposed a semantic similarity approach for paraphrase identification in Arabic texts by combining different techniques of Natural Language Processing NLP, such as: Term Frequency-Inverse Document Frequency TF-IDF technique to improve the identification of words that are highly descriptive in each sentence; and distributed word vector representations using word2vec algorithm to reduce computational complexity and to optimize the probability of predicting words in the context given the current center word, which they would be subsequently used to generate a sentence vector representations and after applying a similarity measurement operation based on different metrics of comparison, such as: Cosine Similarity and Euclidean Distance. Finally, our proposed approach was evaluated on the Open Source Arabic Corpus OSAC and obtained a promising rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Plagiarism is defined as the unauthorized use or closer imitation for the language and thought of another author and the representation of them that one's own original work based on a set of rules, such as for example: inadequate referencing, direct copy from one or more sources of the text by displacement of words in a sentence, paraphrase and rewrite texts by presenting other's ideas with different words, and translation by expressing an idea in one language into another one (Abderahman et al, 2016) (Gharavi et al, 2016) .",
"cite_spans": [
{
"start": 482,
"end": 506,
"text": "(Abderahman et al, 2016)",
"ref_id": "BIBREF0"
},
{
"start": 507,
"end": 528,
"text": "(Gharavi et al, 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the field of paraphrase detection in Arabic texts is a difficult task because of the great variability of morphological and typographical features of Arabic language where a plagiarized text can include more changes: in the vocabulary, or syntactic, and semantic representation of the text compared with other languages such as Latin or English and we also find that a word can have more than one lexical category in different contexts what changes the meaning of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nowadays, Arabic paraphrase identification based on semantic similarity analysis between source text and suspicious text is a difficult task in Natural Language Processing NLP of which we examine the similarity degree of a given pair of texts, in varying in different levels such as word, sentence or paragraph (Vo et al., 2015) . Thus, many distributional semantic approaches based on the resemblance determination of their signification and their semantic contain (Negre, 2013) have drawn a considerable amount of attention by research community. In this context, our work consists in detecting semantic relatedness between the suspect text and the source text by combining different Natural Language Processing NLP methods to detect paraphrase in Arabic texts by generating word vector representations using word2vec algorithm which they would be combined subsequently to generate sentence vector representations and thereafter applying a similarity measurement operation. In this paper, we start by present a state of the art in the field of Arabic plagiarism detection in section 2 describing the complexities of Arabic language, on the one hand, and the works that have been proposed in this field in the literature, on the other hand. Thereafter, we detail different phases that make up our proposed method in section 3. Finally, we present the evaluation in section 4 as well as the results obtained and we end by a conclusion and some future works to realize in the field of plagiarism detection especially in Arabic language in section 5.",
"cite_spans": [
{
"start": 311,
"end": 328,
"text": "(Vo et al., 2015)",
"ref_id": null
},
{
"start": 466,
"end": 479,
"text": "(Negre, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Arabic language is very rich of morphological and typographical features (Meddeb et al, 2016 ) (Zouaghi et al, 2008 which make Arabic semantic analysis a very difficult task for several reasons among which we can mention:",
"cite_spans": [
{
"start": 73,
"end": 92,
"text": "(Meddeb et al, 2016",
"ref_id": null
},
{
"start": 93,
"end": 115,
"text": ") (Zouaghi et al, 2008",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexities of Arabic Language",
"sec_num": "2.1"
},
{
"text": "\u2022 Arabic language is very rich of morphological features. Thus, Arabic script is cursive whose most letters are tied and written from right to left whose letters change shape depending on whether they appear at the beginning, middle or end of the word, on the one hand, and it consists of: a stem composed by a consonant root and a pattern morpheme; more affixes include time markers, sex and/or number; and enclitics include some propositions, conjunctions, determinants and pronouns. (Meddeb et al, 2016) (Boudhief et al, 2014) \u2022 A word can have more than one lexical category such as: noun, verb, adjective, subject, etc. and can have more than one meaning depending on the context in which it is used (Zrigui et al, 2016) where the identification of some words is very difficult because of the non-capitalization of proper noun, acronyms and abbreviations (Lihioui et al, 2014) as shown in Table 1 : Influence of syntactic category on the disambiguation of the word \u202b\"\u0630\u0647\u0628\"\u202c (dhhb)",
"cite_spans": [
{
"start": 507,
"end": 529,
"text": "(Boudhief et al, 2014)",
"ref_id": "BIBREF4"
},
{
"start": 705,
"end": 725,
"text": "(Zrigui et al, 2016)",
"ref_id": null
},
{
"start": 860,
"end": 881,
"text": "(Lihioui et al, 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 894,
"end": 901,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Complexities of Arabic Language",
"sec_num": "2.1"
},
{
"text": "\u2022 Inflected language whose lexical units vary in number and in bending such as the number of names or verb tense according to the grammatical relationships which they have with other lexical units. (Boudhief et al, 2014) \u2022 The absence of diacritic marks makes Arabic language more ambiguous (Meddeb et al, 2016 ) (Zrigui et al, 2016 . Therefore, only the diacritics, the occurrence context, and in some cases the grammatical category of the ambiguous word can disambiguate its sense which complicates the automatic processing of Arabic language and especially in its semantic analysis of which there is not a consistent theoretical formalism capable of taking into account all the phenomena encountered in this language ( \u2022 Synonyms are widespread in which there are many words are considered synonyms which require the use of tools of morphological analysis to find synonyms of a word. (Zrigui et al, 2016) \u2022 The presence of coordination conjunction with a space-free link makes it difficult to distinguish between \u202b'\u0648'\u202c as a letter of a word and the word \u202b'\u0648'\u202c having the role of conjunction of coordination, on the one hand, and plays an important role in the interpretation of a statement by identifying its proposals, on the other hand. (Lihioui, 2014 ) (Zouaghi et al, 2007 as illustrated in the following example:",
"cite_spans": [
{
"start": 198,
"end": 220,
"text": "(Boudhief et al, 2014)",
"ref_id": "BIBREF4"
},
{
"start": 291,
"end": 310,
"text": "(Meddeb et al, 2016",
"ref_id": null
},
{
"start": 311,
"end": 332,
"text": ") (Zrigui et al, 2016",
"ref_id": null
},
{
"start": 720,
"end": 721,
"text": "(",
"ref_id": null
},
{
"start": 887,
"end": 907,
"text": "(Zrigui et al, 2016)",
"ref_id": null
},
{
"start": 1242,
"end": 1256,
"text": "(Lihioui, 2014",
"ref_id": "BIBREF10"
},
{
"start": 1257,
"end": 1279,
"text": ") (Zouaghi et al, 2007",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexities of Arabic Language",
"sec_num": "2.1"
},
{
"text": "\u202b\u0647\u0630\u0647\u202c \u202b\u0623\u0628\u0646\u0627\u0621\u202c \u202b\u0645\u0646\u202c \u202b\u0627\u0644\u062f\u0624\u0648\u0628\u202c \u202b\u0648\u0627\u0644\u0639\u0645\u0644\u202c \u202b\u0628\u0627\u0644\u062d\u0643\u0645\u0629\u202c \u202b\u0627\u0625\u0644\u0646\u062c\u0627\u0632\u202c \u202b\u0647\u0630\u0627\u202c \u202b\u0625\u0643\u0645\u0627\u0644\u202c \u202b\u062a\u0645\u202c \u202b\u0644\u0642\u062f\u202c \u202b\u0627\u0644\u0645\u062f\u064a\u0646\u0629\u202c \"This accomplishment has been completed with the wisdom and hard work of the people of this city.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexities of Arabic Language",
"sec_num": "2.1"
},
{
"text": "To conclude, Arabic language is very difficult to treat automatically because of its properties and its morphological, syntactic and semantic specificities that we quoted above and which make also the field of Arabic paraphrase detection difficult because the change of the word order or its meaning in the suspect sentence causes an ambiguity during semantic analysis between the source text and suspect text whose a word can have more than one lexical category in different contexts which allows us to have different meanings of a word what changes the meaning of a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexities of Arabic Language",
"sec_num": "2.1"
},
{
"text": "This section provides an overview on related works that deal with Arabic plagiarism detection especially in paraphrase identification field based on semantic analysis to determine the relatedness between the suspect and source Arabic text documents. Thus, several similarity detection approaches between documents have been proposed in the literature of which there are three types of methods for computing relatedness according to the type of resources that have been used, we distinguish:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2.2"
},
{
"text": "\u2022 Knowledge-based methods relies on some form of ontology using WordNet which is a well-known knowledge source to compute semantic similarity between words as in (Shenoy et al, 2012) that proposed a semantic plagiarism detection system using ontology mapping where ontologies are a computational model of some domain of the world by describing semantics of terms used in the domain. \u2022 Web-based approach gathers cooccurrence statistics based on the search engine results and used that to compute word relatedness like Point-wise Mutual Information PMI (Niraula et al, 2015). Thus, (Shuka et al, 2016) showed that the use of a web based cross language semantic plagiarism detection approach helps authors and written to secure their files and to make their files sale.",
"cite_spans": [
{
"start": 581,
"end": 600,
"text": "(Shuka et al, 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2.2"
},
{
"text": "\u2022 Corpus-based measurements compute word similarity and relatedness based on word vector representations obtained from a given corpus. Among the most popular methods for inferring word vector representations to select more discriminative features, we can cite: (Hussein, 2015) showed that Arabic document similarity analysis using N-grams and Singular Value Decomposition SVD can generalize the eigen decomposition of a positive semi definite normal matrix 1 . Also, (Hafeez and Patil, 2017) showed that the author analyzed summary of Chinese expression habits using an adaptive weight of word position algorithm based on TF-IDF to dynamically determining the weight of a word position according to the word position. Thus, it introduced the Vector Space Model VSM and designed comparative experiment under the scene of Chinese document clustering of which TF-IDF-AP algorithm improved a promising results. Moreover, Latent Semantic Analysis LSA algorithm allows to measure similarity between texts which represents the meaning not only of individual words but also of whole passages such as sentences, paragraphs, and short essay (Bihi, 2017) , as in (Kenter and Rijke, 2015 ) that proposed a method for inducing polarity to the documentterm matrix before applying LSA which was novel and shown to effectively preserve and generalize the synonymous / antonymous information in the projected space. Also, Latent Dirichlet Allocation LDA which is a probabilistic model can capture polysemy where each word had multiple meanings and used to reduce dimensionality to themes that are useful building blocks for representing a gist of what a collection contains, statically or over time as in (Yih et al, 2012) in order to support navigability between similar documents via dynamically hyperlinks using an LDA based on cosine similarity measurement.",
"cite_spans": [
{
"start": 261,
"end": 276,
"text": "(Hussein, 2015)",
"ref_id": null
},
{
"start": 467,
"end": 491,
"text": "(Hafeez and Patil, 2017)",
"ref_id": "BIBREF6"
},
{
"start": 1131,
"end": 1143,
"text": "(Bihi, 2017)",
"ref_id": "BIBREF3"
},
{
"start": 1152,
"end": 1175,
"text": "(Kenter and Rijke, 2015",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2.2"
},
{
"text": "But nowadays, distributed word vector representation is a branch of machine learning where each word is described by the surrounding context and thereafter a vector is generated automatically containing semantic and syntactic information about the word (Gharavi et al, 2016) of which the learned vectors explicitly encode many linguistic regularities and patterns (Towne et al, 2016). Indeed, distributed word representations in a vector refers as word embeddings where each vector can be located and visualized in multi dimensional space and helps learning algorithms to achieve better performance in Natural Language Processing NLP by grouping similar words and has dwarfed older methods for achieving distributed representations, like: Latent Semantic Analysis LSA (Towne et al, 2016) (Mikolov et al, 2013) . In this context, several methods have been proposed, despite that there is little works have been proposed in the field of Arabic paraphrase detection, such as:",
"cite_spans": [
{
"start": 253,
"end": 274,
"text": "(Gharavi et al, 2016)",
"ref_id": null
},
{
"start": 788,
"end": 809,
"text": "(Mikolov et al, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2.2"
},
{
"text": "(Mikolov et al, 2013) showed that the inclusion of Twitter-based word embeddings using word2vec may yield better tagged sentences when it used to train systems designed for downstream NLP tasks. However, a generic and flexible method for semantic matching of short texts as in (Samuel, 2016) which leveraged word embeddings of different dimensionality obtained by different algorithms (GloVe and word2vec) and from different sources where the purpose was to go from word-level to short-text level semantics by combining insights from methods based on external sources of semantic knowledge with word embeddings. On the other hand, (Prazak et al, 2012) attempt to estimate the similarity score between chunks based upon estimating semantic similarity of individual words and compiling them into one number for a given chunk pair. After, it experimented with word2vec and GloVe to estimate similarity of words and compiled all word similarities in one number that reflected semantics of whole chunks via lexical semantic vectors. Moreover, (Niraula et al, 2015) showed that words relatedness and similarity can be measured by combining word representations, like: LSA, LDA, word2vec and GloVe to complement the coverage of semantic aspects of a word and thus better represent the word than individual representations.",
"cite_spans": [
{
"start": 1038,
"end": 1059,
"text": "(Niraula et al, 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2.2"
},
{
"text": "Paraphrase detection between Arabic documents becomes a very important task in the recent years because of the great variability of Arabic language specificities, on the one hand, and the availability of enormous volume of information over the internet. In this context, we propose a method for Arabic paraphrase detection based on the identification of similarity between source document and suspect document using Natural Language Processing NLP techniques. Thus, our proposed approach allows extracting their semantic similarity to detect paraphrase which can be created by direct copy of sentences, replacement of words with similar ones, and changing the order of words or reconstructing the sentences (Sindhu and Idicula, 2015). Indeed, our proposed approach is composed by three phases, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "1-We begin with a preprocessing phase to extract the relevant information from texts. 2-After, we apply a features extraction phase to extract more discriminant features. 3-Thereafter, a paraphrase detection phase is used to identify the rate of similarity between source document and plagiarized document. Here is the general architecture of our approach for Arabic paraphrase identification as shown in the following figure 1: Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 429,
"end": 437,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "General architecture of Arabic paraphrase detection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "Our model begins with a preprocessing phase to facilitate further processing by cleaning the Arabic text from noisy information, on the one hand, and to reduce the complexity of Arabic paraphrase identification, on the other hand. So, we proceed by the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing Phase",
"sec_num": "3.1"
},
{
"text": "1. We begin by segmenting the source text and suspect text into sentences by identifying their boundaries in order to extract the meaningful information. Among the boundaries used in the literature, we can mention: \",\" \";\" \".\" \":\" \"!\" \"?\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing Phase",
"sec_num": "3.1"
},
{
"text": "2. After, we try to extract tokens from running text where one Arabic word end and another Arabic word begin by detecting the space between them. (Aliwy, 2012) ",
"cite_spans": [
{
"start": 146,
"end": 159,
"text": "(Aliwy, 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing Phase",
"sec_num": "3.1"
},
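The two preprocessing steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the helper names and the exact boundary set (extended here with the Arabic comma, semicolon and question mark, an assumption) are ours.

```python
import re

# Sentence boundary marks listed in the text, plus their Arabic forms
# (\u060C \u061B \u061F) -- the Arabic variants are an assumption.
BOUNDARIES = r"[,;.:!?\u060C\u061B\u061F]"

def split_sentences(text: str) -> list[str]:
    # Step 1: split on any boundary character and drop empty fragments.
    return [s.strip() for s in re.split(BOUNDARIES, text) if s.strip()]

def tokenize(sentence: str) -> list[str]:
    # Step 2: Arabic words are separated by spaces, so whitespace
    # splitting is enough for this sketch.
    return sentence.split()

doc = "الجملة الأولى. الجملة الثانية؟"
sentences = split_sentences(doc)
tokens = [tokenize(s) for s in sentences]
```

A production system would also need to handle abbreviations and numbers containing "." so they are not treated as sentence boundaries.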
{
"text": "Term Frequency-Inverse Document Frequency \"TF-IDF\": TF-IDF is used as a weighting factor in information retrieval and text mining of which it allows the construction of a vector space where each vector represent how a word is important to a document in a collection by the combination between Term Frequency TF (t,d) and Inverse Document Frequency IDF(w). (Shuka et al, 2016) More formally, given the frequency of the occurrence of term in document d, in order to control the fact that some words are common than others by proceeding as follows (Abderahman, 2016) :",
"cite_spans": [
{
"start": 356,
"end": 375,
"text": "(Shuka et al, 2016)",
"ref_id": null
},
{
"start": 545,
"end": 563,
"text": "(Abderahman, 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "\u2022 Term Frequency TF: is defined as the number of times a term occurs in a document as shown in equation 1. Moreover, a term can appear much more times in long documents than shorter ones since every document is different in length 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T O d t TF \uf03d ) , (",
"eq_num": "(1)"
}
],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "Where: O represent the number of times that a term t appears in a document, and T is the number of terms in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "\u2022 Inverse Document Frequency IDF: is a statistical weight used for measuring the importance of a term in a text document collection. Also, IDF feature is 2 http://www.tfidf.com/ incorporated which reduces the weight of terms that occur very frequently in the document set and increases the weight of terms that occur rarely as shown in the following equation 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N D \uf03d d)log IDF(t,",
"eq_num": "(2)"
}
],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "Where: |D| is the total number of documents and N is the number of documents with term t in it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "\u2022 Term Frequency-Inverse Document Frequency TF-IDF: is calculated for each term in the document by combining TF and IDF as shown in the following equation 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "d) IDF(t, * d) TF(t, f) d, IDF(t, \uf03d \uf02d TF (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
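Equations 1-3 can be sketched over a toy corpus as follows. This is a minimal illustration under the definitions above; the function names and the toy documents are ours, not the paper's.

```python
import math

def tf(term: str, doc: list[str]) -> float:
    # Equation 1: occurrences of the term divided by the document length.
    return doc.count(term) / len(doc)

def idf(term: str, corpus: list[list[str]]) -> float:
    # Equation 2: log of (total documents / documents containing the term).
    n = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / n)

def tf_idf(term: str, doc: list[str], corpus: list[list[str]]) -> float:
    # Equation 3: the product of the two weights.
    return tf(term, doc) * idf(term, corpus)

corpus = [["the", "cat", "sat"], ["the", "dog", "ran"], ["a", "cat", "ran"]]
```

As intended, a term appearing in fewer documents ("sat") receives a higher TF-IDF weight than one appearing in many ("the").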
{
"text": "Word Embeddings \"word2vec\": Word embeddings are based on a probabilistic feed forward neural network language model to learn a space of continuous word representations in which similar words are expected to be close to each other. Thus, word embeddings allows representing words with low dimensional and dense real-value vectors which capture useful semantic and syntactic features of words (Law and al, 2017) , on the one hand, and reducing computational complexity, on the other hand. So, we use word2vec algorithm of which it consists two architectures for learning word embeddings that are less computationally expensive than previous models, such as: Continuous Bag Of Words CBOW and Skip-gram models. Indeed, we use the Skip-Gram model in our work because it showed better performance in Natural Language Processing NLP especially in semantic analysis.",
"cite_spans": [
{
"start": 391,
"end": 409,
"text": "(Law and al, 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "Generally, each input word w is associated with a k-dimensional vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "\uf0c2 \uf0ce k w v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "called the input embedding and each context word wO is associated with a k-dimensional vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "\uf0c2 \uf0ce k v wO '",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "called the output embedding, the probability of observing wO in the context of w is modeled with a softmax function, as follows in equation 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "P(wO | w) = exp(v'_wO^T v_w) / sum_{i=1}^{W} exp(v'_i^T v_w) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "Also, given a sequence of training words { wn w ,..., 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "} and a larger size of the training context c in more training examples to lead a higher accuracy at the expense of the training time, the objective of skip gram model is to maximize the average log probability for this sequence, as follows in equation 5 (Mikolov and al, 2013) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ") 1 0 , ( log 1 wt n t j c j c w j t p n \uf0e5 \uf03d \uf0e5 \uf0b9 \uf0a3 \uf0a3 \uf02d \uf02b",
"eq_num": "(5)"
}
],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
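The skip-gram softmax of equation 4 can be sketched as below, assuming toy 2-dimensional input embeddings `v` and output embeddings `v_out` (both the names and the values are illustrative, not trained).

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax_prob(center: str, context: str, v: dict, v_out: dict) -> float:
    # Equation 4:
    # P(context | center) = exp(v'_context . v_center) / sum_i exp(v'_i . v_center)
    score = math.exp(dot(v_out[context], v[center]))
    total = sum(math.exp(dot(v_out[w], v[center])) for w in v_out)
    return score / total

# Toy embeddings for two words (illustrative values).
v = {"cat": [0.5, 0.1], "sat": [0.2, 0.4]}
v_out = {"cat": [0.3, 0.2], "sat": [0.1, 0.6]}
p = softmax_prob("cat", "sat", v, v_out)
```

In practice the denominator sums over the whole vocabulary, which is why word2vec replaces the full softmax with cheaper approximations such as negative sampling.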
{
"text": "Distributed Word Vector Representations \"word2vec and TF-IDF\": In our work, we try to facilitate the identification of similarity between Arabic texts using word embedding representations and TF-IDF techniques, as follows: On the one hand, we capture semantic properties of words by exploiting vectors as word representation in a multi dimensional space using word2vec algorithm based on skip gram model to optimize the probability of predicting words in the context given the current center word and overcoming the problem of the data sparsity problem. On the other hand, we use TF-IDF technique in order to improve the identification of words that are highly descriptive in each sentence of which TF-IDF value is used as a weighting factor to increase proportionally the number of times a word appears in the document but it is counteracting by the frequency of the word in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "More formally, given a sentence S composed by n words, our distributed sentence representations of suspect text and source text is composed by the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "1. We begin by determining the Term Frequency -Inverse Document Frequency \uf0c2 \uf0ce IDF -TF of each word in the sentence. The output of this step is a set of values representing the importance of each word in the text, as follows in equation 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "IDF - TF wn ,..., IDF - TF w1 IDF - TF w n 1,..., \uf03d",
"eq_num": "(6)"
}
],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "2. Then, we learn word embeddings using word2vec algorithm where the i-th word in the sentence S is mapped into a single vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "\uf0c2 \uf0ce k wi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "represented by a column wi in a matrix M of size n*k. At the end of this operation, we have a sequence of n word vector representations of the sentence S, as follows in equation 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "wn w w w n ,..., 2 , 1 : 1 \uf03d",
"eq_num": "(7)"
}
],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "3. The word-level skip-gram model predicts the context of words given the current word vector but our goal is the prediction of the context of sentences given the vector representation of the current sentence vector. (Peng and Gildea, 2016) So, we proceed as follows: we calculate an average of all word vector representations w i extracted from W for each sentence Si composed by n words, as follows in equation 8:",
"cite_spans": [
{
"start": 217,
"end": 240,
"text": "(Peng and Gildea, 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "n 1 i IDF wi * i Si \uf0e5 \uf03d \uf02d \uf03d n TF w (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
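The sentence-vector construction of equation 8 can be sketched as follows; the word vectors and TF-IDF weights below are invented for illustration and stand in for trained word2vec embeddings.

```python
# Toy word vectors (k = 4) standing in for trained word2vec embeddings;
# the vectors and TF-IDF weights are hypothetical values for this example.
word_vecs = {
    "economy": [0.9, 0.1, 0.0, 0.2],
    "grows":   [0.7, 0.3, 0.1, 0.0],
    "fast":    [0.2, 0.8, 0.5, 0.1],
}
tfidf = {"economy": 1.2, "grows": 0.8, "fast": 0.5}

def sentence_vector(words, word_vecs, tfidf):
    """Eq. 8: a sentence vector as the average of TF-IDF-weighted word vectors."""
    known = [w for w in words if w in word_vecs]
    k = len(next(iter(word_vecs.values())))
    vec = [0.0] * k
    for w in known:
        for j in range(k):
            vec[j] += tfidf[w] * word_vecs[w][j]
    return [v / len(known) for v in vec]

s = sentence_vector(["economy", "grows", "fast"], word_vecs, tfidf)
```

Stacking one such vector per sentence column-wise yields the matrix V of equation 9.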
{
"text": "At the end of this sentence vector representation step, each sentence Si is mapped to a single vector represented by a column in a matrix V, which is used in subsequent processing, as given in equation 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V_{1:m} = S_1, ..., S_m",
"eq_num": "(9)"
}
],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "Where: m is the number of sentences in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Extraction Phase",
"sec_num": "3.2"
},
{
"text": "Our goal in this study is to quantify the similarity between Arabic texts in order to decide whether one paraphrases the other, by combining different comparison metrics. Given the sentence vector representations of a suspect text S1 and a source text S2 of dimension k, we compare each sentence of the suspect document with every sentence of the source document, as follows: 1. We identify the semantic relation between the suspect sentence S1 and the source sentence S2 using Cosine Similarity, which scores the similarity between them and is measured over their word vectors (Alaa et al, 2016), as given in equation 10:",
"cite_spans": [
{
"start": 729,
"end": 747,
"text": "(Alaa et al, 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Texts Similarity Measurement",
"sec_num": "3.3"
},
{
"text": "Cos(S1, S2) = sum_{i=1}^{k} (S1_i * S2_i) / (sqrt(sum_{i=1}^{k} S1_i^2) * sqrt(sum_{i=1}^{k} S2_i^2)) (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Texts Similarity Measurement",
"sec_num": "3.3"
},
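Equation 10 is the standard cosine similarity between two sentence vectors of dimension k; a minimal sketch:

```python
import math

def cosine(s1, s2):
    """Eq. 10: cosine similarity between two sentence vectors of dimension k."""
    dot = sum(a * b for a, b in zip(s1, s2))
    norm1 = math.sqrt(sum(a * a for a in s1))
    norm2 = math.sqrt(sum(b * b for b in s2))
    return dot / (norm1 * norm2)
```

The score is 1 for vectors pointing in the same direction and 0 for orthogonal ones, independent of vector magnitude.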
{
"text": "2. Then, we use the Euclidean distance as another similarity measure, which computes the similarity between two documents as the distance between their vector representations reduced to a single point (Negre, 2013), as given in equation 11:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Texts Similarity Measurement",
"sec_num": "3.3"
},
{
"text": "Euc(x, y) = sqrt(sum_i (x_i - y_i)^2) (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Texts Similarity Measurement",
"sec_num": "3.3"
},
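Equation 11 can be sketched directly:

```python
import math

def euclidean(x, y):
    """Eq. 11: Euclidean distance between two sentence vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
```

Unlike cosine similarity, the distance is 0 for identical vectors and grows as the vectors diverge.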
{
"text": "So, our proposed similarity method is based on the semantic similarity of sentences in Arabic texts to determine the degree of semantic relatedness between them by combining two methods, such as: Cosine Similarity Cos and Euclidean Distance Euc, as follows in equation 12:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Texts Similarity Measurement",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "CombSim(S1, S2) = (Cos(S1, S2) + Euc(S1, S2)) / 2",
"eq_num": "(12)"
}
],
"section": "Arabic Texts Similarity Measurement",
"sec_num": "3.3"
},
{
"text": "If the result obtained from (12) exceeds a threshold \u03b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Texts Similarity Measurement",
"sec_num": "3.3"
},
{
"text": ", then we conclude that there is plagiarism (paraphrase) between the source document and the suspect document. Otherwise, the pair is considered not plagiarized (not a paraphrase).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Texts Similarity Measurement",
"sec_num": "3.3"
},
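The combined score of equation 12 and the threshold decision can be sketched as follows. One assumption is ours: since cosine grows with similarity while Euclidean distance shrinks, this sketch maps the distance to a similarity via 1 / (1 + d) before averaging, whereas the paper states the two scores are averaged directly.

```python
import math

def cosine(s1, s2):
    dot = sum(a * b for a, b in zip(s1, s2))
    return dot / (math.sqrt(sum(a * a for a in s1)) *
                  math.sqrt(sum(b * b for b in s2)))

def euclidean_similarity(s1, s2):
    # Assumption on our part: map the raw distance into (0, 1] with 1 / (1 + d)
    # so that both terms of Eq. 12 grow with similarity.
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(s1, s2)))
    return 1.0 / (1.0 + d)

def comb_sim(s1, s2):
    """Eq. 12: average of the two similarity scores."""
    return (cosine(s1, s2) + euclidean_similarity(s1, s2)) / 2

def is_paraphrase(s1, s2, alpha=0.3):
    """Threshold decision; alpha = 0.3 as tuned on the training corpus."""
    return comb_sim(s1, s2) >= alpha
```

Identical sentence vectors score 1.0, comfortably above the tuned threshold of 0.3, while dissimilar vectors fall below it.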
{
"text": "At the end of this step, we obtain a vector containing the similarity scores computed for the suspect text document, up to the size of the source document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Texts Similarity Measurement",
"sec_num": "3.3"
},
{
"text": "The Open Source Arabic Corpora (OSAC) 4 includes 22,429 text documents, where each document belongs to one of 10 categories: Economics, History, Entertainment, Education & Family, Religion and Fatwas, Sports, Health, Astronomy, Law, Stories, and Cooking Recipes. The evaluation of our proposed approach is carried out on a collection of historical documents from OSAC 3 containing 3,233 text documents. The parameters we used, and which made our approach efficient, are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "\u2022 The word vector representations produced by word2vec with the Skip-gram model are stored in a matrix of size n*k. In our case, we used more than 350 million words extracted from Wikipedia, and we fixed k at 5, which represents the context window around each word: two words before and two words after the target word in the middle. \u2022 The experiments of this study on identifying paraphrase between Arabic texts included the implementation of two combined similarity measures; their results are reported in Table 3 (Results of paraphrase identification approaches). To conclude, the combination of distributed word vector representations and the TF-IDF method gave good results with each comparison measure cited above (Cosine Similarity and Euclidean Distance), and especially when the two measures were combined, yielding a promising plagiarism detection rate in terms of precision and recall.",
"cite_spans": [],
"ref_spans": [
{
"start": 470,
"end": 477,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "We proposed a semantic textual similarity approach for paraphrase identification in Arabic texts based on the combination of different Natural Language Processing (NLP) techniques: the TF-IDF technique, to improve the identification of words that are highly descriptive in each sentence, and distributed word vector representations using the word2vec algorithm, to reduce computational complexity and optimize the probability of predicting words in context. These representations were subsequently used to generate sentence vector representations, after which a similarity measurement was applied based on the combination of different comparison metrics: Cosine Similarity and Euclidean Distance. Finally, our proposed approach was evaluated on the Open Source Arabic Corpus (OSAC) and obtained a promising rate. Despite these promising results, several improvements will be applied to our method later on, such as the use of a Convolutional Neural Network (CNN) to improve the ability to capture statistical regularities in the context of sentences, and the combination of word vector representations with methods like Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) to improve the similarity measure and compensate for the weaknesses of each method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Works",
"sec_num": "5"
},
{
"text": "https://en.wikipedia.org/wiki/Singular_value_decomposition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A survey of plagiarism detection for Arabic documents",
"authors": [
{
"first": "Y",
"middle": [
"A"
],
"last": "Abderahman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Khalid",
"suffix": ""
},
{
"first": "I",
"middle": [
"M"
],
"last": "Osman",
"suffix": ""
}
],
"year": 2016,
"venue": "International Journal of Advanced Computer Technology IJACT",
"volume": "4",
"issue": "6",
"pages": "34--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abderahman Y. A., Khalid A. and Osman I. M.. (2016). A survey of plagiarism detection for Arabic documents, International Journal of Advanced Computer Technology IJACT, volume 4, n. 6, pp. 34-38.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Cross-language plagiarism of Arabic-English documents using linear logistic regression",
"authors": [
{
"first": "Alaa",
"middle": [
"Z"
],
"last": "Tiun",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Abdulameer",
"suffix": ""
},
{
"first": "M",
"middle": [
"H"
],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Theoretical and Applied Information Technology",
"volume": "83",
"issue": "1",
"pages": "20--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alaa Z., Tiun S. and Abdulameer M. H. (2016). Cross-language plagiarism of Arabic-English documents using linear logistic regression, Journal of Theoretical and Applied Information Technology (JATIT), Volume 83, No. 1, pp. 20-33.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Tokenization as Preprocessing for Arabic Tagging System",
"authors": [
{
"first": "A",
"middle": [
"H"
],
"last": "Aliwy",
"suffix": ""
}
],
"year": 2012,
"venue": "International Journal of Information and Education Technology",
"volume": "2",
"issue": "4",
"pages": "348--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliwy A. H. (2012). Tokenization as Preprocessing for Arabic Tagging System, International Journal of Information and Education Technology, volume 2, No. 4, pp. 348-353.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Analysis of similarity and differences between articles using semantics",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bihi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bihi A. (2017). Analysis of similarity and differences between articles using semantics, Malardalen University, Sweden.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Elaboration of a model for an indexed base for teaching Arabic language to disabled people",
"authors": [
{
"first": "A",
"middle": [],
"last": "Boudhief",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Maraoui",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zrigui",
"suffix": ""
}
],
"year": 2014,
"venue": "th International Conference on CIST",
"volume": "6",
"issue": "",
"pages": "110--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boudhief A., Maraoui M. and Zrigui M. (2014). Elaboration of a model for an indexed base for teaching Arabic language to disabled people, 6th International Conference on CIST, pp. 110-116.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A deep learning approach to Persian plagiarism detection",
"authors": [
{
"first": "E",
"middle": [],
"last": "Gharavi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Bijari",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Zahirnia",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Veisi",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gharavi E., Bijari K., Zahirnia K. and Veisi H. (2015). A deep learning approach to Persian plagiarism detection, India.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using Explicit Semantic Similarity for an Improved Web Explorer with ontology and TF-IDF",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hafeez",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Patil",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal of Advance Scientific Research and Engineering Trends",
"volume": "2",
"issue": "7",
"pages": "171--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hafeez S. and Patil B. (2017). Using Explicit Semantic Similarity for an Improved Web Explorer with ontology and TF-IDF, International Journal of Advance Scientific Research and Engineering Trends, Volume 2, Issue 7, pp. 171-173.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Arabic document similarity analysis using Ngram and Singular Value Decomposition",
"authors": [
{
"first": "A",
"middle": [
"S"
],
"last": "Hussein",
"suffix": ""
}
],
"year": 2017,
"venue": "9th International Conference on Research Challenges in Information Science (RCIS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hussein A. S. (2017). Arabic document similarity analysis using N-gram and Singular Value Decomposition, 9th International Conference on Research Challenges in Information Science (RCIS).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Short text similarity with word embeddings, International Conference on Information and Knowledge Management CIKM'15",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kenter",
"suffix": ""
},
{
"first": "M",
"middle": [
"D"
],
"last": "Rijke",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenter T. and Rijke M. D. (2015). Short text similarity with word embeddings, International Conference on Information and Knowledge Management CIKM'15, Australia.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "LTSG: Latent Topical Skip-Gram for Mutually Learning Topic Model and Vector Representations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Law",
"suffix": ""
},
{
"first": "H",
"middle": [
"H"
],
"last": "Zhuo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Rong",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Science, Computation and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Law J., Zhuo H. H., He J. and Rong E. (2017). LTSG: Latent Topical Skip-Gram for Mutually Learning Topic Model and Vector Representations, Cornell University Library, Computer Science, Computation and Language, United States.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Towards a hybrid approach to semantic analysis of spontaneous Arabic speech",
"authors": [
{
"first": "C",
"middle": [],
"last": "Lihioui",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zouaghi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zrigui",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lihioui C., Zouaghi A. and Zrigui M. (2014). Towards a hybrid approach to semantic analysis of spontaneous Arabic speech,",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"content": "<table><tr><td>Example</td><td>Translation</td><td>Function</td></tr></table>",
"type_str": "table",
"text": "",
"html": null
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>Proposed Approaches</td><td>Precision</td><td>Recall</td></tr><tr><td>TF-IDF + CombSim</td><td>0.81</td><td>0.79</td></tr><tr><td>Word2vec + CombSim</td><td>0.83</td><td>0.81</td></tr><tr><td>Final Combination: word2vec + TF-IDF + CombSim</td><td>0.85</td><td>0.84</td></tr></table>",
"type_str": "table",
"text": "Two sentences are considered plagiarism (paraphrase) if the result of our proposed similarity detection method CombSim(S1, S2) passes the threshold \u03b1, which was fine-tuned by several trials on the training corpus; the best results were achieved when \u03b1 = 0.3. Each method was tested individually, and the combination method gave the final result of our proposed approach, as shown in the table. //sites.google.com/site/motazsite/corpora/osac",
"html": null
}
}
}
}