ACL-OCL / Base_JSON / prefixW / json / wanlp / 2021.wanlp-1.11.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:58:18.898475Z"
},
"title": "Automatic Difficulty Classification of Arabic Sentences",
"authors": [
{
"first": "Nouran",
"middle": [],
"last": "Khallaf",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Leeds",
"location": {
"settlement": "Leeds",
"postCode": "LS2 9JT",
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Serge",
"middle": [],
"last": "Sharoff",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Leeds",
"location": {
"settlement": "Leeds",
"postCode": "LS2 9JT",
"country": "United Kingdom"
}
},
"email": "s.sharoff@leeds.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a Modern Standard Arabic (MSA) sentence difficulty classifier, which predicts the difficulty of sentences for language learners using either the CEFR proficiency levels or a binary classification as simple or complex. We compare the use of sentence embeddings of different kinds (fastText, mBERT, XLM-R and Arabic-BERT), as well as traditional language features such as POS tags, dependency trees, readability scores and frequency lists for language learners. Our best results have been achieved using fine-tuned Arabic-BERT. Our 3-way CEFR classification reaches F-1 of 0.80 and 0.75 for the Arabic-BERT and XLM-R classifiers respectively, and a Spearman correlation of 0.71 for regression. Our binary difficulty classifier reaches F-1 0.94, and our sentence-pair semantic similarity classifier reaches F-1 0.98.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a Modern Standard Arabic (MSA) sentence difficulty classifier, which predicts the difficulty of sentences for language learners using either the CEFR proficiency levels or a binary classification as simple or complex. We compare the use of sentence embeddings of different kinds (fastText, mBERT, XLM-R and Arabic-BERT), as well as traditional language features such as POS tags, dependency trees, readability scores and frequency lists for language learners. Our best results have been achieved using fine-tuned Arabic-BERT. Our 3-way CEFR classification reaches F-1 of 0.80 and 0.75 for the Arabic-BERT and XLM-R classifiers respectively, and a Spearman correlation of 0.71 for regression. Our binary difficulty classifier reaches F-1 0.94, and our sentence-pair semantic similarity classifier reaches F-1 0.98.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the last century, measuring text readability (TR) has been undertaken in education, psychology, and linguistics. There appears to be some agreement that TR is the quality of a given text that makes it easy to comprehend by its readers in adequate time and with reasonable effort. Research to date has tended to focus on assigning readability levels to whole texts rather than to individual sentences, despite the fact that any text is composed of a number of sentences which vary in their difficulty (Schumacher et al., 2016). Assigning a readability level to a text is a challenging task, and it is even more challenging at the sentence level, where much less information is available. Sentence difficulty is also influenced by many parameters, such as genre or topic, as well as grammatical structures, which need to be combined in a single classifier. Difficulty assessment at the sentence level is thus more challenging than the better researched text-level task, but the availability of a sentence readability classifier for Arabic is vital, since it is a prerequisite for research on automatic text simplification (ATS), i.e. the process of reducing text-linguistic complexity while maintaining its meaning (Saggion, 2017).",
"cite_spans": [
{
"start": 48,
"end": 52,
"text": "(TR)",
"ref_id": null
},
{
"start": 492,
"end": 517,
"text": "(Schumacher et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 1222,
"end": 1237,
"text": "(Saggion, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus here on experiments aimed at measuring to what extent a sentence is understandable by a reader, such as a learner of Arabic as a foreign language, and at exploring different methods for readability assessment. The main aim of this paper lies in developing and testing different sentence representation methodologies, which range from using linguistic knowledge via feature-based machine learning to modern neural methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, the contributions of this paper are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We compiled a novel dataset for training on the sentence level; 2. We developed a range of linguistic features, including POS, syntax and frequency information; 3. We evaluated a range of different sentence embedding approaches, such as fastText, BERT and XLM-R, and compared them to the linguistic features; 4. We cast the readability assessment as a regression problem as well as a classification problem; 5. Our model is the first sentence difficulty system available for Arabic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Corpora and Tools",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This dataset was used for Arabic sentence difficulty classification. We built our own dataset by compiling a corpus from three available sources classified for readability on the document level, along with a large Arabic corpus obtained by Web crawling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset One: Sentence-level annotation",
"sec_num": "2.1"
},
{
"text": "The first corpus source is the reading section of the Gloss 1 Corpus developed by the Defense Language Institute (DLI). It has been treated as a gold standard and used in the most recent studies on document-level predictions (Forsyth, 2014; Saddiki et al., 2015; Nassiri et al., 2018a,b). Texts in Gloss have been annotated on the six-level scale of the Inter-Agency Language Roundtable (ILR), which has been matched to the CEFR levels according to the schema introduced by (Tschirner et al., 2015). Gloss is divided according to the four competence areas (lexical, structural, socio-cultural and discursive) and ten different genres (culture, economy, politics, environment, geography, military, politics, science, security, society, and technology).",
"cite_spans": [
{
"start": 225,
"end": 240,
"text": "(Forsyth, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 241,
"end": 262,
"text": "Saddiki et al., 2015;",
"ref_id": "BIBREF27"
},
{
"start": 263,
"end": 287,
"text": "Nassiri et al., 2018a,b)",
"ref_id": null
},
{
"start": 473,
"end": 497,
"text": "(Tschirner et al., 2015)",
"ref_id": "BIBREF36"
},
{
"start": 634,
"end": 750,
"text": "(culture, economy, politics, environment, geography, military, politics, science, security, society, and technology)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset One: Sentence-level annotation",
"sec_num": "2.1"
},
{
"text": "The second corpus source is the ALC (Arabic Learner Corpus), which consists of Arabic written texts produced by learners of Arabic in Saudi Arabia (Alfaifi and Atwell, 2013). Each text file is annotated with the proficiency level of the student. We mapped these student proficiency levels to CEFR levels.",
"cite_spans": [
{
"start": 136,
"end": 162,
"text": "(Alfaifi and Atwell, 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset One: Sentence-level annotation",
"sec_num": "2.1"
},
{
"text": "Our third corpus source comes from the textbook \"Al-Kitaab fii TaAallum al-Arabiyya\" (Brustad et al., 2015): we compiled texts and sentences from parts one and two of the third edition, but only texts from part three. This book is widely used for teaching Arabic as a second language. These texts were originally classified according to the American Council on the Teaching of Foreign Languages (ACTFL) guidelines, which we mapped to CEFR levels.",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Brustad et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset One: Sentence-level annotation",
"sec_num": "2.1"
},
{
"text": "As these corpora have been annotated on the document level and not on the sentence level, we assigned each sentence the level of the document in which it appears, using several filtering heuristics, such as sentence length and containment, as well as re-annotation through machine learning; see the dataset cleaning procedure below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset One: Sentence-level annotation",
"sec_num": "2.1"
},
{
"text": "A counterpart corpus of texts not produced with language learners in mind is provided by I-AR, 75,630 Arabic web pages collected by wide crawling (Sharoff, 2006). A random snapshot of 8627 sentences longer than 15 words was used to compensate for the limited number of C-level sentences coming from corpora for language learners. Table 1 shows the distribution of the number of sentences and tokens per Common European Framework of Reference [CEFR] level. In principle we have data for 5-way (A1, A2, B1, etc.), 3-way (A, B or C) and binary classification tasks. (Footnote 1: https://gloss.dliflc.edu/)",
"cite_spans": [
{
"start": 145,
"end": 160,
"text": "(Sharoff, 2006)",
"ref_id": "BIBREF34"
},
{
"start": 458,
"end": 464,
"text": "[CEFR]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 319,
"end": 326,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset One: Sentence-level annotation",
"sec_num": "2.1"
},
{
"text": "Table 1: sentences (S) and tokens (T) available per CEFR level in the two versions of the corpus. Old version: A 8661 S / 187225 T; B 5532 S / 126805 T; C 8627 S / 287275 T; Total 22820 S / 601305 T. New version: A 9030 S / 195343 T; B 5083 S / 117825 T; C 8627 S / 287275 T; Total 22740 S / 600443 T. In this presentation, we focus on the 3-way and binary (A+B vs C, i.e. simple vs complex) classification tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 147,
"text": "New S T S T A 8661 187225 9030 195343 B 5532 126805 5083 117825 C 8627 287275 8627 287275 Total 22820 601305 22740 600443 Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "CEFR",
"sec_num": null
},
{
"text": "Dataset cleaning: In our initial experiments we noticed unreliable sentence-level assignments in the training corpus. We therefore decided to improve the quality of the training corpus using an error analysis strategy introduced by Di Bari et al. (2014), which is based on detecting agreement between classifiers belonging to different machine learning paradigms. The cases where the majority of the classifiers agreed on a predicted label while the gold standard differed were inspected manually by a specialist in teaching Arabic. In our dataset cleaning experiment we used the following classifiers: SVM (with the rbf kernel), Random Forest, KNeighbors, Softmax and XGBoost, using the linguistic features discussed in Section 3; we trained them via cross-validation and compared their majority vote to the gold standard. We modified the error classification tags introduced by Di Bari et al. (2014) as follows:",
"cite_spans": [
{
"start": 876,
"end": 894,
"text": "Bari et al. (2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CEFR",
"sec_num": null
},
{
"text": "Wrong if the classifiers have wrongly labelled the data and the gold standard is correct. Modify if the classifiers are correct and we need to modify the gold standard. Ambiguous if either label is possible depending on the perspective. False is an added label which represents disagreement between the gold standard and the classifiers when neither is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CEFR",
"sec_num": null
},
{
"text": "For each sentence, five different predictions are assigned. Compared to the gold standard CEFR label, the classifiers agreed on 10204 instances. The cases we need to consider are those where all classifiers agree on a predicted label that contradicts the gold standard one; the classifiers agreed on such a contradicting classification for 1943 sentences. We manually investigated random sentences and assigned the error classification tags. We found that the main classification confusion was in Level B instances. The analysis results in Table 4 show the distribution of categories in which each error type occurred. In the end, 380 instances had to be assigned to a lower level (usually from B to A).",
"cite_spans": [],
"ref_spans": [
{
"start": 546,
"end": 553,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "CEFR",
"sec_num": null
},
{
"text": "A set of simple/complex parallel sentences has been compiled from the internationally acclaimed Arabic novel \"Saaq al-Bambuu\" (Al-Sanousi, 2013), which has an authorized simplified version for students of Arabic as a second language (Familiar, 2016). We assume that a successful classifier should be able to detect sentences in the original text that require simplification. Dataset Two consists of 2980 parallel sentence pairs, see Table 2 .",
"cite_spans": [
{
"start": 232,
"end": 248,
"text": "(Familiar, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 423,
"end": 430,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset Two: Simplification examples",
"sec_num": "2.2"
},
{
"text": "Table 2: Simple (A+B): 2980 sentences, 34447 tokens; Complex (C): 2980 sentences, 46521 tokens; Total: 5960 sentences, 80968 tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Level",
"sec_num": null
},
{
"text": "We work with the following groups of features in Table 3 : part-of-speech tagging features (POS-features); syntactic structure features (Syntactic-features); CEFR-level lexical features; and sentence embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Features and extraction methods",
"sec_num": "3"
},
{
"text": "While the sentence-level classification task is novel, we borrowed some features from previous studies of text-level readability (Forsyth, 2014; Saddiki et al., 2015; Nassiri et al., 2018a,b). We decided to exclude sentence length from the feature set, as it creates an artificial skew in understanding what is difficult: more difficult writing styles are often associated with longer sentences, but it is not the sentence length itself which makes them difficult. Specifically, many long Arabic sentences contain shorter ones connected by conjunctions such as \u0648 /wa/ ('and'). According to the experience of language teachers, such sentences do not present problems for the learners.",
"cite_spans": [
{
"start": 129,
"end": 144,
"text": "(Forsyth, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 145,
"end": 166,
"text": "Saddiki et al., 2015;",
"ref_id": "BIBREF27"
},
{
"start": 167,
"end": 191,
"text": "Nassiri et al., 2018a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic features",
"sec_num": "3.1"
},
{
"text": "Features (1-21) in Table 3 represent the distribution of different word categories in the sentence and the morpho-syntactic features of these words. According to Knowles and Don (2004), Arabic lemmatization, unlike that of English, is an essential process for analysing Arabic text, because it is a methodology for dictionary construction. Therefore, we used the Lemma/Type ratio instead of the Word/Type ratio. We added features representing the different verb types (pseudo verbs, passive verbs, perfective verbs, imperfective verbs and 3rd person). As conjunction is one of the important features for representing sentence complexity in Arabic (Forsyth, 2014), we used the annotated discourse connectors introduced by Alsaif (2012), splitting this list into 23 simple connectors and 56 complex connectors, referring to non-discourse connectors and discourse connectors respectively. For POS-feature extraction we used MADAMIRA, a robust Arabic morphological analyser and part-of-speech tagger (Pasha et al., 2014).",
"cite_spans": [
{
"start": 179,
"end": 201,
"text": "Knowles and Don (2004)",
"ref_id": "BIBREF21"
},
{
"start": 654,
"end": 669,
"text": "(Forsyth, 2014)",
"ref_id": "BIBREF18"
},
{
"start": 1002,
"end": 1022,
"text": "(Pasha et al., 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 2,
"end": 9,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The POS-features",
"sec_num": "3.1.1"
},
{
"text": "Features (22-27) from Table 3 provide information about sentence structure, the number of phrases, and the phrase types. These features are derived from a dependency grammar analysis. Because dependency grammar is based on word-word relations, it assumes that the structure of a sentence consists of lexical items attached to each other by binary asymmetrical relations, known as dependency relations. These relations are more representative for this task. We used CamelParser (Shahrour et al., 2016), a system for Arabic syntactic dependency analysis combined with contextually disambiguated morphological features, which relies on the MADAMIRA morphological analysis for more robust results.",
"cite_spans": [
{
"start": 511,
"end": 534,
"text": "(Shahrour et al., 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Syntactic features",
"sec_num": "3.1.2"
},
{
"text": "Features (28-34) are based on three lexical resources: 1) the frequency dictionary derived from a 30-million-word corpus of academic/non-academic and written/spoken texts (Buckwalter and Parkinson, 2014); 2) the KELLY list produced by the Kelly project (Kilgarriff et al., 2014), which directly mapped a frequency word list to the CEFR levels using numerous corpora and languages; 3) the lists presented at the beginning of each chapter in 'Al-Kitaab' (Brustad et al., 2015). Merging the lists and aligning them with the MADAMIRA lemmatiser led to our new wide-coverage Arabic frequency list, which can be used to predict difficulty as the entropy of the probability distribution of CEFR labels in a sentence. The current list shows some consistency with the English Profile list in terms of the percentage of words allocated to each CEFR level.",
"cite_spans": [
{
"start": 182,
"end": 207,
"text": "(Kilgarriff et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 378,
"end": 400,
"text": "(Brustad et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CEFR-level lexical features",
"sec_num": "3.1.3"
},
{
"text": "In addition to the 34 traditional features we can represent sentences as embedding vectors using different neural models as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence embeddings",
"sec_num": "3.2"
},
{
"text": "fastText A straightforward way to create sentence representations is to take a weighted average of the word embeddings (WE) of each word, for example using fastText vectors. These embeddings were trained on Common Crawl and Wikipedia using the fastText 2 tool; we used the Arabic ar.300.bin file, in which each word is represented by a 300-dimensional vector (Grave et al., 2018). We normalised the sentence vectors so that they have the same dimensionality regardless of sentence length. For this, we calculated the tf-idf weight of each word in the corpus to use as weights:",
"cite_spans": [
{
"start": 366,
"end": 386,
"text": "(Grave et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence embeddings",
"sec_num": "3.2"
},
{
"text": "s = w_1 w_2 ... w_n, Embed[s] = (1/n) \u03a3_i tfidf[w_i] * Embed[w_i] (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence embeddings",
"sec_num": "3.2"
},
{
"text": "Universal sentence encoder (Yang et al., 2019) This model captures the meaning of word sequences rather than just individual words. It was designed mainly to be used on the sentence level: after sentence tokenization, it encodes a sentence into a 512-dimensional vector. Here we used the large version 3 .",
"cite_spans": [
{
"start": 27,
"end": 46,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence embeddings",
"sec_num": "3.2"
},
{
"text": "Multilingual BERT (Devlin et al., 2018) Pre-trained transformer models have proved their ability to learn successful representations of language. They build on the transformer model presented in (Vaswani et al., 2017), which introduced using attention to incorporate context information into sequence representations. Here, we used the last layer produced by the BERT transformer while padding the sentences to a maximum length of 128 tokens.",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 187,
"end": 209,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence embeddings",
"sec_num": "3.2"
},
{
"text": "XLM-R (Conneau et al., 2019) This is another multilingual BERT-like model, which differs from mBERT by being trained on Common Crawl (instead of Wikipedia) with slightly different parameters. We used the same classification setup as for mBERT, while also testing a setup that combines its output with the linguistic features into a joint feature vector for traditional ML classification.",
"cite_spans": [
{
"start": 6,
"end": 28,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence embeddings",
"sec_num": "3.2"
},
{
"text": "Arabic BERT We fine-tuned two BERT-like pre-trained Arabic transformer models available from the Hugging Face transformers library (AraBERT 4 and Arabic-BERT 5 ). Both models cover both Modern Standard Arabic (MSA) and Dialectal Arabic (DA). The pretraining data used for the AraBERT model consist of 70 million sentences (Antoun et al., 2020). Arabic-BERT was trained on both a filtered Arabic Common Crawl and a recent dump of Arabic Wikipedia, containing approximately 8.2 billion words (Safaya et al., 2020).",
"cite_spans": [
{
"start": 316,
"end": 337,
"text": "(Antoun et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 475,
"end": 496,
"text": "(Safaya et al., 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence embeddings",
"sec_num": "3.2"
},
{
"text": "CEFR language proficiency levels can be presented as labels or as a continuous scale. The former is solved as a classification task with macro-averaged F-1 as the main accuracy measure; the latter as a regression task (Vajjala and Loo, 2014). At first we decided to work with the three main levels (A, B and C), because it was quite difficult to determine the boundaries between the inner sub-levels, such as the boundary between B1 and B2. The other, binary classification is either Simple (A+B) or Complex (C). Here there is a problem for evaluation, since the gold standard labels are represented as integers 1, 2, 3 (for the A, B and C levels respectively), which leads to a large number of ties. Of the standard correlation measures, Kendall's tau-b is designed to handle ties, so in addition to Pearson's \u03c1 this is our measure for regression (Maurice and Dickinson, 1990). Table 4 presents the results of classification using the updated version of Dataset One after application of the error analysis, applying different ML approaches with 10-fold cross-validation to the 3-way multi-class classification. As the results in Table 4 show, on the one hand, using linguistic features along with sentence embedding vectors, the SVM classifier with rbf kernel provides the best F-1 of 0.75 on the updated corpus version; the SVM classifier is slightly better than both XGBoost and Softmax in precision, and they have roughly the same recall. On the other hand, comparing sentence embeddings of different kinds (XLM-R, mBERT, fastText and USE, along with AraBERT and Arabic-BERT) indicated that Arabic-BERT is a clear winner with F-1 0.80. Since the architectures of all BERT-like models are very similar, we suspect that the more varied Arabic corpus used to train Arabic-BERT (Common Crawl and Wikipedia for Arabic-BERT, vs Common Crawl for XLM-R, vs Wikipedia for mBERT, AraBERT and USE) is responsible for its better performance. The confusion matrix in Table 5 shows a clear separation between the lower and higher levels of proficiency. The majority of errors are between neighbouring levels, and the number of errors decreases as we move away from the true class. The most problematic level was B, which has a tendency to be classified as CEFR Level A.",
"cite_spans": [
{
"start": 232,
"end": 255,
"text": "(Vajjala and Loo, 2014)",
"ref_id": "BIBREF37"
},
{
"start": 863,
"end": 892,
"text": "(Maurice and Dickinson, 1990)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 895,
"end": 902,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 1166,
"end": 1173,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 2015,
"end": 2022,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Confusion matrix (rows: true class; columns: predicted A, B, C). A: 7485, 1021, 156; B: 4506, 1112, 0; C: 0, 0, 8627.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Readability as a Classification Problem",
"sec_num": "4.1"
},
{
"text": "Regression allows us to make ranked predictions along the discrete CEFR levels, thus assessing which text is more difficult than another. Training proceeded just as in the previous experiment, applying different ML approaches with 10-fold cross-validation. The results for regression can be rated using the mean absolute error (MAE) from the gold standard and the correlation coefficients: Pearson, Spearman and Kendall's tau. The results are listed in Table 6 . As with classification, error analysis leads to improved results across all methods. The best MAE of 0.34 shows that sentence difficulty prediction is quite close to the gold labels. As mentioned before, our data have a very large number of ties in the gold labels (which can only take three values), so the preferred evaluation measure for regression is Kendall's tau-b. The best models are RF and SVR on the XLM-R features.",
"cite_spans": [],
"ref_spans": [
{
"start": 447,
"end": 454,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Readability as a Regression Problem",
"sec_num": "4.2"
},
{
"text": "Interpreting feature importance and effectiveness is a way to better understand the logic of a classification ML model. This process ranks the features by assigning each one a score representing its contribution to predicting the target label. These scores provide insights into data representation and model performance. Working with these feature rankings can improve the model's efficiency and effectiveness by focusing only on the important variables and ignoring irrelevant or noisy features. For this purpose, we applied Recursive Feature Elimination (RFE), a wrapper method for feature selection, on the basis of the SVM classifier. RFE works by recursively removing some features and testing the remaining ones to select the feature set which most affects the classifier decisions. The results of the RFE approach with the SVM classifier, as presented in Table 7 , show the ten features that contribute most to the prediction model. Sentence embedding using XLM-R appeared at the top of the list, indicating that it is the most useful feature for sentence difficulty scoring. It is followed by the CEFR word frequency features, with four features in different positions (Label A1, Label B2, Label C2, and Entropy). The third most effective group is the syntactic feature set, which represents more in-depth syntactic knowledge of the sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 898,
"end": 905,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "4.3"
},
{
"text": "Going further, we performed feature ablation experiments by excluding certain sets of features. We applied the SVM rbf classifier to the full dataset while excluding one of the four main feature groups (POS, syntactic features, CEFR-level lexical features, sentence embeddings), along with using only the sentence embeddings (XLM-R, since this performed best in the comparison of embeddings). This shows that the sentence embeddings contribute significantly to the classification results (Table 8), in spite of the efforts to create hand-crafted features. Nevertheless, the linguistic features are useful in interpreting the results of purely neural classification. These results show that the transformer models provide a rich representation of the sentences, covering the linguistic features. The results from the feature selection and the ablation experiments showed that sentence embeddings alone can fulfill the task without extensive use of linguistic features. This encouraged us to continue experimentation with only the sentence embedding features, which reduces the number of features and consequently decreases data analysis and training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation",
"sec_num": "4.4"
},
{
"text": "For the binary classification, the classifier reached F-1 of 0.94 and 0.98 for Arabic-BERT and SVM XLM-R respectively. However, when testing the binary classifiers trained on Dataset One against Dataset Two, the accuracy drops considerably, see Table 9 .",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 248,
"text": "Table 9",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Testing on Dataset Two",
"sec_num": "4.5"
},
{
"text": "As the confusion matrix in Table 10 shows, both classifiers performed better at identifying the complex instances than the simple ones, so the F-1 measure drops (Table 9: Arabic-BERT vs XLM-R: P 0.60 vs 0.56, R 0.50 vs 0.53, F-1 0.53 vs 0.54). Although the initial results on Dataset Two show that the XLM-R classifier performed better than Arabic-BERT, we still consider the Arabic-BERT classifiers [both 3-way and binary] the best so far. Our interpretation of these confusions is the fictional nature of Dataset Two. First, fiction is well represented in the training data for the A+B levels in Dataset One, while the C level (the snapshot corpus) contains texts of many different types from the internet, so the classifiers could not handle the mismatch in genres. The other possible reason is that sentences considered complex enough to be worth simplifying by the developers of Dataset Two do not necessarily seem complex enough to be suitable only for C-level students. More research is needed to identify the difference between the two datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Table 10",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Testing on Dataset Two",
"sec_num": "4.5"
},
{
"text": "For the purpose of this experiment we needed to add non-simplified sentences to Dataset Two. We therefore duplicated the 2980 complex sentences, aligning each one with its own unmodified copy, and labelled these pairs with 0, indicating not paraphrased/simplified. This resulted in a dataset consisting of 2980 sentence pairs with a correct simplification, labelled 1, and 2980 non-simplified pairs, labelled 0, 5960 pairs in total. The two models trained on this similarity task (AraBERT and Arabic-BERT) achieve an F-1 measure of 0.98, demonstrating the ability to detect sentences which need simplification according to the Dataset Two standard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Two Sentence Similarity",
"sec_num": "4.6"
},
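The pairing procedure described in the section above can be sketched as follows (a minimal sketch of our reading of the step, not the authors' exact implementation; the function name is illustrative):

```python
def build_similarity_pairs(aligned_pairs):
    """aligned_pairs: list of (complex_sentence, simplified_sentence) tuples.

    Returns labelled sentence pairs: label 1 for a genuine simplification,
    label 0 for an identity pair (the complex sentence duplicated unchanged).
    """
    examples = []
    for complex_s, simple_s in aligned_pairs:
        examples.append((complex_s, simple_s, 1))   # correctly simplified
        examples.append((complex_s, complex_s, 0))  # not paraphrased/simplified
    return examples

# 2,980 aligned pairs yield 5,960 labelled examples in total
aligned = [("complex %d" % i, "simple %d" % i) for i in range(2980)]
data = build_similarity_pairs(aligned)
```

The identity pairs give the sentence-pair classifier negative evidence, so it learns to distinguish a real simplification from a sentence left unchanged.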
{
"text": "The last two decades have seen enormous efforts (especially for the English language) to develop readability measurement ranging from the traditional readability formulae to ML algorithms. English language researchers have introduced more than 200 readability formulae (DuBay, 2004) as well as hundreds of models (Schwarm and Ostendorf, 2005) . In contrast, less research has addressed Arabic language issues and their challenges for robust readability formulae. Some attempts to formulate statistical formulae for the Arabic language reflected traditional English formulae such as the Flesch-Kincaid Grade. The simplest formulae included the average word length, the average sentence and other surface features. According to these simple formulae are Dawood formula (1977) , Al-Heeti formula (1984) , and the formula presented by Daud et al. (2013) based on a corpus. The more sophisticated formulae represent the syllables and more insights the Arabic sentences, such as AARI Base by Al Tamimi et al. (2014) and OSMAN by El-Haj and Rayson (2016) . Other studies were conducted to measure text readability by targeting either first or second language learners for Arabic language modelled using different ML algorithms. Most of these studies used the previously traditional features along with varying lists of part of speech features (POS) representing the words in each document as in studies by (Al-Khalifa and Al-Ajlan, 2010; Forsyth, 2014; Saddiki et al., 2015; Nassiri et al., 2018a) . Forsyth (2014) used the word frequency dictionary by Buckwalter and Parkinson (2014) to classify the words' level against this dictionary frequencies. The dictionary was used later by Nassiri et al. (2018b) along with 133 POS features to achieve an accuracy of 100% with 3-classes. The 'Al-Kitaab' textbook has a word list introduced at the beginning of each chapter in the book. These lists were used by Cavalli-Sforza et al. 
(2014) for comparing the words appeared in a text against this list and labelling them by (target, known, unknown) . , highlight adding new syntactic features to their features targeting more in-depth analysis. They used two different datasets for both first and second Arabic language learning. This yielded an accuracy of 94.8%, 72.4% for first language learners and second language learners respectively.",
"cite_spans": [
{
"start": 269,
"end": 282,
"text": "(DuBay, 2004)",
"ref_id": "BIBREF15"
},
{
"start": 313,
"end": 342,
"text": "(Schwarm and Ostendorf, 2005)",
"ref_id": "BIBREF32"
},
{
"start": 767,
"end": 773,
"text": "(1977)",
"ref_id": null
},
{
"start": 776,
"end": 799,
"text": "Al-Heeti formula (1984)",
"ref_id": null
},
{
"start": 831,
"end": 849,
"text": "Daud et al. (2013)",
"ref_id": "BIBREF11"
},
{
"start": 989,
"end": 1009,
"text": "Tamimi et al. (2014)",
"ref_id": "BIBREF2"
},
{
"start": 1023,
"end": 1047,
"text": "El-Haj and Rayson (2016)",
"ref_id": "BIBREF16"
},
{
"start": 1399,
"end": 1430,
"text": "(Al-Khalifa and Al-Ajlan, 2010;",
"ref_id": "BIBREF0"
},
{
"start": 1431,
"end": 1445,
"text": "Forsyth, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 1446,
"end": 1467,
"text": "Saddiki et al., 2015;",
"ref_id": "BIBREF27"
},
{
"start": 1468,
"end": 1490,
"text": "Nassiri et al., 2018a)",
"ref_id": "BIBREF23"
},
{
"start": 1493,
"end": 1507,
"text": "Forsyth (2014)",
"ref_id": "BIBREF18"
},
{
"start": 1546,
"end": 1577,
"text": "Buckwalter and Parkinson (2014)",
"ref_id": "BIBREF7"
},
{
"start": 1677,
"end": 1699,
"text": "Nassiri et al. (2018b)",
"ref_id": "BIBREF24"
},
{
"start": 1898,
"end": 1926,
"text": "Cavalli-Sforza et al. (2014)",
"ref_id": "BIBREF8"
},
{
"start": 2010,
"end": 2034,
"text": "(target, known, unknown)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work on Arabic",
"sec_num": "5"
},
{
"text": "We present the first attempt to build a methodology for Arabic difficulty classification on the sentence level. We have found that while linguistic features, such as POS tags, syntax or frequency lists are useful for prediction, Deep Learning is the most important contribution to performance, but the traditional features can help in interpreting the black box of Deep Learning alone. For this specific task and for the Arabic language, fine-tuned Arabic-BERT offers better performance than other sentence embedding methods. Also, application of the classifiers trained on one dataset to a very different evaluation corpus shows that the classifiers learn some important properties of what is difficult in Arabic, but the transfer is more successful for the feature-based models than for the BERT-based ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In the end, our best classifier is reasonably reliable in detecting complex sentences; however, it is less successful in separating between the lower learner levels. Still the binary classifier provides the functionality for filtering out really difficult sentences, not suitable for the learners. If we are thinking of Arabic learners especially in higher education, we are expecting learners to graduate with a BA degree in the case of Arabic as a complex language with confidence in reading B2 texts, which implies that the tool for separating A+B vs C level texts is really useful for undergraduate teaching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Through our tool providing computational assessment of difficulty, we will be able: i) to select the appropriate texts for students; ii) to access everlarger volumes of information to find educational material of the right difficulty online; iii) to explore curriculum-based assessment to find what is most effective in finding gaps in a curriculum that can be filled according to students' needs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Our future work involves building a parallel simple/complex Arabic corpus for sentence simplification. The corpus will be classified on the basis of how difficult the sentences are in a Common Crawl snapshot of Arabic web pages. Using the text difficulty classifier, we can split the corpus into two groups for complex and simple sentences. We also consider the semantic similarity detection on \"Saaq al-Bambuu\" as a benchmark, which could be used in the corpus compilation. In this study we only performed some ablation analysis, but because BERT-like models are more useful as the classifiers, we want to investigate their performance via probing for linguistic features following the BERTology framework (Rogers et al., 2020; Sharoff, 2021) . We also want to explore the link between the difficulty assessment on the document vs sentences levels (Dell'Orletta et al., 2014) .",
"cite_spans": [
{
"start": 707,
"end": 728,
"text": "(Rogers et al., 2020;",
"ref_id": "BIBREF26"
},
{
"start": 729,
"end": 743,
"text": "Sharoff, 2021)",
"ref_id": "BIBREF35"
},
{
"start": 849,
"end": 876,
"text": "(Dell'Orletta et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
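The planned corpus split can be illustrated with a short sketch (hypothetical: `is_complex` stands in for the trained binary difficulty classifier, which the text above does not specify in code):

```python
def split_corpus(sentences, is_complex):
    """Partition sentences into simple and complex groups using a
    binary difficulty predictor (here, any callable returning a bool)."""
    simple, complex_ = [], []
    for s in sentences:
        (complex_ if is_complex(s) else simple).append(s)
    return simple, complex_

# toy stand-in predictor: treat long sentences as complex
toy_predictor = lambda s: len(s.split()) > 8

simple, cmplx = split_corpus(
    ["A short sentence.",
     "A considerably longer sentence with many more words than the first one."],
    toy_predictor,
)
```

In practice the predictor would be the fine-tuned Arabic-BERT binary classifier applied to each sentence of the Common Crawl snapshot.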
{
"text": "https://fasttext.cc/docs/en/crawl-vectors.html 3 https://tfhub.dev/google/universal-sentence-encodermultilingual/1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/aubmindlab/bert-base-arabert 5 https://huggingface.co/asafaya/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is a part of PhD project funded by Newton-Mosharafa Fund. All experiments presented in this paper were performed using Advanced Research Computing (ARC) facilities provided by Leeds University.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic readability measurements of the arabic text: An exploratory study",
"authors": [
{
"first": "Amani A",
"middle": [],
"last": "Hend S Al-Khalifa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Al-Ajlan",
"suffix": ""
}
],
"year": 2010,
"venue": "Arabian Journal for Science and Engineering",
"volume": "35",
"issue": "2",
"pages": "103--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hend S Al-Khalifa and Amani A Al-Ajlan. 2010. Au- tomatic readability measurements of the arabic text: An exploratory study. Arabian Journal for Science and Engineering, 35(2 C):103-124.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Saaq al-Bambuu",
"authors": [
{
"first": "Saud",
"middle": [],
"last": "Al-Sanousi",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saud Al-Sanousi. 2013. Saaq al-Bambuu. Arab Scien- tific Publishers Inc., Lebanon.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Aari: automatic arabic readability index",
"authors": [
{
"first": "Abdel",
"middle": [
"Karim"
],
"last": "",
"suffix": ""
},
{
"first": "Al",
"middle": [],
"last": "Tamimi",
"suffix": ""
},
{
"first": "Manar",
"middle": [],
"last": "Jaradat",
"suffix": ""
},
{
"first": "Nuha",
"middle": [],
"last": "Al-Jarrah",
"suffix": ""
},
{
"first": "Sahar",
"middle": [],
"last": "Ghanem",
"suffix": ""
}
],
"year": 2014,
"venue": "Int. Arab J. Inf. Technol",
"volume": "11",
"issue": "4",
"pages": "370--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdel Karim Al Tamimi, Manar Jaradat, Nuha Al- Jarrah, and Sahar Ghanem. 2014. Aari: automatic arabic readability index. Int. Arab J. Inf. Technol., 11(4):370-378.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Arabic learner corpus v1: A new resource for arabic language research",
"authors": [
{
"first": "A",
"middle": [],
"last": "Alfaifi",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"S"
],
"last": "Atwell",
"suffix": ""
}
],
"year": 2013,
"venue": "proceedings of the Second Workshop on Arabic Corpus Linguistics (WACL-2)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Alfaifi and Eric S Atwell. 2013. Arabic learner cor- pus v1: A new resource for arabic language research. In In proceedings of the Second Workshop on Arabic Corpus Linguistics (WACL-2). Leeds.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Human and automatic annotation of discourse relations for Arabic",
"authors": [
{
"first": "Amal",
"middle": [],
"last": "Alsaif",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amal Alsaif. 2012. Human and automatic annota- tion of discourse relations for Arabic. University of Leeds.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Arabert: Transformer-based model for arabic language understanding",
"authors": [
{
"first": "Wissam",
"middle": [],
"last": "Antoun",
"suffix": ""
},
{
"first": "Fady",
"middle": [],
"last": "Baly",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.00104"
]
},
"num": null,
"urls": [],
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. Arabert: Transformer-based model for arabic language understanding. arXiv preprint arXiv:2003.00104.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Al-Kitaab fii Tacallum al-cArabiyya. A Textbook for Beginning Arabic: Part One Third Edition",
"authors": [
{
"first": "K",
"middle": [],
"last": "Brustad",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Al-Batal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Al-Tonsi",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Brustad, M Al-Batal, and A Al-Tonsi. 2015. Al- Kitaab fii Tacallum al-cArabiyya. A Textbook for Be- ginning Arabic: Part One Third Edition. George- town University Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A frequency dictionary of Arabic: Core vocabulary for learners",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
},
{
"first": "Dilworth",
"middle": [],
"last": "Parkinson",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Buckwalter and Dilworth Parkinson. 2014. A fre- quency dictionary of Arabic: Core vocabulary for learners. Routledge.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Matching an arabic text to a learners' curriculum",
"authors": [
{
"first": "Violetta",
"middle": [],
"last": "Cavalli-Sforza",
"suffix": ""
},
{
"first": "Mariam",
"middle": [
"El"
],
"last": "Mezouar",
"suffix": ""
},
{
"first": "Hind",
"middle": [],
"last": "Saddiki",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. 5th Int. Conf. on Arabic Language Processing",
"volume": "",
"issue": "",
"pages": "79--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Violetta Cavalli-Sforza, Mariam El Mezouar, and Hind Saddiki. 2014. Matching an arabic text to a learn- ers' curriculum. In Proc. 5th Int. Conf. on Arabic Language Processing (CITALA), Oujda, Morocco, pages 79-88.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Arabic readability research: Current state and future directions",
"authors": [
{
"first": "Violetta",
"middle": [],
"last": "Cavalli-Sforza",
"suffix": ""
},
{
"first": "Hind",
"middle": [],
"last": "Saddiki",
"suffix": ""
}
],
"year": 2018,
"venue": "Procedia computer science",
"volume": "142",
"issue": "",
"pages": "38--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Violetta Cavalli-Sforza, Hind Saddiki, and Naoual Nas- siri. 2018. Arabic readability research: Current state and future directions. Procedia computer science, 142:38-49.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A corpus-based readability formula for estimate of arabic texts reading difficulty",
"authors": [
{
"first": "Haslina",
"middle": [],
"last": "Nuraihan Mat Daud",
"suffix": ""
},
{
"first": "Normaziah Abdul",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Aziz",
"suffix": ""
}
],
"year": 2013,
"venue": "World Applied Sciences Journal",
"volume": "21",
"issue": "",
"pages": "168--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nuraihan Mat Daud, Haslina Hassan, and Nor- maziah Abdul Aziz. 2013. A corpus-based readabil- ity formula for estimate of arabic texts reading diffi- culty. World Applied Sciences Journal, 21:168-173.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Assessing document and sentence readability in less resourced languages and across textual genres",
"authors": [
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Simonetta",
"middle": [],
"last": "Montemagni",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Venturi",
"suffix": ""
}
],
"year": 2014,
"venue": "ITL-International Journal of Applied Linguistics",
"volume": "165",
"issue": "2",
"pages": "163--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felice Dell'Orletta, Simonetta Montemagni, and Giu- lia Venturi. 2014. Assessing document and sentence readability in less resourced languages and across textual genres. ITL-International Journal of Applied Linguistics, 165(2):163-193.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multiple views as aid to linguistic annotation error analysis",
"authors": [
{
"first": "Di",
"middle": [],
"last": "Marilena",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Bari",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Sharoff",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LAW VIII-The 8th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "82--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilena Di Bari, Serge Sharoff, and Martin Thomas. 2014. Multiple views as aid to linguistic annotation error analysis. In Proceedings of LAW VIII-The 8th Linguistic Annotation Workshop, pages 82-86.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The principles of readability",
"authors": [
{
"first": "H",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dubay",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William H DuBay. 2004. The principles of readability. Online Submission.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Osman-a novel arabic readability metric",
"authors": [
{
"first": "Mahmoud",
"middle": [],
"last": "El",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Haj",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Rayson",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "250--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahmoud El-Haj and Paul Rayson. 2016. Osman-a novel arabic readability metric. In Proceedings of the Tenth International Conference on Language Re- sources and Evaluation (LREC'16), pages 250-255.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Saud al-Sanousi's Saaq al-Bambuu: The Authorized Abridged Edition for Students of Arabic",
"authors": [
{
"first": "Laila",
"middle": [],
"last": "Familiar",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laila Familiar. 2016. Saud al-Sanousi's Saaq al- Bambuu: The Authorized Abridged Edition for Stu- dents of Arabic. Georgetown University Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic readability prediction for modern standard Arabic",
"authors": [
{
"first": "Jonathan",
"middle": [
"Neil"
],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Neil Forsyth. 2014. Automatic readability prediction for modern standard Arabic. Ph.D. the- sis, Brigham Young University. Department of Lin- guistics and English Language.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018) [Online].",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Corpus-based vocabulary lists for language learners for nine languages. Language resources and evaluation",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "Frieda",
"middle": [],
"last": "Charalabopoulou",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Gavrilidou",
"suffix": ""
},
{
"first": "Janne",
"middle": [],
"last": "Bondi Johannessen",
"suffix": ""
},
{
"first": "Saussan",
"middle": [],
"last": "Khalil",
"suffix": ""
},
{
"first": "Sofie",
"middle": [
"Johansson"
],
"last": "Kokkinakis",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Lew",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Sharoff",
"suffix": ""
},
{
"first": "Ravikiran",
"middle": [],
"last": "Vadlapudi",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Volodina",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "48",
"issue": "",
"pages": "121--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgarriff, Frieda Charalabopoulou, Maria Gavrilidou, Janne Bondi Johannessen, Saussan Khalil, Sofie Johansson Kokkinakis, Robert Lew, Serge Sharoff, Ravikiran Vadlapudi, and Elena Volo- dina. 2014. Corpus-based vocabulary lists for lan- guage learners for nine languages. Language re- sources and evaluation, 48(1):121-163.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The notion of a \"lemma\": Headwords, roots and lexical sets",
"authors": [
{
"first": "Gerry",
"middle": [],
"last": "Knowles",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zuraidah Mohd Don",
"suffix": ""
}
],
"year": 2004,
"venue": "International Journal of Corpus Linguistics",
"volume": "9",
"issue": "1",
"pages": "69--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerry Knowles and Zuraidah Mohd Don. 2004. The notion of a \"lemma\": Headwords, roots and lexical sets. International Journal of Corpus Linguistics, 9(1):69-81.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Rank correlation methods",
"authors": [
{
"first": "Kendall",
"middle": [],
"last": "Maurice",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gibbons Jean Dickinson",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kendall Maurice and Gibbons Jean Dickinson. 1990. Rank correlation methods. London: Edward Arnold.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Arabic readability assessment for foreign language learners",
"authors": [
{
"first": "Naoual",
"middle": [],
"last": "Nassiri",
"suffix": ""
},
{
"first": "Abdelhak",
"middle": [],
"last": "Lakhouaja",
"suffix": ""
},
{
"first": "Violetta",
"middle": [],
"last": "Cavalli-Sforza",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Applications of Natural Language to Information Systems",
"volume": "",
"issue": "",
"pages": "480--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoual Nassiri, Abdelhak Lakhouaja, and Violetta Cavalli-Sforza. 2018a. Arabic readability assess- ment for foreign language learners. In International Conference on Applications of Natural Language to Information Systems, pages 480-488. Springer.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Modern standard arabic readability prediction",
"authors": [
{
"first": "Naoual",
"middle": [],
"last": "Nassiri",
"suffix": ""
},
{
"first": "Abdelhak",
"middle": [],
"last": "Lakhouaja",
"suffix": ""
},
{
"first": "Violetta",
"middle": [],
"last": "Cavalli-Sforza",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Arabic Language Processing",
"volume": "",
"issue": "",
"pages": "120--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoual Nassiri, Abdelhak Lakhouaja, and Violetta Cavalli-Sforza. 2018b. Modern standard arabic readability prediction. In International Conference on Arabic Language Processing, pages 120-133. Springer.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Madamira: A fast, comprehensive tool for morphological analysis and disambiguation of arabic",
"authors": [
{
"first": "Arfath",
"middle": [],
"last": "Pasha",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Al-Badrashiny",
"suffix": ""
},
{
"first": "Mona",
"middle": [
"T"
],
"last": "Diab",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"El"
],
"last": "Kholy",
"suffix": ""
},
{
"first": "Ramy",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Manoj",
"middle": [],
"last": "Pooleery",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2014,
"venue": "Lrec",
"volume": "14",
"issue": "",
"pages": "1094--1101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arfath Pasha, Mohamed Al-Badrashiny, Mona T Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. Madamira: A fast, comprehensive tool for morphological analysis and disambiguation of ara- bic. In Lrec, volume 14, pages 1094-1101.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A primer in BERTology: What we know about how BERT works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "842--866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Associ- ation for Computational Linguistics, 8:842-866.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Text readability for arabic as a foreign language",
"authors": [
{
"first": "Hind",
"middle": [],
"last": "Saddiki",
"suffix": ""
},
{
"first": "Karim",
"middle": [],
"last": "Bouzoubaa",
"suffix": ""
},
{
"first": "Violetta",
"middle": [],
"last": "Cavalli-Sforza",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE/ACS 12th International Conference of Computer Systems and Applications (AICCSA)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hind Saddiki, Karim Bouzoubaa, and Violetta Cavalli- Sforza. 2015. Text readability for arabic as a foreign language. In 2015 IEEE/ACS 12th International Conference of Computer Systems and Applications (AICCSA), pages 1-8. IEEE.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Feature optimization for predicting readability of arabic l1 and l2",
"authors": [
{
"first": "Hind",
"middle": [],
"last": "Saddiki",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Violetta",
"middle": [],
"last": "Cavalli-Sforza",
"suffix": ""
},
{
"first": "Muhamed Al",
"middle": [],
"last": "Khalil",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "20--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hind Saddiki, Nizar Habash, Violetta Cavalli-Sforza, and Muhamed Al Khalil. 2018. Feature optimiza- tion for predicting readability of arabic l1 and l2. In Proceedings of the 5th Workshop on Natural Lan- guage Processing Techniques for Educational Appli- cations, pages 20-29.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Kuisail at semeval-2020 task 12: Bert-cnn for offensive speech identification in social media",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Safaya",
"suffix": ""
},
{
"first": "Moutasem",
"middle": [],
"last": "Abdullatif",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "2054--2059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Safaya, Moutasem Abdullatif, and Deniz Yuret. 2020. Kuisail at semeval-2020 task 12: Bert-cnn for offensive speech identification in social media. In Proceedings of the Fourteenth Workshop on Seman- tic Evaluation, pages 2054-2059.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Automatic text simplification",
"authors": [],
"year": 2017,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "10",
"issue": "1",
"pages": "1--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horacio Saggion. 2017. Automatic text simplification. Synthesis Lectures on Human Language Technolo- gies, 10(1):1-137.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Predicting the relative difficulty of single sentences with and without surrounding context",
"authors": [
{
"first": "Elliot",
"middle": [],
"last": "Schumacher",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
},
{
"first": "Gwen",
"middle": [],
"last": "Frishkoff",
"suffix": ""
},
{
"first": "Kevyn",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.08425"
]
},
"num": null,
"urls": [],
"raw_text": "Elliot Schumacher, Maxine Eskenazi, Gwen Frishkoff, and Kevyn Collins-Thompson. 2016. Predict- ing the relative difficulty of single sentences with and without surrounding context. arXiv preprint arXiv:1606.08425.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Reading level assessment using support vector machines and statistical language models",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Schwarm",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {
"DOI": [
"10.3115/1219840.1219905"
]
},
"num": null,
"urls": [],
"raw_text": "Sarah Schwarm and Mari Ostendorf. 2005. Reading level assessment using support vector machines and statistical language models. In Proceedings of the 43rd Annual Meeting of the Association for Compu- tational Linguistics (ACL'05), pages 523-530, Ann Arbor, Michigan. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Camelparser: A system for arabic syntactic analysis and morphological disambiguation",
"authors": [
{
"first": "Anas",
"middle": [],
"last": "Shahrour",
"suffix": ""
},
{
"first": "Salam",
"middle": [],
"last": "Khalifa",
"suffix": ""
},
{
"first": "Dima",
"middle": [],
"last": "Taji",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "228--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anas Shahrour, Salam Khalifa, Dima Taji, and Nizar Habash. 2016. Camelparser: A system for arabic syntactic analysis and morphological disambiguation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 228-232.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Open-source corpora: using the net to fish for linguistic data",
"authors": [
{
"first": "Serge",
"middle": [],
"last": "Sharoff",
"suffix": ""
}
],
"year": 2006,
"venue": "International Journal of Corpus Linguistics",
"volume": "11",
"issue": "4",
"pages": "435--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serge Sharoff. 2006. Open-source corpora: using the net to fish for linguistic data. International Journal of Corpus Linguistics, 11(4):435-462.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Genre annotation for the web: text-external and text-internal perspectives",
"authors": [
{
"first": "Serge",
"middle": [],
"last": "Sharoff",
"suffix": ""
}
],
"year": 2021,
"venue": "Register studies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serge Sharoff. 2021. Genre annotation for the web: text-external and text-internal perspectives. Register studies, 3.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Assessing evidence of validity of the actfl cefr listening and reading proficiency tests (lpt and rpt) using a standard-setting approach",
"authors": [
{
"first": "E",
"middle": [],
"last": "Tschirner",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "B\u00e4renf\u00e4nger",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Wisniewski",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E Tschirner, O B\u00e4renf\u00e4nger, and K Wisniewski. 2015. Assessing evidence of validity of the actfl cefr listening and reading proficiency tests (lpt and rpt) using a standard-setting approach. Technical Report 2015-EU-PUB-2.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Automatic cefr level prediction for estonian learner text",
"authors": [
{
"first": "Sowmya",
"middle": [],
"last": "Vajjala",
"suffix": ""
},
{
"first": "Kaidi",
"middle": [],
"last": "Loo",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the third workshop on NLP for computerassisted language learning",
"volume": "",
"issue": "",
"pages": "113--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sowmya Vajjala and Kaidi Loo. 2014. Automatic cefr level prediction for estonian learner text. In Proceedings of the third workshop on NLP for computer-assisted language learning, pages 113-127.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Attention is all you need",
"authors": [
{
"first": "",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Multilingual universal sentence encoder for semantic retrieval",
"authors": [
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Amin",
"middle": [],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Mandy",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jax",
"middle": [],
"last": "Law",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Hernandez Abrego",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "Yun-Hsuan",
"middle": [],
"last": "Sung",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.04307"
]
},
"num": null,
"urls": [],
"raw_text": "Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, et al. 2019. Multilingual universal sentence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF1": {
"content": "<table><tr><td>are used to assign</td></tr></table>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table/>",
"text": "The Feature set. (all measures are for the rate of tokens on the sentence levels)",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table><tr><td>: 3-way classification using weighted macro-</td></tr><tr><td>averaged precision, recall and F-1, Dataset One Using</td></tr><tr><td>all features versus neural models.</td></tr></table>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table><tr><td>: Confusion Matrix of SVM (rbf) on 3-way</td></tr><tr><td>classification with XLM-R.</td></tr></table>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF8": {
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF10": {
"content": "<table/>",
"text": "List of ten most effective features using REF approach based on SVM classifier",
"type_str": "table",
"num": null,
"html": null
},
"TABREF11": {
"content": "<table><tr><td>: SVM Classification ablation experiment on</td></tr><tr><td>3-way classification</td></tr></table>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF12": {
"content": "<table><tr><td/><td>ArabicBert</td><td colspan=\"2\">XLM-R</td></tr><tr><td colspan=\"2\">Predicted A C</td><td>A</td><td>C</td></tr><tr><td>A</td><td>19 2961</td><td colspan=\"2\">138 2842</td></tr><tr><td>C</td><td>46 2934</td><td colspan=\"2\">223 2757</td></tr></table>",
"text": "Fine-tuned Arabic-BERT versus SVM XLM-R Classifier's performance on Dataset two",
"type_str": "table",
"num": null,
"html": null
},
"TABREF13": {
"content": "<table><tr><td>: Confusion Matrix with binary classifier</td></tr><tr><td>Arabic-BERT versus XLM-R on Dataset Two.</td></tr></table>",
"text": "",
"type_str": "table",
"num": null,
"html": null
}
}
}
}