{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:59:17.979614Z"
},
"title": "Multi-Dialect Arabic BERT for Country-Level Dialect Identification",
"authors": [
{
"first": "Bashar",
"middle": [],
"last": "Talafha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mawdoo3 Ltd",
"location": {
"settlement": "Amman",
"country": "Jordan"
}
},
"email": "bashar.talafha@mawdoo3.com"
},
{
"first": "Mohammad",
"middle": [],
"last": "Ali",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mawdoo3 Ltd",
"location": {
"settlement": "Amman",
"country": "Jordan"
}
},
"email": "mohammad.ali@mawdoo3.com"
},
{
"first": "Muhy",
"middle": [
"Eddin"
],
"last": "Za'ter",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mawdoo3 Ltd",
"location": {
"settlement": "Amman",
"country": "Jordan"
}
},
"email": "muhy.zater@mawdoo3.com"
},
{
"first": "Haitham",
"middle": [],
"last": "Seelawi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mawdoo3 Ltd",
"location": {
"settlement": "Amman",
"country": "Jordan"
}
},
"email": "haitham.selawi@mawdoo3.com"
},
{
"first": "Ibraheem",
"middle": [],
"last": "Tuffaha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mawdoo3 Ltd",
"location": {
"settlement": "Amman",
"country": "Jordan"
}
},
"email": "ibraheem.tuffaha@mawdoo3.com"
},
{
"first": "Mostafa",
"middle": [],
"last": "Samir",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mawdoo3 Ltd",
"location": {
"settlement": "Amman",
"country": "Jordan"
}
},
"email": "mostafa.samir@mawdoo3.com"
},
{
"first": "Wael",
"middle": [],
"last": "Farhan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mawdoo3 Ltd",
"location": {
"settlement": "Amman",
"country": "Jordan"
}
},
"email": "wael.farhan@mawdoo3.com"
},
{
"first": "Hussein",
"middle": [
"T"
],
"last": "Al-Natsheh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mawdoo3 Ltd",
"location": {
"settlement": "Amman",
"country": "Jordan"
}
},
"email": "h.natsheh@mawdoo3.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Arabic dialect identification is a complex problem owing to a number of inherent properties of the language itself. In this paper, we present the experiments conducted and the models developed by our competing team, Mawdoo3 AI, on the way to achieving our winning solution to subtask 1 of the Nuanced Arabic Dialect Identification (NADI) shared task. The dialect identification subtask provides 21,000 country-level labeled tweets covering all 21 Arab countries. An unlabeled corpus of 10M tweets from the same domain is also provided by the competition organizers for optional use. Our winning solution is an ensemble of different training iterations of our pre-trained BERT model, and achieved a micro-averaged F1-score of 26.78% on the subtask at hand. We publicly release the pre-trained language model component of our winning solution under the name Multi-dialect-Arabic-BERT, for use by any interested researcher.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Arabic dialect identification is a complex problem owing to a number of inherent properties of the language itself. In this paper, we present the experiments conducted and the models developed by our competing team, Mawdoo3 AI, on the way to achieving our winning solution to subtask 1 of the Nuanced Arabic Dialect Identification (NADI) shared task. The dialect identification subtask provides 21,000 country-level labeled tweets covering all 21 Arab countries. An unlabeled corpus of 10M tweets from the same domain is also provided by the competition organizers for optional use. Our winning solution is an ensemble of different training iterations of our pre-trained BERT model, and achieved a micro-averaged F1-score of 26.78% on the subtask at hand. We publicly release the pre-trained language model component of our winning solution under the name Multi-dialect-Arabic-BERT, for use by any interested researcher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The term Arabic language is better thought of as an umbrella term covering hundreds of varieties of the language, some of which are not even mutually intelligible. Nonetheless, such varieties can be grouped together at varying levels of granularity, corresponding, albeit loosely, to the various ways the geographical extent of the Arab world can be divided. Despite such diversity, until recently these varieties were strictly confined to the spoken domain, with Modern Standard Arabic (MSA) dominating written communication all over the Arab world. However, with the advent of social media, an explosion of written content in said varieties has flooded the internet, attracting the attention and interest of the wider Arabic NLP research community in the process. This is evident in the number of workshops dedicated to the topic in the last few years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present and discuss the strategies and experiments that led us to first place in the Nuanced Arabic Dialect Identification (NADI) Shared Task 1 (Abdul-Mageed et al., 2020), which is dedicated to dialect identification at the country level. In section 2 we discuss related work. This is followed by section 3, in which we describe the data used to develop our model. Section 4 presents the most significant models we tried in our experiments. The details and results of said experiments can be found in section 5. The analysis and discussion of the results appear in section 6, followed by our conclusions in section 7.",
"cite_spans": [
{
"start": 174,
"end": 201,
"text": "(Abdul-Mageed et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task of Arabic dialect identification is challenging. This can be attributed to a number of reasons, including a paucity of corpora dedicated to the topic, the lack of a standard orthography within and across the various dialects, and the nature of the language itself (e.g., its morphological richness, among other peculiarities). To tackle these challenges, the Arabic NLP community has come up with a number of responses. One response was the development of annotated corpora that focus primarily on dialectal data, such as the Arabic On-line Commentary dataset (Zaidan and Callison-Burch, 2014), the MADAR Arabic dialect corpus and lexicon (Bouamor et al., 2018), and the Arap-Tweet corpus (Zaghouani and Charfi, 2018), in addition to a city-level dataset of Arabic dialects curated by Abdul-Mageed et al. (2018). Another popular form of response is the organization of NLP workshops and shared tasks solely dedicated to developing approaches and models that can detect and classify the use of Arabic dialects in written text. One example is the MADAR shared task (Bouamor et al., 2019), which focuses on dialect detection at the level of Arab countries and cities.",
"cite_spans": [
{
"start": 674,
"end": 702,
"text": "(Zaghouani and Charfi, 2018)",
"ref_id": "BIBREF25"
},
{
"start": 1043,
"end": 1065,
"text": "(Bouamor et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The aforementioned efforts by the Arabic NLP community have resulted in a number of publications that explore the application of a variety of Machine Learning (ML) tools to the problem of dialect identification, with varying emphasis on feature engineering, ensemble methods, and the level of supervision involved (Elfardy and Diab, 2013; Huang, 2015; Talafha et al., 2019b).",
"cite_spans": [
{
"start": 315,
"end": 338,
"text": "Elfardy and Diab, 2013;",
"ref_id": "BIBREF10"
},
{
"start": 339,
"end": 351,
"text": "Huang, 2015;",
"ref_id": "BIBREF12"
},
{
"start": 352,
"end": 374,
"text": "Talafha et al., 2019b)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The past few years have also witnessed a number of published papers that explore the potential of Deep Learning (DL) models for dialect detection, starting with (Ali, 2018), who shows the enhanced performance that can be brought about through the use of LSTMs and CNNs, all the way to (Zhang and Abdul-Mageed, 2019), who highlight the potential of pre-trained language models to achieve state-of-the-art performance on the task of dialect detection.",
"cite_spans": [
{
"start": 161,
"end": 171,
"text": "Ali, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 284,
"end": 314,
"text": "(Zhang and Abdul-Mageed, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The novel dataset of the NADI shared task consists of around 31,000 labeled tweets covering all 21 Arab countries. Additionally, the task provides an unlabeled corpus of 10M tweets. The labeled dataset is split into 21,000 examples for training, with the remaining 10,000 tweets distributed equally between the development and test sets. Each tweet is annotated with a single country only. Figure 1 shows the distribution of tweets per country, in which Egypt and Bahrain have the highest and lowest tweet frequencies, respectively. We also note that the ratio of development to training examples is generally similar across the various dialects, except for the ones with the lowest frequencies. The unlabeled dataset is provided in the form of a Twitter crawling script and the IDs of 10M tweets, which in combination can be used to retrieve the text of these tweets. We were able to retrieve 97.7% of them, as the rest seem to be unavailable (possibly deleted since then or made private). This dataset can be beneficial in multiple ways, including building embedding models (e.g., Word2Vec, fastText) (Mikolov et al., 2013; Bojanowski et al., 2017), pre-training language models, or semi-supervised learning and data augmentation techniques. It can also be used to further pre-train an already existing language model, which can positively affect its performance on tasks derived from a domain similar to that of the 10M tweets, as we show in Section 6.",
"cite_spans": [
{
"start": 1125,
"end": 1147,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 1148,
"end": 1172,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 415,
"end": 423,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "In this section, we present the various approaches employed in our experiments, starting with the winning approach of our Multi-dialect-Arabic-BERT model, followed by the rest. The results of our experiments are presented in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},
{
"text": "Our top-performing approach, which achieved first place on NADI task 1, is based on the Bidirectional Encoder Representations from Transformers (BERT) architecture (Devlin et al., 2018). BERT uses the encoder part of a Transformer (Vaswani et al., 2017), and is trained using a masked language model (MLM) objective. This involves training the model to predict corrupted tokens, which are replaced with a special mask token, typically over a huge corpus of unlabeled text. The resultant model can then produce contextual vector representations of tokens that capture various linguistic signals, which in turn can be beneficial for downstream tasks.",
"cite_spans": [
{
"start": 175,
"end": 196,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 243,
"end": 265,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-dialect-Arabic-BERT",
"sec_num": "4.1"
},
{
"text": "We started with ArabicBERT (Safaya et al., 2020), a publicly released BERT model trained on around 93 GB of Arabic content crawled from the internet. This model is then fine-tuned on NADI task 1 by retrieving the output of a special token [CLS] placed at the beginning of a given tweet. The retrieved vector is in turn fed into a shallow feed-forward classifier consisting of a dropout layer, a dense layer, and a softmax output activation, which produces the final predicted class out of the original 21. It is worth mentioning that during the fine-tuning process, the loss is propagated back across the entire network, including the BERT encoder.",
"cite_spans": [
{
"start": 31,
"end": 52,
"text": "(Safaya et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 264,
"end": 269,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-dialect-Arabic-BERT",
"sec_num": "4.1"
},
{
"text": "We were then able to significantly improve on the above results by further pre-training ArabicBERT on the 10M tweets released by the NADI organizers for 3 epochs. We refer to the resultant model as Multi-dialect-Arabic-BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-dialect-Arabic-BERT",
"sec_num": "4.1"
},
{
"text": "In order to squeeze more performance out of our model, we turned to ensemble techniques. The best ensemble results came from the voting of 4 models trained with different maximum sequence lengths (i.e., 80, 90, 100 and 250). The voting step was accomplished by taking the element-wise average of the predicted probabilities per class for each of these models. The class with the highest value is then output as the predicted label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-dialect-Arabic-BERT",
"sec_num": "4.1"
},
{
"text": "All of our models were trained using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 3.75 \u00d7 10^-5 and a batch size of 16 for 3 epochs. No preprocessing was applied to the data except for the processing done by the ArabicBERT tokenizer. The vocabulary size of ArabicBERT is 32,000, and SentencePiece (Kudo and Richardson, 2018) is used as the tokenizer.",
"cite_spans": [
{
"start": 310,
"end": 337,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-dialect-Arabic-BERT",
"sec_num": "4.1"
},
{
"text": "We publicly release Multi-dialect-Arabic-BERT 1 on GitHub to make it available to all researchers for any task, including reproducing this paper's results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-dialect-Arabic-BERT",
"sec_num": "4.1"
},
{
"text": "In addition to our winning solution, we experimented with a number of other approaches, none of which exceeded an F1-score of 21%, but which we list here for the sake of completeness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Traditional Machine and Deep Learning Models",
"sec_num": "4.2"
},
{
"text": "Originally proposed by Ragab et al. (2019), three models (i.e., a Multinomial Naive Bayes (MNB) classifier, logistic regression, and a weak dummy classifier) are trained separately on the data to obtain their dialect probability distributions, which, in conjunction with TF-IDF vectors, make up the feature space. These features are then fed into an ensemble of five other models (i.e., MNB with a one-vs-rest strategy, a Support Vector Machine (SVM), a Bernoulli Naive Bayes classifier, a K-nearest-neighbours classifier with a one-vs-rest strategy, and finally a weak dummy classifier). The final predicted classes are obtained using a hard voting approach.",
"cite_spans": [
{
"start": 23,
"end": 42,
"text": "Ragab et al. (2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 MADAR-Mawdoo3 Model",
"sec_num": null
},
{
"text": "This model follows the Safina model proposed in (Bouamor et al., 2019), which combines a language model (Heafield et al., 2013), a Naive Bayes classifier based on 4-to-6 character n-gram features, and a Naive Bayes classifier based on 1-word n-gram features. The only pre-processing step used is to duplicate every single word for the language model and the character n-gram classifiers; this duplication makes it possible to detect circumfix n-gram patterns.",
"cite_spans": [
{
"start": 44,
"end": 65,
"text": "(Bouamor et al., 2019",
"ref_id": "BIBREF6"
},
{
"start": 66,
"end": 89,
"text": "(Heafield et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 MADAR-Safina Model",
"sec_num": null
},
{
"text": "\u2022 MADAR-JUST Model In this model, we applied the approach proposed by Talafha et al. (2019a). A data augmentation technique based on random shuffling was performed to enlarge and balance the training data. After that, for each sentence, a vector of size 21 representing a language model probability for each country was extracted and concatenated with word- and character-level TF-IDF vectors. An MNB classifier is then applied with a one-vs-rest strategy.",
"cite_spans": [
{
"start": 70,
"end": 92,
"text": "Talafha et al. (2019a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 MADAR-Safina Model",
"sec_num": null
},
{
"text": "\u2022 FastText Model FastText (Bojanowski et al., 2017) was originally developed to obtain word representations enhanced over those of simpler methods such as Word2Vec (Mikolov et al., 2013). In our experiments, we pool the fastText vectors of each token in a given sentence to obtain a fixed-size dense representation of the sentence at hand. This is in turn fed into a multinomial logistic regression for classification (Zolotov and Kung, 2017; Joulin et al., 2016).",
"cite_spans": [
{
"start": 26,
"end": 51,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 162,
"end": 184,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF16"
},
{
"start": 418,
"end": 442,
"text": "(Zolotov and Kung, 2017;",
"ref_id": "BIBREF28"
},
{
"start": 443,
"end": 463,
"text": "Joulin et al., 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 MADAR-Safina Model",
"sec_num": null
},
{
"text": "\u2022 AraVec fully connected Model AraVec is an Arabic Word2Vec model, trained and published by Soliman et al. (2017). Similar to our FastText model above, we pool the AraVec vectors of the constituent tokens of a sentence to obtain its fixed-size vector representation. However, instead of a conventional ML algorithm, we feed these representations into a feed-forward classifier, which is trained to obtain the final predictions (Ashi et al., 2018).",
"cite_spans": [
{
"start": 98,
"end": 120,
"text": "(Soliman et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 MADAR-Safina Model",
"sec_num": null
},
{
"text": "As mentioned above, we investigated multiple approaches in our experiments, starting with traditional ML techniques, then moving to DL approaches, before finally settling on our winning BERT-based model. For our traditional ML experiments, we tried various models such as SVM, Logistic Regression (LR) and Naive Bayes (NB), along with features such as TF-IDF. We also tried other ML models that performed well on previous similar tasks, such as the MADAR Mawdoo3-AI and MADAR Safina models. However, all of these models fell short compared to the BERT models, as can be seen in Table 1 , with the best Macro-Averaged F1-score achieved using traditional ML approaches being 17.06% (Table 2 reports the final results on the NADI test set for the 3 top-performing participating teams). We then experimented with a number of DL models, along with pre-trained word embedding features such as fastText and Word2Vec. These models easily surpassed the performance of their traditional ML counterparts, with a maximum Macro-Averaged F1-score of 20.86%. As alluded to above, the best results were achieved by our BERT models. Using the standalone ArabicBERT (Safaya et al., 2020) we were able to achieve a 24.45% Macro-Averaged F1-score on the development dataset. This score was further increased to 26.46% using ensemble techniques. This motivated us to further pre-train it on the 10 million unlabelled tweets to form the Multi-dialect-Arabic-BERT model. Using this setup, we were able to achieve a Macro-Averaged F1-score of 26%. Here again, we used the ensemble technique to obtain a 27.58% Macro-Averaged F1-score on the development set and 26.78% on the test set, thus winning the competition. We note that applying lexicon-based prediction rules to the best model mentioned above boosted the development set results to a 29.03% F1-score. However, these rules slightly decreased the test set results to a 26.77% F1-score, suggesting that such rules cause the system to over-fit the development set. To aid us in analyzing the strengths and weaknesses of our winning model, we provide the confusion matrix for its performance on the NADI development set in Figure 2 . The matrix highlights a number of issues stemming from the training dataset itself. For instance, it can be clearly seen that the model is biased towards the countries with more training data, such as Egypt, Iraq and Saudi Arabia; for these countries, the model achieves better results, while achieving much worse F1-scores for the ones with the least training data available. It can also be seen that the model struggles to differentiate between geographically nearby countries. For example, 50% of the development samples from Bahrain are classified as UAE, and 22% of those from Sudan are classified as Egypt. This is expected, given the similarities in dialects between neighbouring countries. Some of the results shown in the confusion matrix also led us to further investigate the datasets themselves. We found that our model does in fact predict the correct class for certain tweets that were originally mislabeled. Some of these examples can be seen in Table 3 .",
"cite_spans": [
{
"start": 1155,
"end": 1176,
"text": "(Safaya et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 596,
"end": 603,
"text": "Table 1",
"ref_id": null
},
{
"start": 691,
"end": 698,
"text": "Table 2",
"ref_id": null
},
{
"start": 2189,
"end": 2197,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 3180,
"end": 3187,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "In this paper we describe our first-place solution for the NADI competition, task 1. This was achieved in three stages: firstly, we further pre-trained a publicly released BERT model (i.e., ArabicBERT) on the 10 million tweets supplied by the NADI competition organizers. Secondly, we trained the resultant model on the NADI labelled data for task 1 multiple times, independently, with each of these iterations using a different combination of maximum sentence length and learning rate. Thirdly, we selected the 4 best-performing iterations (based on their performance on the development dataset) and aggregated their softmax predictions via a simple element-wise averaging function to produce the final prediction for a given tweet. For future work, we would like to investigate other advanced pre-training methods, such as XLNet (Yang et al., 2019) and ELECTRA (Clark et al., 2020), which we believe might hold the key to better performance on this task.",
"cite_spans": [
{
"start": 833,
"end": 852,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 867,
"end": 887,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://github.com/mawdoo3/Multi-dialect-Arabic-BERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "You tweet what you speak: A citylevel dataset of arabic dialects",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Alhuzali",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Elaraby",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Abdul-Mageed, Hassan Alhuzali, and Mohamed Elaraby. 2018. You tweet what you speak: A city- level dataset of arabic dialects. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Houda Bouamor, and Nizar Habash. 2020. NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
},
{
"first": "Chiyu",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Fifth Arabic Natural Language Processing Workshop (WANLP 2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Abdul-Mageed, Chiyu Zhang, Houda Bouamor, and Nizar Habash. 2020. NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task. In Proceedings of the Fifth Arabic Natural Language Processing Workshop (WANLP 2020), Barcelona, Spain.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Character level convolutional neural network for arabic dialect identification",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Ali",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "122--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Ali. 2018. Character level convolutional neural network for arabic dialect identification. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 122-127.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Pre-trained word embeddings for arabic aspect-based sentiment analysis of airline tweets",
"authors": [
{
"first": "Muazzam",
"middle": [
"Ahmed"
],
"last": "Mohammed Matuq Ashi",
"suffix": ""
},
{
"first": "Farrukh",
"middle": [],
"last": "Siddiqui",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nadeem",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Advanced Intelligent Systems and Informatics",
"volume": "",
"issue": "",
"pages": "241--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed Matuq Ashi, Muazzam Ahmed Siddiqui, and Farrukh Nadeem. 2018. Pre-trained word embeddings for arabic aspect-based sentiment analysis of airline tweets. In International Conference on Advanced Intelligent Systems and Informatics, pages 241-251. Springer.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The madar arabic dialect corpus and lexicon",
"authors": [
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Wajdi",
"middle": [],
"last": "Zaghouani",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Abdulrahim",
"suffix": ""
},
{
"first": "Ossama",
"middle": [],
"last": "Obeid",
"suffix": ""
},
{
"first": "Salam",
"middle": [],
"last": "Khalifa",
"suffix": ""
},
{
"first": "Fadhl",
"middle": [],
"last": "Eryani",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Erdmann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Houda Bouamor, Nizar Habash, Mohammad Salameh, Wajdi Zaghouani, Owen Rambow, Dana Abdulrahim, Os- sama Obeid, Salam Khalifa, Fadhl Eryani, Alexander Erdmann, et al. 2018. The madar arabic dialect corpus and lexicon. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The madar shared task on arabic fine-grained dialect identification",
"authors": [
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Sabit",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "199--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Houda Bouamor, Sabit Hassan, and Nizar Habash. 2019. The madar shared task on arabic fine-grained dialect identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 199-207.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10555"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep models for arabic dialect identification on benchmarked data",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Elaraby",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "263--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Elaraby and Muhammad Abdul-Mageed. 2018. Deep models for arabic dialect identification on benchmarked data. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 263-274.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentence level dialect identification in arabic",
"authors": [
{
"first": "Heba",
"middle": [],
"last": "Elfardy",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "456--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heba Elfardy and Mona Diab. 2013. Sentence level dialect identification in arabic. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 456-461.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Scalable modified Kneser-Ney language model estimation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Pouzyrevsky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "690--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 690-696, Sofia, Bulgaria, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improved arabic dialect classification with social media data",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2118--2126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Huang. 2015. Improved arabic dialect classification with social media data. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2118-2126.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.01759"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.06226"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mawdoo3 ai at madar shared task: Arabic fine-grained dialect identification with ensemble learning",
"authors": [
{
"first": "Ahmad",
"middle": [],
"last": "Ragab",
"suffix": ""
},
{
"first": "Haitham",
"middle": [],
"last": "Seelawi",
"suffix": ""
},
{
"first": "Mostafa",
"middle": [],
"last": "Samir",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mattar",
"suffix": ""
},
{
"first": "Hesham",
"middle": [],
"last": "Al-Bataineh",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Zaghloul",
"suffix": ""
},
{
"first": "Ahmad",
"middle": [],
"last": "Mustafa",
"suffix": ""
},
{
"first": "Bashar",
"middle": [],
"last": "Talafha",
"suffix": ""
},
{
"first": "Abed",
"middle": [
"Alhakim"
],
"last": "Freihat",
"suffix": ""
},
{
"first": "Hussein",
"middle": [],
"last": "Al-Natsheh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "244--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmad Ragab, Haitham Seelawi, Mostafa Samir, Abdelrahman Mattar, Hesham Al-Bataineh, Mohammad Zaghloul, Ahmad Mustafa, Bashar Talafha, Abed Alhakim Freihat, and Hussein Al-Natsheh. 2019. Mawdoo3 ai at madar shared task: Arabic fine-grained dialect identification with ensemble learning. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 244-248.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Kuisail at semeval-2020 task 12: Bert-cnn for offensive speech identification in social media",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Safaya",
"suffix": ""
},
{
"first": "Moutasem",
"middle": [],
"last": "Abdullatif",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the International Workshop on Semantic Evaluation (SemEval)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Safaya, Moutasem Abdullatif, and Deniz Yuret. 2020. Kuisail at semeval-2020 task 12: Bert-cnn for offensive speech identification in social media. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Fine-grained arabic dialect identification",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1332--1344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Salameh, Houda Bouamor, and Nizar Habash. 2018. Fine-grained arabic dialect identification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1332-1344.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Aravec: A set of arabic word embedding models for use in arabic nlp",
"authors": [
{
"first": "Abu Bakr",
"middle": [],
"last": "Soliman",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Eissa",
"suffix": ""
},
{
"first": "Samhaa",
"middle": [
"R"
],
"last": "El-Beltagy",
"suffix": ""
}
],
"year": 2017,
"venue": "Procedia Computer Science",
"volume": "117",
"issue": "",
"pages": "256--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abu Bakr Soliman, Kareem Eissa, and Samhaa R El-Beltagy. 2017. Aravec: A set of arabic word embedding models for use in arabic nlp. Procedia Computer Science, 117:256-265.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Team just at the madar shared task on arabic fine-grained dialect identification",
"authors": [
{
"first": "Bashar",
"middle": [],
"last": "Talafha",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Fadel",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Al-Ayyoub",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Jararweh",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Al-Smadi",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Juola",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "285--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bashar Talafha, Ali Fadel, Mahmoud Al-Ayyoub, Yaser Jararweh, AL-Smadi Mohammad, and Patrick Juola. 2019a. Team just at the madar shared task on arabic fine-grained dialect identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 285-289.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Mawdoo3 ai at madar shared task: Arabic tweet dialect identification",
"authors": [
{
"first": "Bashar",
"middle": [],
"last": "Talafha",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Farhan",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Altakrouri",
"suffix": ""
},
{
"first": "Hussein",
"middle": [],
"last": "Al-Natsheh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "239--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bashar Talafha, Wael Farhan, Ahmed Altakrouri, and Hussein Al-Natsheh. 2019b. Mawdoo3 ai at madar shared task: Arabic tweet dialect identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 239-243.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5753-5763.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Arap-tweet: A large multi-dialect twitter corpus for gender, age and language variety identification",
"authors": [
{
"first": "Wajdi",
"middle": [],
"last": "Zaghouani",
"suffix": ""
},
{
"first": "Anis",
"middle": [],
"last": "Charfi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.07674"
]
},
"num": null,
"urls": [],
"raw_text": "Wajdi Zaghouani and Anis Charfi. 2018. Arap-tweet: A large multi-dialect twitter corpus for gender, age and language variety identification. arXiv preprint arXiv:1808.07674.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Arabic dialect identification",
"authors": [
{
"first": "Omar",
"middle": [
"F"
],
"last": "Zaidan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "1",
"pages": "171--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omar F Zaidan and Chris Callison-Burch. 2014. Arabic dialect identification. Computational Linguistics, 40(1):171-202.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "No army, no navy: Bert semi-supervised learning of arabic dialects",
"authors": [
{
"first": "Chiyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "279--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiyu Zhang and Muhammad Abdul-Mageed. 2019. No army, no navy: Bert semi-supervised learning of arabic dialects. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 279-284.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Analysis and optimization of fasttext linear text classifier",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Zolotov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kung",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.05531"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Zolotov and David Kung. 2017. Analysis and optimization of fasttext linear text classifier. arXiv preprint arXiv:1702.05531.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Classes distribution for both Train and Development sets",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "The confusion matrix of our best model on NADI development set",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Examples of mislabeled and confusing tweets."
}
}
}
}