ACL-OCL/Base_JSON/prefixV/json/vardial/2020.vardial-1.23.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:19.809571Z"
},
"title": "Combining Deep Learning and String Kernels for the Localization of Swiss German Tweets",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "G\u0203man",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bucharest",
"location": {
"addrLine": "14 Academiei",
"settlement": "Bucharest",
"country": "Romania"
}
},
"email": "mp.gaman@gmail.com"
},
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bucharest",
"location": {
"addrLine": "14 Academiei",
"settlement": "Bucharest",
"country": "Romania"
}
},
"email": "raducu.ionescu@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this work, we introduce the methods proposed by the UnibucKernel team in solving the Social Media Variety Geolocation task featured in the 2020 VarDial Evaluation Campaign. We address only the second subtask, which targets a data set composed of nearly 30 thousand Swiss German Jodels. The dialect identification task is about accurately predicting the latitude and longitude of test samples. We frame the task as a double regression problem, employing a variety of machine learning approaches to predict both latitude and longitude. From simple models for regression, such as Support Vector Regression, to deep neural networks, such as Long Short-Term Memory networks and character-level convolutional neural networks, and, finally, to ensemble models based on meta-learners, such as XGBoost, our interest is focused on approaching the problem from a few different perspectives, in an attempt to minimize the prediction error. With the same goal in mind, we also considered many types of features, from high-level features, such as BERT embeddings, to low-level features, such as characters n-grams, which are known to provide good results in dialect identification. Our empirical results indicate that the handcrafted model based on string kernels outperforms the deep learning approaches. Nevertheless, our best performance is given by the ensemble model that combines both handcrafted and deep learning models.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this work, we introduce the methods proposed by the UnibucKernel team in solving the Social Media Variety Geolocation task featured in the 2020 VarDial Evaluation Campaign. We address only the second subtask, which targets a data set composed of nearly 30 thousand Swiss German Jodels. The dialect identification task is about accurately predicting the latitude and longitude of test samples. We frame the task as a double regression problem, employing a variety of machine learning approaches to predict both latitude and longitude. From simple models for regression, such as Support Vector Regression, to deep neural networks, such as Long Short-Term Memory networks and character-level convolutional neural networks, and, finally, to ensemble models based on meta-learners, such as XGBoost, our interest is focused on approaching the problem from a few different perspectives, in an attempt to minimize the prediction error. With the same goal in mind, we also considered many types of features, from high-level features, such as BERT embeddings, to low-level features, such as characters n-grams, which are known to provide good results in dialect identification. Our empirical results indicate that the handcrafted model based on string kernels outperforms the deep learning approaches. Nevertheless, our best performance is given by the ensemble model that combines both handcrafted and deep learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The organizers of the 2020 VarDial Evaluation Campaign (G\u0203man et al., 2020) proposed a shared task targeted towards the geolocation of short texts, e.g. tweets, namely the Social Media Variety Geolocation (SMG) task. Typically formulated as a double regression problem, the task is about predicting the location, expressed in latitude and longitude, from where the text received as input was posted on a certain social media platform. Twitter and Jodel are the platforms used for data collection, divided by the language area in three subtasks, namely:",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "(G\u0203man et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Standard German Jodels (DE-AT) -formed of conversations initiated in Germany and Austria in regional dialectal forms (Hovy and Purschke, 2018) .",
"cite_spans": [
{
"start": 119,
"end": 144,
"text": "(Hovy and Purschke, 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Swiss German Jodels (CH) -based on a smaller number of Jodel conversations from Switzerland (Hovy and Purschke, 2018) .",
"cite_spans": [
{
"start": 94,
"end": 119,
"text": "(Hovy and Purschke, 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 BCMS Tweets -from the area of Bosnia and Herzegovina, Croatia, Montenegro and Serbia where the macro-language used is BCMS, with both similarities and a fair share of variation among the component languages (Ljube\u0161i\u0107 et al., 2016) .",
"cite_spans": [
{
"start": 209,
"end": 232,
"text": "(Ljube\u0161i\u0107 et al., 2016)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus only on the second subtask, SMG-CH, proposing a variety of handcrafted and deep learning models, as well as an ensemble model that combines all our previous models through meta-learning. Our first model is a Support Vector Regression (SVR) classifier (Chang and Lin, 2002) based on string kernels, which are known to perform well in other dialect identification tasks (Butnaru and Ionescu, 2018b; Ionescu and Butnaru, 2017) . Our second model is a character-level convolutional neural network (CNN) (Zhang et al., 2015) , which is also known to provide good results in dialect identification (Butnaru and Ionescu, 2019; Tudoreanu, 2019) . Due to the high popularity and the outstanding results of Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) in solving mainstream NLP tasks, we decided to try out a Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997) based on German BERT embeddings as our third model. Lastly, we combine our three models into an ensemble that employs Extreme Gradient Boosting (XGBoost) (Chen and Guestrin, 2016) as meta-learner. We conducted experiments on the development set provided by the organizers, in order to decide which models to choose for our three submissions for the SMG-CH subtask. Our results indicate that the ensemble model attains the best results. Perhaps surprisingly, our shallow approach based on string kernels outperforms both deep learning models. Our observations are consistent across the development and the test sets provided by the organizers.",
"cite_spans": [
{
"start": 275,
"end": 296,
"text": "(Chang and Lin, 2002)",
"ref_id": "BIBREF10"
},
{
"start": 392,
"end": 420,
"text": "(Butnaru and Ionescu, 2018b;",
"ref_id": "BIBREF8"
},
{
"start": 421,
"end": 447,
"text": "Ionescu and Butnaru, 2017)",
"ref_id": "BIBREF36"
},
{
"start": 523,
"end": 543,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF74"
},
{
"start": 616,
"end": 643,
"text": "(Butnaru and Ionescu, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 644,
"end": 660,
"text": "Tudoreanu, 2019)",
"ref_id": "BIBREF67"
},
{
"start": 784,
"end": 805,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 901,
"end": 935,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF31"
},
{
"start": 1090,
"end": 1115,
"text": "(Chen and Guestrin, 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. We present related work on dialect identification and geolocation of short texts in Section 2. Our approaches are described in more detail in Section 3. We present the experiments and empirical results in Section 4. Finally, our conclusions are drawn in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the initial works on text-based geotagging (Ding et al., 2000) aims at automatically finding the geographic scope of web pages, in a classification setup relying on named location entities such as cities and states. The authors used gazetteers as the source of the location mappings, proposing a rather heuristic approach. Gazetteers, constitute a tool used in one of the three general approaches taken so far in text-based geolocation, this tool being adopted in a number of works Quercini et al., 2010; Cheng et al., 2010) . In this line of research, some researchers employed rule-based methods (Bilhaut et al., 2003) , while others plugged named entity recognition into various machine learning techniques (Gelernter and Mushegian, 2011; Qin et al., 2010) . The main disadvantage of these methods is that they rely on the existence of specific mentions of locations in text, rather than inferring them in a not so straightforward manner. These direct mentions of places do not represent a safe assumption, especially when it comes to social media platforms such as Twitter, which is used as the data source in some of these studies (Cheng et al., 2010) . The other two main categories of approaches for text-based geolocation rely on either unsupervised learning (Ahmed et al., 2013; Hong et al., 2012; Eisenstein et al., 2010) or supervised classification (Wing and Baldridge, 2011; Kinsella et al., 2011) . The unsupervised methods can be described in large part as clustering techniques based on topic models.",
"cite_spans": [
{
"start": 50,
"end": 69,
"text": "(Ding et al., 2000)",
"ref_id": "BIBREF16"
},
{
"start": 489,
"end": 511,
"text": "Quercini et al., 2010;",
"ref_id": "BIBREF59"
},
{
"start": 512,
"end": 531,
"text": "Cheng et al., 2010)",
"ref_id": "BIBREF12"
},
{
"start": 605,
"end": 627,
"text": "(Bilhaut et al., 2003)",
"ref_id": "BIBREF4"
},
{
"start": 717,
"end": 748,
"text": "(Gelernter and Mushegian, 2011;",
"ref_id": "BIBREF23"
},
{
"start": 749,
"end": 766,
"text": "Qin et al., 2010)",
"ref_id": "BIBREF58"
},
{
"start": 1143,
"end": 1163,
"text": "(Cheng et al., 2010)",
"ref_id": "BIBREF12"
},
{
"start": 1274,
"end": 1294,
"text": "(Ahmed et al., 2013;",
"ref_id": "BIBREF0"
},
{
"start": 1295,
"end": 1313,
"text": "Hong et al., 2012;",
"ref_id": "BIBREF33"
},
{
"start": 1314,
"end": 1338,
"text": "Eisenstein et al., 2010)",
"ref_id": "BIBREF19"
},
{
"start": 1368,
"end": 1394,
"text": "(Wing and Baldridge, 2011;",
"ref_id": "BIBREF72"
},
{
"start": 1395,
"end": 1417,
"text": "Kinsella et al., 2011)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There are some studies on user geolocation in social media, that look at this task from a supervised learning perspective (Rout et al., 2013) and can be included in the second set of approaches for geotagging. However, in such works, other details (e.g. social ties) in the users profile have been considered rather than their written content. Although these works cover geolocation prediction in social media, they do not use text as input. Our current interest in studying language variation for the geolocation of users in social media has been covered in the literature in a series of works (Rahimi et al., 2017; Han et al., 2014; Doyle, 2014; Roller et al., 2012; Eisenstein et al., 2010) , employing various machine learning techniques, that range from probabilistic graphical models (Eisenstein et al., 2010) and adaptive grid search (Roller et al., 2012) to Bayesian methods (Doyle, 2014) and neural networks (Rahimi et al., 2017) .",
"cite_spans": [
{
"start": 122,
"end": 141,
"text": "(Rout et al., 2013)",
"ref_id": "BIBREF63"
},
{
"start": 595,
"end": 616,
"text": "(Rahimi et al., 2017;",
"ref_id": "BIBREF61"
},
{
"start": 617,
"end": 634,
"text": "Han et al., 2014;",
"ref_id": "BIBREF29"
},
{
"start": 635,
"end": 647,
"text": "Doyle, 2014;",
"ref_id": "BIBREF17"
},
{
"start": 648,
"end": 668,
"text": "Roller et al., 2012;",
"ref_id": "BIBREF62"
},
{
"start": 669,
"end": 693,
"text": "Eisenstein et al., 2010)",
"ref_id": "BIBREF19"
},
{
"start": 790,
"end": 815,
"text": "(Eisenstein et al., 2010)",
"ref_id": "BIBREF19"
},
{
"start": 841,
"end": 862,
"text": "(Roller et al., 2012)",
"ref_id": "BIBREF62"
},
{
"start": 883,
"end": 896,
"text": "(Doyle, 2014)",
"ref_id": "BIBREF17"
},
{
"start": 917,
"end": 938,
"text": "(Rahimi et al., 2017)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The related work to date covers a wide range of languages and dialects, including Dutch (Wieling et al., 2011) , British (Szmrecsanyi, 2008) , American (Huang et al., 2016; Eisenstein et al., 2010) and even African American Vernacular English (Jones, 2015) . Most related to our work is the study of Hovy and Purschke (2018), which targets the German language and its variations and, in addition to the previously mentioned endeavours, performs a quantitative analysis against a dialect map. Moreover, Hovy and Purschke (2018) collected 16.8 million online posts from the German-speaking area with the aim of learning document representations of cities. Among these posts, some were from the German speaking side of Switzerland, being part of the SMG shared task, more specifically the SMG-CH subtask that we are addressing. The authors aimed at capturing enough regional variations in the written language, serving as input in automatically distinguishing the geographical region of speakers. The focus was on larger regions covering a given dialect, the proposed approach being based on clustering. Given the shared task formulation, we take a different approach and use the provided data in a double regression setup, addressing the problem both from a shallow perspective and a deep learning perspective, respectively.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Wieling et al., 2011)",
"ref_id": "BIBREF71"
},
{
"start": 121,
"end": 140,
"text": "(Szmrecsanyi, 2008)",
"ref_id": "BIBREF66"
},
{
"start": 152,
"end": 172,
"text": "(Huang et al., 2016;",
"ref_id": "BIBREF35"
},
{
"start": 173,
"end": 197,
"text": "Eisenstein et al., 2010)",
"ref_id": "BIBREF19"
},
{
"start": 243,
"end": 256,
"text": "(Jones, 2015)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3.1 \u03bd-Support Vector Regression based on String Kernels. String Kernels. Lodhi et al. (2001) introduced string kernels as a means of comparing two documents, based on the inner product generated by all substrings of length n, typically known as character n-grams. Since then, string kernels have found many applications, from sentiment analysis (Gim\u00e9nez-P\u00e9rez et al., 2017; , automated essay scoring (Cozma et al., 2018) and sentence selection (Masala et al., 2017) to native language identification (Ionescu et al., 2014; Popescu and Ionescu, 2013) and dialect identification (Butnaru and Ionescu, 2018b; Butnaru and Ionescu, 2019; Ionescu and Butnaru, 2017) .",
"cite_spans": [
{
"start": 73,
"end": 92,
"text": "Lodhi et al. (2001)",
"ref_id": "BIBREF53"
},
{
"start": 345,
"end": 373,
"text": "(Gim\u00e9nez-P\u00e9rez et al., 2017;",
"ref_id": "BIBREF26"
},
{
"start": 400,
"end": 420,
"text": "(Cozma et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 444,
"end": 465,
"text": "(Masala et al., 2017)",
"ref_id": "BIBREF54"
},
{
"start": 500,
"end": 522,
"text": "(Ionescu et al., 2014;",
"ref_id": "BIBREF40"
},
{
"start": 523,
"end": 549,
"text": "Popescu and Ionescu, 2013)",
"ref_id": "BIBREF56"
},
{
"start": 577,
"end": 605,
"text": "(Butnaru and Ionescu, 2018b;",
"ref_id": "BIBREF8"
},
{
"start": 606,
"end": 632,
"text": "Butnaru and Ionescu, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 633,
"end": 659,
"text": "Ionescu and Butnaru, 2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "In this work, we employ string kernels as described in (Butnaru and Ionescu, 2019) , specifically using the efficient algorithm for building string kernels of . We note that the number of character n-grams is usually much higher than the number of samples, so representing the text samples as feature vectors may require a lot of space. String kernels provide an efficient way to avoid storing and using the feature vectors (primal form), by representing the data though a kernel matrix (dual form). Each cell in the kernel matrix represents the similarity between some text samples x i and x j . In our experiments, we use the presence bits string kernel (Popescu and Ionescu, 2013) as the similarity function. For two strings x i and x j over a set of characters S, the presence bits string kernel is defined as follows:",
"cite_spans": [
{
"start": 55,
"end": 82,
"text": "(Butnaru and Ionescu, 2019)",
"ref_id": "BIBREF9"
},
{
"start": 656,
"end": 683,
"text": "(Popescu and Ionescu, 2013)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k 0/1 (x i , x j ) = g\u2208S n #(x i , g) \u2022 #(x j , g),",
"eq_num": "(1)"
}
],
"section": "Methods",
"sec_num": "3"
},
{
"text": "where n is the length of n-grams and #(x, g) is a function that returns 1 when the number of occurrences of n-gram g in x is greater than 1, and 0 otherwise. \u03bd-Support Vector Regression. Support Vector Machines (SVM) (Cortes and Vapnik, 1995) represent a popular method initially designed for binary classification, which was subsequently repurposed for regression, under the SVR (Drucker et al., 1997) acronym (i.e. Support Vector Regression). Similar to SVM, SVR uses the notion of support vectors and margin in order to find an optimal estimator. In the original -SVR formulation (Drucker et al., 1997) , there is an -insensitive region, i.e. tube, defined in the optimization function. The goal is to find the flattest tube containing most of the training samples, while also minimizing the prediction error and model complexity. Different from linear regression, -SVR fits the error within the maximum margin , instead of minimizing the error directly (Smola and Sch\u00f6lkopf, 2004) . In our experiments, we employ an equivalent SVR formulation known as \u03bd-SVR (Chang and Lin, 2002) , where \u03bd is the configurable proportion of support vectors to keep with respect to the number of samples in the data set. In \u03bd-SVR, the margin is automatically estimated to its optimal value. Using \u03bd-SVR, the optimal solution can converge to a small model, with only a few support vectors. This is especially useful in our use case, as the data set provided in the SMG-CH subtask does not contain too many samples. Another reason to employ \u03bd-SVR in our regression task is that it was found to surpass other regression methods in complex word identification (Butnaru and Ionescu, 2018a) .",
"cite_spans": [
{
"start": 217,
"end": 242,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF13"
},
{
"start": 380,
"end": 402,
"text": "(Drucker et al., 1997)",
"ref_id": "BIBREF18"
},
{
"start": 583,
"end": 605,
"text": "(Drucker et al., 1997)",
"ref_id": "BIBREF18"
},
{
"start": 957,
"end": 984,
"text": "(Smola and Sch\u00f6lkopf, 2004)",
"ref_id": "BIBREF64"
},
{
"start": 1062,
"end": 1083,
"text": "(Chang and Lin, 2002)",
"ref_id": "BIBREF10"
},
{
"start": 1642,
"end": 1670,
"text": "(Butnaru and Ionescu, 2018a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
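The presence bits kernel of Eq. (1) can be sketched in a few lines of Python. This is an illustrative toy over a blended 3-5 n-gram spectrum, not the authors' efficient kernel-building algorithm, and the Swiss German example strings are invented.

```python
# Sketch of the presence bits string kernel k_{0/1}: for n-grams g of length n,
# #(x, g) is 1 if g occurs in x and 0 otherwise, so k_{0/1}(x_i, x_j) counts
# the distinct n-grams shared by the two strings.

def ngrams(text, n):
    """Return the set of distinct character n-grams of the given length."""
    return {text[k:k + n] for k in range(len(text) - n + 1)}

def presence_bits_kernel(xi, xj, n_min=3, n_max=5):
    """Blended spectrum presence bits kernel over the 3-5 n-gram range."""
    return sum(len(ngrams(xi, n) & ngrams(xj, n)) for n in range(n_min, n_max + 1))

# Two (made-up) Swiss German-like strings sharing the prefix "grüezi ".
print(presence_bits_kernel("grüezi mitenand", "grüezi zäme"))  # → 12
```

In the dual form described above, evaluating this function for every pair of training samples fills one cell of the kernel matrix, which is then handed to the regressor instead of explicit n-gram feature vectors.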
{
"text": "Character Embeddings. From the pioneering works in language modelling at the character level (Gasthaus et al., 2010; Wood et al., 2009) to date (Georgescu et al., 2020) , a broad range of neural architectures rely on characters as features. Among these, we can mention Recurrent Neural Networks (RNNs) (Sutskever et al., 2011) , LSTM networks (Ballesteros et al., 2015) , CNNs (Kim et al., 2016; Zhang et al., 2015) and transformer models (Al-Rfou et al., 2019) . Characters are the base units in building words that exist in the vocabulary of most languages. Knowledge of words, semantic structure or syntax is not required when working with characters. Robustness to spelling errors and words that are outside the vocabulary (Ballesteros et al., 2015) constitute other advantages explaining the growing interest in using characters as features. In our paper, we employ a convolutional neural network working at the character level (Zhang et al., 2015) . The employed CNN is equipped with a character embedding layer, automatically learning a 2D representation of text formed of character embedding vectors, that is further processed by the convolutional layers.",
"cite_spans": [
{
"start": 93,
"end": 116,
"text": "(Gasthaus et al., 2010;",
"ref_id": "BIBREF22"
},
{
"start": 117,
"end": 135,
"text": "Wood et al., 2009)",
"ref_id": "BIBREF73"
},
{
"start": 144,
"end": 168,
"text": "(Georgescu et al., 2020)",
"ref_id": null
},
{
"start": 302,
"end": 326,
"text": "(Sutskever et al., 2011)",
"ref_id": "BIBREF65"
},
{
"start": 343,
"end": 369,
"text": "(Ballesteros et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 377,
"end": 395,
"text": "(Kim et al., 2016;",
"ref_id": "BIBREF43"
},
{
"start": 396,
"end": 415,
"text": "Zhang et al., 2015)",
"ref_id": "BIBREF74"
},
{
"start": 439,
"end": 461,
"text": "(Al-Rfou et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 727,
"end": 753,
"text": "(Ballesteros et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 933,
"end": 953,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF74"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Character-Level Convolutional Neural Network",
"sec_num": "3.2"
},
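The character-embedding step can be illustrated with a minimal stdlib sketch: a toy vocabulary, 8 embedding dimensions instead of the 128 used in the paper, and a randomly initialized table standing in for the learned one. All names and values here are illustrative assumptions.

```python
# Each character maps to a vocabulary index, then to an embedding row, so a
# text becomes a (sequence_length, embedding_dim) matrix fed to the conv layers.
import random

random.seed(0)
vocab = {ch: idx for idx, ch in enumerate("abcdefghijklmnopqrstuvwxyzäöü ")}
embedding_dim = 8  # the paper's CNN uses 128; 8 keeps the sketch small
# Randomly initialized embedding table; during training these rows are learned.
embeddings = [[random.uniform(-1, 1) for _ in range(embedding_dim)]
              for _ in vocab]

def encode(text):
    """Map a text to a list of embedding vectors (unknown chars -> space)."""
    return [embeddings[vocab.get(ch, vocab[" "])] for ch in text]

matrix = encode("zäme")
print(len(matrix), len(matrix[0]))  # 4 rows, one per character; 8 dims each
```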
{
"text": "Convolutional Neural Networks. Inspired by the visual cortex of mammals (Fukushima, 1980) , CNNs have been extensively used in image classification (LeCun et al., 1989; LeCun et al., 2004; Krizhevsky et al., 2012) , subsequently being adapted for various NLP tasks (Kim, 2014; Zhang et al., 2015) . CNNs are composed of convolutional blocks, consisting of convolutions and pooling operations, usually followed by a sequence of dense layers and ending with an output layer, with the number of neurons equal to the number of values that we are interested in predicting. In the experiments, we employ a characterlevel CNN (Zhang et al., 2015) with squeeze-and-excitation (SE) blocks, introduced by Butnaru and Ionescu (2019). Since this method has been previously applied in Romanian dialect identification with good results (Butnaru and Ionescu, 2019; Tudoreanu, 2019) , we consider it a good candidate for our text geolocation task. We therefore change the original architecture by replacing the Softmax classification layer with a regression layer formed of two units, one predicting the latitude and one predicting the longitude, respectively. We train our character-level CNN towards minimizing the mean squared error (MSE) loss function with respect to the ground-truth latitude and longitude.",
"cite_spans": [
{
"start": 72,
"end": 89,
"text": "(Fukushima, 1980)",
"ref_id": "BIBREF21"
},
{
"start": 148,
"end": 168,
"text": "(LeCun et al., 1989;",
"ref_id": "BIBREF48"
},
{
"start": 169,
"end": 188,
"text": "LeCun et al., 2004;",
"ref_id": "BIBREF49"
},
{
"start": 189,
"end": 213,
"text": "Krizhevsky et al., 2012)",
"ref_id": "BIBREF47"
},
{
"start": 265,
"end": 276,
"text": "(Kim, 2014;",
"ref_id": "BIBREF44"
},
{
"start": 277,
"end": 296,
"text": "Zhang et al., 2015)",
"ref_id": "BIBREF74"
},
{
"start": 619,
"end": 639,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF74"
},
{
"start": 822,
"end": 849,
"text": "(Butnaru and Ionescu, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 850,
"end": 866,
"text": "Tudoreanu, 2019)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Character-Level Convolutional Neural Network",
"sec_num": "3.2"
},
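One convolutional block (1D convolution followed by max-pooling) can be shown in miniature with pure Python. This is a toy forward pass with a single hand-picked filter, purely to mirror the block structure described above; the real network learns 128 filters per block.

```python
# Toy 1D convolutional block: convolution over a numeric sequence, then
# non-overlapping max-pooling for downsampling. Filter values are illustrative.

def conv1d(seq, kernel):
    """Valid 1D convolution (cross-correlation) over a number sequence."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def max_pool(seq, size):
    """Non-overlapping max-pooling with the given window size."""
    return [max(seq[i:i + size]) for i in range(0, len(seq) - size + 1, size)]

signal = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0]
features = conv1d(signal, [1.0, 0.0, -1.0])  # detects local slope
pooled = max_pool(features, 3)               # downsample by a factor of 3
print(pooled)  # → [0.0, 2.0]
```

In the actual architecture, stacks of such blocks (filter sizes 7, 7 and 3, pooling size 3) feed a flatten step and the two-unit regression layer.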
{
"text": "BERT Embeddings. Transformers (Vaswani et al., 2017) represent a very important advance in Natural Language Processing, with many benefits over the traditional sequential neural architectures. Based on an encoder-decoder architecture with attention, transformers proved to be better at modelling long-term dependencies in sequences, while being effectively trained as the sequential dependency of previous tokens is removed. Unlike other contemporary attempts at using transformers in language modelling (Radford et al., 2018) , BERT (Devlin et al., 2019) incorporates context from both directions in the process of building deep language representations, in a self-supervised fashion. The masked language modeling technique enables BERT to pre-train these deep bidirectional representations, that can be further finetuned and adapted for a variety of tasks, without significant architectural updates. We also make use of this property in the current work, employing a TensorFlow version of a German BERT model 1 . The model has been trained on the latest German Wikipedia dump, the OpenLegalData dump and a collection of news articles, summing up to a total of 12 GB of text files. We add the pre-trained German BERT model to be fine-tuned in an end-to-end fashion along with our LSTM architecture for geolocation of Swiss German short texts.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF68"
},
{
"start": 504,
"end": 526,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF60"
},
{
"start": 534,
"end": 555,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long Short-Term Memory Networks based on BERT Embeddings",
"sec_num": "3.3"
},
{
"text": "Long Short-Term Memory Networks. RNNs (Werbos, 1988) operate at the sequence level, attaining state-of-the-art performance on various problems involving time series (Weiss et al., 2018) . Major drawbacks in regular RNNs are the phenomenons of exploding and vanishing gradients, which can be caused by an increase in the length of the input sequence (Hochreiter et al., 2001) . LSTM networks (Hochreiter and Schmidhuber, 1997) represent a flavour of RNN, designed to overcome the aforementioned challenges faced when working with RNNs. An LSTM unit has a more complex structure, including a memory cell to remember dependencies in the input and three gates acting as regulators: input, output and, in more recent versions, forget gates, which enable the cell to reset its state (Gers et al., 2000) . The LSTM architecture used in this work is inspired by the one described in (Onose et al., 2019) , which has been successfully employed in Romanian dialect identification. We train our LSTM model using the mean squared logarithmic error as loss function. We opted for the aforementioned loss in favor of the mean squared error, as the latter loss function did not produce optimal results for our LSTM.",
"cite_spans": [
{
"start": 38,
"end": 52,
"text": "(Werbos, 1988)",
"ref_id": "BIBREF70"
},
{
"start": 165,
"end": 185,
"text": "(Weiss et al., 2018)",
"ref_id": "BIBREF69"
},
{
"start": 349,
"end": 374,
"text": "(Hochreiter et al., 2001)",
"ref_id": "BIBREF32"
},
{
"start": 777,
"end": 796,
"text": "(Gers et al., 2000)",
"ref_id": "BIBREF25"
},
{
"start": 875,
"end": 895,
"text": "(Onose et al., 2019)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long Short-Term Memory Networks based on BERT Embeddings",
"sec_num": "3.3"
},
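The two candidate losses mentioned above are easy to compare side by side. A small sketch with invented latitude values: MSLE is MSE computed on log(1 + y), so it penalizes relative rather than absolute deviations.

```python
# Mean squared error vs. mean squared logarithmic error (MSLE), the loss
# chosen for the LSTM. Values below are made-up latitude predictions.
import math

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def msle(y_true, y_pred):
    """MSE computed in log space: mean of (log(1+t) - log(1+p))^2."""
    return sum((math.log1p(t) - math.log1p(p)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)

true_lat = [47.26, 47.26]
pred_lat = [46.26, 48.26]  # one degree off in each direction
print(mse(true_lat, pred_lat), msle(true_lat, pred_lat))
```

For coordinates in the Swiss range (all positive, similar magnitude), the log transform compresses the error scale, which is one plausible reason the MSLE-trained LSTM behaved differently from the MSE-trained variant.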
{
"text": "XGBoost. Gradient tree boosting (Friedman, 2001 ) is based on training a tree ensemble model in an additive fashion. This technique has been successfully used in classification (Li, 2010) and ranking (Burges, 2010) problems, obtaining notable results in reputed competitions such as the Netflix Challenge (Bennett et al., 2007) . Furthermore, gradient tree boosting is the ensemble method of choice in real-world pipelines running in production (He et al., 2014) . XGBoost (Chen and Guestrin, 2016) is a tree boosting model targeted at solving large-scale tasks with limited computational resources. This approach aims at parallelizing tree learning while also trying to handle various sparsity patterns. Overfitting is addressed through shrinkage and column subsampling. Shrinkage acts as a learning rate, reducing the influence of each individual tree. Column subsampling is borrowed from Random Forests (Breiman, 2001) , bearing the advantage of speeding up the computations. In the experiments, we employ XGBoost as a metalearner over the individual predictions of each of the models described above. We opted for XGBoost in detriment of average voting and a \u03bd-SVR meta-learner, both providing comparatively lower performance levels in a set of preliminary ensemble experiments.",
"cite_spans": [
{
"start": 32,
"end": 47,
"text": "(Friedman, 2001",
"ref_id": "BIBREF20"
},
{
"start": 177,
"end": 187,
"text": "(Li, 2010)",
"ref_id": "BIBREF50"
},
{
"start": 305,
"end": 327,
"text": "(Bennett et al., 2007)",
"ref_id": "BIBREF3"
},
{
"start": 445,
"end": 462,
"text": "(He et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 473,
"end": 498,
"text": "(Chen and Guestrin, 2016)",
"ref_id": "BIBREF11"
},
{
"start": 906,
"end": 921,
"text": "(Breiman, 2001)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Learning",
"sec_num": "3.4"
},
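The stacking idea behind the ensemble can be sketched schematically: base-model predictions become the meta-learner's features. The paper's meta-learner is XGBoost; here a simple closed-form least-squares weighting of two base models stands in, purely for illustration, and all prediction values are invented.

```python
# Schematic stacking ensemble for one target (latitude): learn weights for
# combining two base models' predictions by minimizing squared error.

def fit_weights(base_preds, targets):
    """Least-squares weights for two base models (closed-form 2x2 solve)."""
    (a, b), y = base_preds, targets
    aa = sum(x * x for x in a)
    bb = sum(x * x for x in b)
    ab = sum(x * z for x, z in zip(a, b))
    ay = sum(x * z for x, z in zip(a, y))
    by = sum(x * z for x, z in zip(b, y))
    det = aa * bb - ab * ab
    return ((bb * ay - ab * by) / det, (aa * by - ab * ay) / det)

# Hypothetical latitude predictions from two base models on three samples.
svr_pred = [47.0, 47.5, 46.8]
cnn_pred = [47.4, 47.1, 47.0]
truth = [47.2, 47.3, 46.9]
w1, w2 = fit_weights((svr_pred, cnn_pred), truth)
ensemble = [w1 * p + w2 * q for p, q in zip(svr_pred, cnn_pred)]
```

A tree-based meta-learner such as XGBoost can additionally capture non-linear interactions between the base predictions, which a fixed weighting cannot.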
{
"text": "The data set for the SMG-CH subtask contains a training set of 22,600 samples, with one sample per line, each formed of a piece of text and a pair of coordinates representing the position on Earth, i.e. latitude and longitude. The development set is composed of 3,086 samples, provided in the same format. The test set consists in 3,097 samples without coordinates. We note that the centroid computed on the training data has a latitude of 47.26 degrees and a longitude of 8.33 degrees, confirming that the average location is on the territory of Switzerland.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Data Set",
"sec_num": "4"
},
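The centroid check mentioned above is a one-liner worth making explicit: averaging the training coordinates gives a quick sanity test that the data is centred on Switzerland. The coordinates below are made up for illustration; the paper reports a centroid of (47.26, 8.33) on the actual training set.

```python
# Compute the centroid (mean latitude, mean longitude) of a coordinate list.

def centroid(coords):
    lats, lons = zip(*coords)
    return (sum(lats) / len(lats), sum(lons) / len(lons))

# Illustrative (lat, lon) pairs roughly around Zurich, Bern, Basel, Lucerne.
samples = [(47.37, 8.54), (46.95, 7.45), (47.56, 7.59), (47.05, 8.31)]
lat, lon = centroid(samples)
print(round(lat, 2), round(lon, 2))
```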
{
"text": "\u03bd-SVR based on string kernels. In the experiments, we use \u03bd-SVR with a pre-computed string kernel, employing the efficient algorithm proposed in . In a set of preliminary experiments, we employed various blended spectrum string kernels based on various n-gram ranges that include ngrams from 3 to 7 characters long. The best performance in terms of both mean absolute error (MAE) and mean squared error (MSE) were attained by a string kernel based on the blended spectrum of 3 to 5 character n-grams. These results are consistent with those reported by Ionescu and Butnaru (2017) , suggesting that the 3-5 n-gram range is optimal for German dialect identification. The resulting kernel matrix is used as input in a double regression setup, with a \u03bd-SVR model for predicting the latitude (in degrees), and another \u03bd-SVR model for predicting the longitude (in degrees), respectively. We tried out values ranging from 10 \u22124 to 10 4 for the regularization penalty C, during the hyperparameter tuning phase. Similarly, for the proportion of support vectors \u03bd, we considered 10 values covering the interval (0, 1] with a step of 0.1. For both regression models, the best value for the parameter C is 10. As for the parameter \u03bd, the default value of 0.5 seems to yield the best results.",
"cite_spans": [
{
"start": 565,
"end": 579,
"text": "Butnaru (2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Tuning",
"sec_num": "4.2"
},
{
"text": "Character-level CNN. Except for the last layer, the architecture used in our experiments is identical to the architecture employed by Butnaru and Ionescu (2019) for Romanian dialect identification. An input of maximum 5000 characters (zero-padding is used as necessary) is expected into the network, with the characters initially encoded with their position in the vocabulary. For each character, a vectorial representation of 128 elements is learned in the embedding layer. Three convolutional blocks follow, each being composed of a convolutional layer with 128 one-dimensional filters of size 7 for the first two blocks, and of size 3 for the last block, resptectively. Each convolutional block also performs downsampling through max-pooling operations with a filter of size 3. Squeeze-and-excitation (SE) attention modules are integrated after each pooling layer. The outputs are then flattened and given as input into the regression layer, containing two neurons, one for predicting the latitude and the other for predicting the longitude. Adam (Kingma and Ba, 2015) is used as the optimization algorithm, in an attempt to minimize the MSE loss. We trained our model on mini-batches of 128 samples for 100 epochs with early stopping, using a learning rate of 5 \u2022 10 \u22124 . The network converged in 60 epochs, after observing no improvements for the last 7 epochs. LSTM based on BERT embeddings. In conjunction with the LSTM, we fine-tuned a BERT model that is pre-trained on a German corpus, as detailed in Section 3.1. Thus, we input the data into a BERT layer, initialized with the corresponding pre-trained parameters. We set the maximum sequence length to 310, which is around the mean sequence length in the SMG-CH data set. The BERT layer is followed by two LSTM layers of size 128 each, both having tanh activation. We use dropout for regularization, randomly removing 20% of the neurons. 
We tried various optimization algorithms, such as Adam, RMSProp and stochastic gradient descent (SGD) with momentum. We obtained the best convergence using SGD with a momentum rate of 0.9 and a learning rate of \u03b1 = 10^-1. Training ended automatically after 18 epochs because of early stopping, as no further improvements of the loss value were registered. Extreme Gradient Boosting. We employed XGBoost as a meta-learner, training it on the predictions of all the other models. In our case, XGBoost provided the best results with the number of estimators set to 1000, a maximum depth of 10 and a learning rate of \u03b1 = 10^-2.",
"cite_spans": [
{
"start": 134,
"end": 160,
"text": "Butnaru and Ionescu (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Tuning",
"sec_num": "4.2"
},
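The stacking setup described above (a boosted-tree meta-learner trained on the base models' predictions) can be sketched as follows. Since we do not assume an XGBoost installation, scikit-learn's GradientBoostingRegressor stands in for XGBoost, reusing the reported values where they transfer (1000 estimators, maximum depth 10, learning rate 10^-2); the base-model predictions are simulated with synthetic noise.

```python
# Illustrative stacking for one coordinate (latitude): a gradient-boosted
# meta-learner trained on simulated predictions of three base models.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 200
true_lat = rng.uniform(46.0, 47.7, n)  # synthetic Swiss latitudes

# Simulated latitude predictions of three base models of varying quality,
# standing in for the SVR, CNN and LSTM outputs.
base_preds = np.column_stack(
    [true_lat + rng.normal(0, s, n) for s in (0.1, 0.3, 0.2)])

meta = GradientBoostingRegressor(n_estimators=1000, max_depth=10,
                                 learning_rate=1e-2, random_state=0)
meta.fit(base_preds[:150], true_lat[:150])

# Held-out MAE of the meta-learner on the remaining 50 samples.
err = np.abs(meta.predict(base_preds[150:]) - true_lat[150:]).mean()
```

A second, identical meta-learner would be trained for the longitude, mirroring the double regression setup used throughout the paper.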
{
"text": "In the development phase, as there was no metric specified in the description of the SMG task, we treated it as any other regression problem and evaluated the predictions in terms of both MAE and MSE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminary Results",
"sec_num": "4.3"
},
{
"text": "In Table 1 , we present the results obtained by our four models on the development set. As the organizers released the ground-truth labels for the test set after the competition, we also include the MAE and the MSE on the test set, for reference. Considering the results presented in Table 1 , it is clear that the algorithm that achieves the best MAE and MSE values is the ensemble based on the XGBoost meta-learner. This does not come as a surprise, as the ensemble combines the predictions of three individual models, each being based on a different type of features and a different learning model. While these aspects are complementary in theory, the results indicate that this is also the case in practice.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 284,
"end": 291,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Preliminary Results",
"sec_num": "4.3"
},
{
"text": "Additionally, we note that the performance achieved by \u03bd-SVR comes close to the one attained by the ensemble, leaving behind the two neural models based on characters and fine-tuned BERT embeddings. This confirms the efficiency of string kernels over deep learning approaches observed in related works (Butnaru and Ionescu, 2019; G\u0203man and Ionescu, 2020) , which seems to be independent of the task to be solved.",
"cite_spans": [
{
"start": 302,
"end": 329,
"text": "(Butnaru and Ionescu, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 330,
"end": 354,
"text": "G\u0203man and Ionescu, 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminary Results",
"sec_num": "4.3"
},
{
"text": "Another comment regarding the results outlined in Table 1 is that the errors on the development set do not fall far from the ones obtained on the test set. However, the ensemble model as well as the \u03bd-SVR based on string kernels obtain slightly lower errors at test time compared to the errors reported on the development set. The opposite seems to happen with the deep models, as both the LSTM based on BERT embeddings and the character-level CNN yield slightly higher errors on the test data than on the development data.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Preliminary Results",
"sec_num": "4.3"
},
{
"text": "Every participant was allowed to make three submissions to compete against other shared task participants. Based on the results reported on the development set, we have decided to choose the character- level CNN, the \u03bd-SVR based on string kernels and the XGBoost ensemble as candidates for the SMG-CH challenge. We excluded the LSTM based on BERT embeddings, since it attains the highest errors among the considered models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminary Results",
"sec_num": "4.3"
},
{
"text": "Submission Table 2 : The final results of our team (UnibucKernel) obtained in the SMG-CH subtask, with the metrics picked by the organizers, oriented on clustering by city and on distances expressed in kilometers. Table 2 shows our final results obtained on the test set, considering the metrics chosen by the organizers, which are oriented on distances (in kilometers) and on clustering accuracy. Considering the official metrics, our best submission placed us in the top six participants.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 2",
"ref_id": null
},
{
"start": 214,
"end": 221,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "We observe that our best performing algorithm, namely the XGBoost ensemble combining the predictions of both deep and shallow methods based on various types of features, achieves a median distance of 25.57 km, a mean distance of 30.52 km and a clustering accuracy of 53.88%. Consistent with our findings on the development set, the \u03bd-SVR based on string kernels does not fall far behind the XGBoost ensemble, obtaining a mean distance and median distance that is about one kilometer higher and a clustering accuracy that is nearly 2.75% lower. The deep character-level CNN seems significantly worse, with around 15 km and more than 10 km higher errors in terms of the median and the mean distances as compared to the other two submitted models, and around 21% lower clustering accuracy. In our opinion, these results stand proof that neural networks might not be the holy grail in every possible situation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "We believe that the proposed methods attain decent results, given the challenging nature of the problem at hand. However, it is clear to us that they could benefit from some improvements, considering the values obtained in the final evaluation phase. One important step in this direction is to visualize the errors, at scale, on the map of Switzerland, for a better understanding of the patterns that our best performing algorithm drew with its predictions. Thus, we overlap the predicted and the ground-truth locations on the map of Switzerland, illustrating the result in Figure 1 . The points depicted on the map are described by their 2D coordinates, latitude and longitude, as in the data set provided for the task. The ground-truth locations are colored in blue, while the predictions are illustrated in red. For each pair of ground-truth and predicted location, there is a line connecting the two points, giving us a better idea regarding the errors made by the XGBoost ensemble, in terms of distance. For the visualization presented in Figure 1 , we have randomly selected a subset of 100 points from the test set, with non-overlapping ground-truth locations. We hereby notice that including all the data points from the test set would generate a visualization that is too cluttered and hard to understand. Hence, we opted for a smaller number of points for a better visualization experience. Considering the annotated map illustrated in Figure 1 , we observe that our predictions tend to be clustered around the main cities in the German-speaking side of Switzerland, such as Z\u00fcrich, Bern, Lucerne and Basel. This bias towards the mentioned cities might be induced by the data samples from the training set, likely not having a well-distributed variance in terms of the locations or the texts used in the learning process. We also observe that the ground-truth locations exhibit a higher variance than the predicted locations. 
One possible solution would be to manually adjust the variance of the predicted locations to match the variance of the actual locations.",
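The variance-matching correction suggested above can be sketched as a simple rescaling of the predictions around their mean; `target_std` below is an illustrative stand-in for the spread of the ground-truth locations, and the predicted coordinates are invented.

```python
# Rescale predicted coordinates around their mean so that their standard
# deviation matches that of the ground-truth locations.
import numpy as np

def match_variance(pred, target_std):
    mu, sigma = pred.mean(axis=0), pred.std(axis=0)
    return mu + (pred - mu) * (target_std / sigma)

pred = np.array([[47.3, 8.5], [47.4, 8.6], [47.2, 8.4]])
target_std = np.array([0.4, 0.6])  # illustrative ground-truth spread
adjusted = match_variance(pred, target_std)
```

The correction preserves the mean prediction while widening (or narrowing) the spread per coordinate, which directly targets the under-dispersion observed on the map.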
"cite_spans": [],
"ref_spans": [
{
"start": 574,
"end": 582,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1044,
"end": 1052,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1446,
"end": 1454,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "In the current work, we tackled the SMG-CH shared subtask of the 2020 VarDial Evaluation Campaign. We addressed this challenge from a shallow perspective, with handcrafted models such as a \u03bd-SVR based on string kernels, as well as from a deep learning perspective, with neural models such as an LSTM based on BERT embeddings and a character-level CNN, respectively. Additionally, we combined the proposed models into an ensemble, employing the XGBoost meta-learner. We obtained our best results with the XGBoost ensemble, which benefits from complementary information from the handcrafted and deep models. We therefore brought one more proof regarding the effectiveness of ensemble learning in general, and of XGBoost, in particular. Another important conclusion is that our shallow model based on string kernels outperforms the two deep neural networks. We consider this as yet another indicator of the high discriminative power that string kernels can bring to a fairly standard learning model, i.e. the \u03bd-SVR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In future work, we aim to explore ways to improve our performance with respect to the metrics proposed by the shared task organizers. Currently, it seems that training the models to simply minimize the MSE or the MAE values is not effective, as our best model was significantly outperformed by the model proposed by the shared task organizers themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/deepset-ai/FARM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by a grant of the Romanian Ministry of Education and Research, CNCS -UEFISCDI, project number PN-III-P1-1.1-TE-2019-0235, within PNCDI III. This article has also benefited from the support of the Romanian Young Academy, which is funded by Stiftung Mercator and the Alexander von Humboldt Foundation for the period 2020-2022.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hierarchical geographical modeling of user locations from social media posts",
"authors": [
{
"first": "Amr",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Liangjie",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "25--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amr Ahmed, Liangjie Hong, and Alex J. Smola. 2013. Hierarchical geographical modeling of user locations from social media posts. In Proceedings of WWW, pages 25-36.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Character-Level Language Modeling with Deeper Self-Attention",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Dokook",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mandy",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "3159--3166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2019. Character-Level Language Modeling with Deeper Self-Attention. In Proceedings of AAAI, pages 3159-3166.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP 2015",
"volume": "",
"issue": "",
"pages": "349--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs. In Proceedings of EMNLP 2015, pages 349-59.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Netflix Prize",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Lanning",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Bennett, Stan Lanning, et al. 2007. The Netflix Prize. In Proceedings of KDD, volume 2007, page 35.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Geographic reference analysis for geographic document querying",
"authors": [
{
"first": "Fr\u00e9d\u00e9rik",
"middle": [],
"last": "Bilhaut",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Charnois",
"suffix": ""
},
{
"first": "Patrice",
"middle": [],
"last": "Enjalbert",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Mathet",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL-GEOREF",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9d\u00e9rik Bilhaut, Thierry Charnois, Patrice Enjalbert, and Yann Mathet. 2003. Geographic reference analysis for geographic document querying. In Proceedings of HLT-NAACL-GEOREF, pages 55-62.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Random forests. Machine learning",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "45",
"issue": "",
"pages": "5--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman. 2001. Random forests. Machine learning, 45(1):5-32.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "From RankNet to LambdaRank to LambdaMART: An Overview. Learning",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Burges",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher J.C. Burges. 2010. From RankNet to LambdaRank to LambdaMART: An Overview. Learning, 11(23-581):81.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "UnibucKernel: A kernel-based learning method for complex word identification",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of BEA-13",
"volume": "",
"issue": "",
"pages": "175--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Butnaru and Radu Tudor Ionescu. 2018a. UnibucKernel: A kernel-based learning method for complex word identification. In Proceedings of BEA-13, pages 175-183.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "UnibucKernel Reloaded: First Place in Arabic Dialect Identification for the Second Year in a Row",
"authors": [
{
"first": "Andrei",
"middle": [
"M"
],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of VarDial",
"volume": "",
"issue": "",
"pages": "77--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei M. Butnaru and Radu Tudor Ionescu. 2018b. UnibucKernel Reloaded: First Place in Arabic Dialect Identification for the Second Year in a Row. In Proceedings of VarDial, pages 77-87.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "MOROCO: The Moldavian and Romanian Dialectal Corpus",
"authors": [
{
"first": "Andrei",
"middle": [
"M"
],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "688--698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei M. Butnaru and Radu Tudor Ionescu. 2019. MOROCO: The Moldavian and Romanian Dialectal Corpus. In Proceedings of ACL, pages 688-698.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Training \u03bd-Support Vector Regression: Theory and Algorithms",
"authors": [
{
"first": "Chih-Chung",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2002,
"venue": "Neural Computation",
"volume": "14",
"issue": "",
"pages": "1959--1977",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2002. Training \u03bd-Support Vector Regression: Theory and Algorithms. Neural Computation, 14:1959-1977.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "XGBoost: A scalable tree boosting system",
"authors": [
{
"first": "Tianqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of KDD",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In Proceedings of KDD, pages 785-794.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "You are where you tweet: a content-based approach to geo-locating twitter users",
"authors": [
{
"first": "Zhiyuan",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Caverlee",
"suffix": ""
},
{
"first": "Kyumin",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of CIKM",
"volume": "",
"issue": "",
"pages": "759--768",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you tweet: a content-based approach to geo-locating twitter users. In Proceedings of CIKM, pages 759-768.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Support-vector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Learning",
"volume": "20",
"issue": "",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning, 20(3):273-297.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automated essay scoring with string kernels and word embeddings",
"authors": [
{
"first": "M\u0203d\u0203lina",
"middle": [],
"last": "Cozma",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "503--509",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u0203d\u0203lina Cozma, Andrei Butnaru, and Radu Tudor Ionescu. 2018. Automated essay scoring with string kernels and word embeddings. In Proceedings of ACL, pages 503-509.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirec- tional Transformers for Language Understanding. In Proceedings of NAACL, pages 4171-4186.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Computing geographical scopes of web resources",
"authors": [
{
"first": "Junyan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
},
{
"first": "Narayanan",
"middle": [],
"last": "Shivakumar",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of VLDV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyan Ding, Luis Gravano, and Narayanan Shivakumar. 2000. Computing geographical scopes of web resources. In Proceedings of VLDV.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mapping dialectal variation by querying social media",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Doyle",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EACL 2014",
"volume": "",
"issue": "",
"pages": "98--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Doyle. 2014. Mapping dialectal variation by querying social media. In Proceedings of EACL 2014, pages 98-106.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Support vector regression machines",
"authors": [
{
"first": "Harris",
"middle": [],
"last": "Drucker",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Christopher",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Burges",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"J"
],
"last": "Kaufman",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "155--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harris Drucker, Christopher J.C. Burges, Linda Kaufman, Alex J. Smola, and Vladimir Vapnik. 1997. Support vector regression machines. In Proceedings of NIPS, pages 155-161.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A latent variable model for geographic lexical variation",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP 2010",
"volume": "",
"issue": "",
"pages": "1277--1287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Brendan O'Connor, Noah A Smith, and Eric Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of EMNLP 2010, pages 1277-1287.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Greedy function approximation: a gradient boosting machine",
"authors": [
{
"first": "H",
"middle": [],
"last": "Jerome",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2001,
"venue": "Annals of statistics",
"volume": "",
"issue": "",
"pages": "1189--1232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerome H Friedman. 2001. Greedy function approximation: a gradient boosting machine. Annals of statistics, pages 1189-1232.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position",
"authors": [
{
"first": "Kunihiko",
"middle": [],
"last": "Fukushima",
"suffix": ""
}
],
"year": 1980,
"venue": "Biological cybernetics",
"volume": "36",
"issue": "4",
"pages": "193--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kunihiko Fukushima. 1980. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics, 36(4):193-202.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Lossless Compression Based on the Sequence Memoizer",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Gasthaus",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Wood",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of DCC",
"volume": "",
"issue": "",
"pages": "337--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Gasthaus, Frank Wood, and Yee Whye Teh. 2010. Lossless Compression Based on the Sequence Memoizer. In Proceedings of DCC, page 337-345.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Geo-parsing messages from microtext",
"authors": [
{
"first": "Judith",
"middle": [],
"last": "Gelernter",
"suffix": ""
},
{
"first": "Nikolai",
"middle": [],
"last": "Mushegian",
"suffix": ""
}
],
"year": 2011,
"venue": "Transactions in GIS",
"volume": "15",
"issue": "6",
"pages": "753--773",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judith Gelernter and Nikolai Mushegian. 2011. Geo-parsing messages from microtext. Transactions in GIS, 15(6):753-773.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Radu Tudor Ionescu, Nicolae-Catalin Ristea, and Nicu Sebe. 2020. Non-linear Neurons with Human-like Apical Dendrite Activations",
"authors": [
{
"first": "Mariana-Iuliana",
"middle": [],
"last": "Georgescu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.03229"
]
},
"num": null,
"urls": [],
"raw_text": "Mariana-Iuliana Georgescu, Radu Tudor Ionescu, Nicolae-Catalin Ristea, and Nicu Sebe. 2020. Non-linear Neu- rons with Human-like Apical Dendrite Activations. arXiv preprint arXiv:2003.03229.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning to forget: Continual prediction with LSTM",
"authors": [
{
"first": "Felix",
"middle": [
"A"
],
"last": "Gers",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Cummins",
"suffix": ""
}
],
"year": 2000,
"venue": "Neural Computation",
"volume": "12",
"issue": "10",
"pages": "2451--2471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix A. Gers, J\u00fcrgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451-2471.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Single and Cross-domain Polarity Classification using String Kernels",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Gim\u00e9nez-P\u00e9rez",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Franco-Salvador",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "558--563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosa M. Gim\u00e9nez-P\u00e9rez, Marc Franco-Salvador, and Paolo Rosso. 2017. Single and Cross-domain Polarity Classification using String Kernels. In Proceedings of EACL, pages 558-563.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The Unreasonable Effectiveness of Machine Learning in Moldavian versus Romanian Dialect Identification",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "G\u0203man",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.15700"
]
},
"num": null,
"urls": [],
"raw_text": "Mihaela G\u0203man and Radu Tudor Ionescu. 2020. The Unreasonable Effectiveness of Machine Learning in Molda- vian versus Romanian Dialect Identification. journal=arXiv preprint arXiv:2007.15700.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Proceedings of VarDial",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "G\u0203man",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Tudor",
"middle": [],
"last": "Radu",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Partanen",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Purschke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Scherrer",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihaela G\u0203man, Dirk Hovy, Radu Tudor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Niko Partanen, Christoph Purschke, Yves Scherrer, and Marcos Zampieri. 2020. A Report on the VarDial Evaluation Campaign 2020. In Proceedings of VarDial.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Text-based twitter user geolocation prediction",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "49",
"issue": "",
"pages": "451--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2014. Text-based twitter user geolocation prediction. Journal of Artificial Intelligence Research, 49:451-500.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Practical lessons from predicting clicks on ads at facebook",
"authors": [
{
"first": "Xinran",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Junfeng",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Ou",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Tianbing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yanxin",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Atallah",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Herbrich",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Bowers",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ADKDD",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, Stuart Bowers, et al. 2014. Practical lessons from predicting clicks on ads at facebook. In Proceedings of ADKDD, pages 1-9.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. A Field Guide to Dynamical Recurrent Neural Networks",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Frasconi",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "237--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, J\u00fcrgen Schmidhuber, et al. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. A Field Guide to Dynamical Recurrent Neural Networks, pages 237-244.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Discovering geographical topics in the twitter stream",
"authors": [
{
"first": "Liangjie",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Amr",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Gurumurthy",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Kostas",
"middle": [],
"last": "Tsioutsiouliklis",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "769--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liangjie Hong, Amr Ahmed, Siva Gurumurthy, Alex J. Smola, and Kostas Tsioutsiouliklis. 2012. Discovering geographical topics in the twitter stream. In Proceedings of WWW, pages 769-778.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Capturing Regional Variation with Distributed Place Representations and Geographic Retrofitting",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Purschke",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "4383--4394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy and Christoph Purschke. 2018. Capturing Regional Variation with Distributed Place Representations and Geographic Retrofitting. In Proceedings of EMNLP, pages 4383-4394.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Understanding us regional linguistic variation with twitter data analysis. Computers, Environment and Urban Systems",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Diansheng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Kasakoff",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Grieve",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "59",
"issue": "",
"pages": "244--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Huang, Diansheng Guo, Alice Kasakoff, and Jack Grieve. 2016. Understanding us regional linguistic variation with twitter data analysis. Computers, Environment and Urban Systems, 59:244-255.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning to Identify Arabic and German Dialects using Multiple Kernels",
"authors": [
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Andrei",
"middle": [
"M."
],
"last": "Butnaru",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of VarDial",
"volume": "",
"issue": "",
"pages": "200--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Tudor Ionescu and Andrei M. Butnaru. 2017. Learning to Identify Arabic and German Dialects using Multiple Kernels. In Proceedings of VarDial, pages 200-209.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Improving the results of string kernels in sentiment analysis and Arabic dialect identification by adapting them to your test set",
"authors": [
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Andrei",
"middle": [
"M."
],
"last": "Butnaru",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1084--1090",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Tudor Ionescu and Andrei M. Butnaru. 2018. Improving the results of string kernels in sentiment analysis and Arabic dialect identification by adapting them to your test set. In Proceedings of EMNLP, pages 1084-1090.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "UnibucKernel: An Approach for Arabic Dialect Identification based on Multiple String Kernels",
"authors": [
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Popescu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of VarDial",
"volume": "",
"issue": "",
"pages": "135--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Tudor Ionescu and Marius Popescu. 2016. UnibucKernel: An Approach for Arabic Dialect Identification based on Multiple String Kernels. In Proceedings of VarDial, pages 135-144.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Can string kernels pass the test of time in native language identification?",
"authors": [
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Popescu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of BEA-12",
"volume": "",
"issue": "",
"pages": "224--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Tudor Ionescu and Marius Popescu. 2017. Can string kernels pass the test of time in native language identification? In Proceedings of BEA-12, pages 224-234.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Can characters reveal your native language? A language-independent approach to native language identification",
"authors": [
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1363--1373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Tudor Ionescu, Marius Popescu, and Aoife Cahill. 2014. Can characters reveal your native language? A language-independent approach to native language identification. In Proceedings of EMNLP, pages 1363-1373.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "String kernels for native language identification: Insights from behind the curtains",
"authors": [
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "3",
"pages": "491--525",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Tudor Ionescu, Marius Popescu, and Aoife Cahill. 2016. String kernels for native language identification: Insights from behind the curtains. Computational Linguistics, 42(3):491-525.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Toward a description of African American Vernacular English dialect regions using \"Black Twitter",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2015,
"venue": "American Speech",
"volume": "90",
"issue": "4",
"pages": "403--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor Jones. 2015. Toward a description of African American Vernacular English dialect regions using \"Black Twitter\". American Speech, 90(4):403-440.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Character-Aware Neural Language Models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "2741--2749",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-Aware Neural Language Models. In Proceedings of AAAI, pages 2741-2749.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Convolutional Neural Networks for Sentence Classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of EMNLP, pages 1746-1751.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P."
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "I'm eating a sandwich in Glasgow\" modeling locations with tweets",
"authors": [
{
"first": "Sheila",
"middle": [],
"last": "Kinsella",
"suffix": ""
},
{
"first": "Vanessa",
"middle": [],
"last": "Murdock",
"suffix": ""
},
{
"first": "Neil O'",
"middle": [],
"last": "Hare",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of SMUC",
"volume": "",
"issue": "",
"pages": "61--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheila Kinsella, Vanessa Murdock, and Neil O'Hare. 2011. \"I'm eating a sandwich in Glasgow\" modeling locations with tweets. In Proceedings of SMUC, pages 61-68.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "ImageNet Classification with Deep Convolutional Neural Networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceddings of NIPS",
"volume": "",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Proceddings of NIPS, pages 1097-1105.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Backpropagation Applied to Handwritten Zip Code Recognition",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Boser",
"suffix": ""
},
{
"first": "John",
"middle": [
"S"
],
"last": "Denker",
"suffix": ""
},
{
"first": "Donnie",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"E"
],
"last": "Howard",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Hubbard",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"D"
],
"last": "Jackel",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural Computation",
"volume": "1",
"issue": "4",
"pages": "541--551",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Bernhard Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne Hubbard, and Lawrence D. Jackel. 1989. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Computa- tion, 1(4):541-551.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Learning methods for generic object recognition with invariance to pose and lighting",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "LeCun",
"suffix": ""
},
{
"first": "Fu",
"middle": [
"Jie"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of CVPR",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Fu Jie Huang, and Leon Bottou. 2004. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of CVPR, volume 2, pages II-104.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Robust Logitboost and Adaptive Base Class (ABC) Logitboost",
"authors": [
{
"first": "Ping",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of UAI",
"volume": "",
"issue": "",
"pages": "302--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ping Li. 2010. Robust Logitboost and Adaptive Base Class (ABC) Logitboost. In Proceedings of UAI, pages 302-311.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Geotagging with local lexicons to build indexes for textually-specified spatial data",
"authors": [
{
"first": "Michael",
"middle": [
"D."
],
"last": "Lieberman",
"suffix": ""
},
{
"first": "Hanan",
"middle": [],
"last": "Samet",
"suffix": ""
},
{
"first": "Jagan",
"middle": [],
"last": "Sankaranarayanan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ICDE",
"volume": "",
"issue": "",
"pages": "201--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael D Lieberman, Hanan Samet, and Jagan Sankaranarayanan. 2010. Geotagging with local lexicons to build indexes for textually-specified spatial data. In In Proceedings of ICDE, pages 201-212.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "TweetGeo -A Tool for Collecting, Processing and Analysing Geo-encoded Linguistic Data",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
},
{
"first": "Curdin",
"middle": [],
"last": "Derungs",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "3412--3421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107, Tanja Samard\u017ei\u0107, and Curdin Derungs. 2016. TweetGeo -A Tool for Collecting, Processing and Analysing Geo-encoded Linguistic Data. In Proceedings of COLING, pages 3412-3421.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Text Classification Using String Kernels",
"authors": [
{
"first": "Huma",
"middle": [],
"last": "Lodhi",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"J C H"
],
"last": "Watkins",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "563--569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huma Lodhi, John Shawe-Taylor, Nello Cristianini, and Christopher J.C.H. Watkins. 2001. Text Classification Using String Kernels. In Proceedings of NIPS, pages 563-569.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Sentence selection with neural networks using string kernels",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Masala",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ruseti",
"suffix": ""
},
{
"first": "Traian",
"middle": [],
"last": "Rebedea",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of KES",
"volume": "",
"issue": "",
"pages": "1774--1782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Masala, Stefan Ruseti, and Traian Rebedea. 2017. Sentence selection with neural networks using string kernels. In Proceedings of KES, pages 1774-1782.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "SC-UPB at the VarDial 2019 Evaluation Campaign: Moldavian vs. Romanian Cross-Dialect Topic Identification",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Onose",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "\u015etefan",
"middle": [],
"last": "Tr\u0203u\u015fan-Matu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of VarDial",
"volume": "",
"issue": "",
"pages": "172--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Onose, Dumitru-Clementin Cercel, and \u015e tefan Tr\u0203u\u015fan-Matu. 2019. SC-UPB at the VarDial 2019 Evalu- ation Campaign: Moldavian vs. Romanian Cross-Dialect Topic Identification. In Proceedings of VarDial, pages 172-177.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "The Story of the Characters, the DNA and the Native Language",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of BEA-8",
"volume": "",
"issue": "",
"pages": "270--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Popescu and Radu Tudor Ionescu. 2013. The Story of the Characters, the DNA and the Native Language. In Proceedings of BEA-8, pages 270-278.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "HASKER: An efficient algorithm for string kernels. Application to polarity classification in various languages",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Grozea",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of KES",
"volume": "",
"issue": "",
"pages": "1755--1763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Popescu, Cristian Grozea, and Radu Tudor Ionescu. 2017. HASKER: An efficient algorithm for string kernels. Application to polarity classification in various languages. In Proceedings of KES, pages 1755-1763.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "An efficient location extraction algorithm by leveraging web contextual information",
"authors": [
{
"first": "Teng",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of GIS",
"volume": "",
"issue": "",
"pages": "53--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teng Qin, Rong Xiao, Lei Fang, Xing Xie, and Lei Zhang. 2010. An efficient location extraction algorithm by leveraging web contextual information. In Proceedings of GIS, pages 53-60.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Determining the spatial reader scopes of news sources using local lexicons",
"authors": [
{
"first": "Gianluca",
"middle": [],
"last": "Quercini",
"suffix": ""
},
{
"first": "Hanan",
"middle": [],
"last": "Samet",
"suffix": ""
},
{
"first": "Jagan",
"middle": [],
"last": "Sankaranarayanan",
"suffix": ""
},
{
"first": "Michael D",
"middle": [],
"last": "Lieberman",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of GIS",
"volume": "",
"issue": "",
"pages": "43--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gianluca Quercini, Hanan Samet, Jagan Sankaranarayanan, and Michael D Lieberman. 2010. Determining the spatial reader scopes of news sources using local lexicons. In Proceedings of GIS, pages 43-52.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Improving language understanding with unsupervised learning",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "A neural model for user geolocation and lexical dialectology",
"authors": [
{
"first": "Afshin",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.04008"
]
},
"num": null,
"urls": [],
"raw_text": "Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2017. A neural model for user geolocation and lexical dialectology. arXiv preprint arXiv:1704.04008.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Supervised textbased geolocation using language models on an adaptive grid",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Speriosu",
"suffix": ""
},
{
"first": "Sarat",
"middle": [],
"last": "Rallapalli",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Wing",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1500--1510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller, Michael Speriosu, Sarat Rallapalli, Benjamin Wing, and Jason Baldridge. 2012. Supervised text- based geolocation using language models on an adaptive grid. In Proceedings of EMNLP, pages 1500-1510.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Where's@ wally? A Classification Approach to Geolocating Users Based on their Social Ties",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Rout",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Preo\u0163iuc-Pietro",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of HT",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Rout, Kalina Bontcheva, Daniel Preo\u0163iuc-Pietro, and Trevor Cohn. 2013. Where's@ wally? A Classifi- cation Approach to Geolocating Users Based on their Social Ties. In Proceedings of HT, pages 11-20.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "A tutorial on support vector regression",
"authors": [
{
"first": "Alex",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
}
],
"year": 2004,
"venue": "Statistics and computing",
"volume": "14",
"issue": "3",
"pages": "199--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex J. Smola and Bernhard Sch\u00f6lkopf. 2004. A tutorial on support vector regression. Statistics and computing, 14(3):199-222.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Generating Text with Recurrent Neural Networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Martens",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "1017--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating Text with Recurrent Neural Networks. In Proceedings of ICML, pages 1017-1024.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Corpus-based dialectometry: Aggregate morphosyntactic variability in british english dialects",
"authors": [
{
"first": "Benedikt",
"middle": [],
"last": "Szmrecsanyi",
"suffix": ""
}
],
"year": 2008,
"venue": "International Journal of Humanities and Arts Computing",
"volume": "2",
"issue": "1-2",
"pages": "279--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benedikt Szmrecsanyi. 2008. Corpus-based dialectometry: Aggregate morphosyntactic variability in british en- glish dialects. International Journal of Humanities and Arts Computing, 2(1-2):279-296.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "DTeam @ VarDial 2019: Ensemble based on skip-gram and triplet loss neural networks for Moldavian vs. Romanian cross-dialect topic identification",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Tudoreanu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of VarDial",
"volume": "",
"issue": "",
"pages": "202--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Tudoreanu. 2019. DTeam @ VarDial 2019: Ensemble based on skip-gram and triplet loss neural networks for Moldavian vs. Romanian cross-dialect topic identification. In Proceedings of VarDial, pages 202-208.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS, pages 5998-6008.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "On the Practical Computational Power of Finite Precision RNNs for Language Recognition",
"authors": [
{
"first": "Gail",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Eran",
"middle": [],
"last": "Yahav",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "740--745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the Practical Computational Power of Finite Precision RNNs for Language Recognition. In Proceedings of ACL, pages 740-745.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Generalization of backpropagation with application to a recurrent gas market model",
"authors": [
{
"first": "Paul",
"middle": [
"J"
],
"last": "Werbos",
"suffix": ""
}
],
"year": 1988,
"venue": "Neural Networks",
"volume": "1",
"issue": "4",
"pages": "339--356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul J. Werbos. 1988. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1(4):339-356.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Quantitative social dialectology: Explaining linguistic variation geographically and socially",
"authors": [
{
"first": "Martijn",
"middle": [],
"last": "Wieling",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Nerbonne",
"suffix": ""
},
{
"first": "R",
"middle": [
"Harald"
],
"last": "Baayen",
"suffix": ""
}
],
"year": 2011,
"venue": "PloS One",
"volume": "6",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martijn Wieling, John Nerbonne, and R. Harald Baayen. 2011. Quantitative social dialectology: Explaining linguistic variation geographically and socially. PloS One, 6(9):e23613.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Simple supervised document geolocation with geodesic grids",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Wing",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "955--964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Wing and Jason Baldridge. 2011. Simple supervised document geolocation with geodesic grids. In Proceedings of ACL, pages 955-964.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "A Stochastic Memoizer for Sequence Data",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Wood",
"suffix": ""
},
{
"first": "C\u00e9dric",
"middle": [],
"last": "Archambeau",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Gasthaus",
"suffix": ""
},
{
"first": "Lancelot",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "1129--1136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Wood, C\u00e9dric Archambeau, Jan Gasthaus, Lancelot James, and Yee Whye Teh. 2009. A Stochastic Memo- izer for Sequence Data. In Proceedings of ICML, pages 1129-1136.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Character-level Convolutional Networks for Text Classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level Convolutional Networks for Text Classifica- tion. In Proceedings of NIPS, pages 649-657.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Distances between ground-truth locations (blue) and predicted locations (red) for a subset of 100 Swiss German Jodels randomly selected from the official test set. Best viewed in color."
},
"TABREF0": {
"html": null,
"num": null,
"text": "Results in terms of Mean Absolute Error (MAE) and Mean Squared Error (MSE) obtained on the development set and on the test set by the proposed handcrafted, deep and ensemble algorithms. The reported MAE and MSE values represent the average value computed over the latitude and the longitude, both being expressed in degrees.",
"content": "<table><tr><td>Method</td><td>MAE</td><td/><td>MSE</td><td/></tr><tr><td/><td>Development</td><td>Test</td><td>Development</td><td>Test</td></tr><tr><td>\u03bd-SVR + string kernels</td><td>0.2306</td><td>0.2289</td><td>0.1066</td><td>0.1049</td></tr><tr><td>character-level CNN</td><td>0.2937</td><td>0.3123</td><td>0.1552</td><td>0.1633</td></tr><tr><td>LSTM + BERT embeddings</td><td>0.3594</td><td>0.3618</td><td>0.2226</td><td>0.2259</td></tr><tr><td>XGBoost ensemble</td><td>0.2234</td><td>0.2207</td><td>0.1043</td><td>0.1017</td></tr></table>",
"type_str": "table"
}
}
}
}