{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:12.181338Z"
},
"title": "ZHAW-InIT -Social Media Geolocation at VarDial 2020",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Benites",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Manuela",
"middle": [],
"last": "H\u00fcrlimann",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mark",
"middle": [],
"last": "Cieliebak",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe our approaches for the Social Media Geolocation (SMG) task at the VarDial Evaluation Campaign 2020. The goal was to predict geographical location (latitudes and longitudes) given an input text. There were three subtasks corresponding to German-speaking Switzerland (CH), Germany and Austria (DE-AT), and Croatia, Bosnia and Herzegovina, Montenegro and Serbia (BCMS). We submitted solutions to all subtasks but focused our development efforts on the CH subtask, where we achieved third place out of 16 submissions with a median distance of 15.93 km and had the best result of 14 unconstrained systems. In the DE-AT subtask, we ranked sixth out of ten submissions (fourth of 8 unconstrained systems) and for BCMS we achieved fourth place out of 13 submissions (second of 11 unconstrained systems).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe our approaches for the Social Media Geolocation (SMG) task at the VarDial Evaluation Campaign 2020. The goal was to predict geographical location (latitudes and longitudes) given an input text. There were three subtasks corresponding to German-speaking Switzerland (CH), Germany and Austria (DE-AT), and Croatia, Bosnia and Herzegovina, Montenegro and Serbia (BCMS). We submitted solutions to all subtasks but focused our development efforts on the CH subtask, where we achieved third place out of 16 submissions with a median distance of 15.93 km and had the best result of 14 unconstrained systems. In the DE-AT subtask, we ranked sixth out of ten submissions (fourth of 8 unconstrained systems) and for BCMS we achieved fourth place out of 13 submissions (second of 11 unconstrained systems).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The 7th Workshop on NLP for Similar Languages, Varieties and Dialects (G\u0203man et al., 2020) introduced a new task on Social Media Geolocation (SMG): Given a social media post, a system has to predict the latitude and longitude of where it was written. This is an extension to previous evaluation campaigns (Zampieri et al., 2019; Zampieri et al., 2018; , which focused on dialect identification, assigning a discrete label -usually corresponding to a geographic region -to a piece of text. Geolocation prediction allows for a more fine-grained assessment of dialectal varieties without the need to define hard and somewhat arbitrary boundaries within dialect continua.",
"cite_spans": [
{
"start": 70,
"end": 90,
"text": "(G\u0203man et al., 2020)",
"ref_id": null
},
{
"start": 305,
"end": 328,
"text": "(Zampieri et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 329,
"end": 351,
"text": "Zampieri et al., 2018;",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our motivation for participating in the SMG shared task was to gain more knowledge about real-world, noisy, digital data. More specifically, we seek to mine written texts for different Swiss German Dialects and would profit from being able to place them geographically, particularly in the context of our other projects on Swiss German.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We submitted solutions to all three sub-tasks (see Results in Section 4) and, in light of our motivation, focused specifically on the Swiss sub-task during development. Our submissions are based on three different models (see Section 3): an SVM meta-classifier combining different classifiers based on word and character features for CH (see Section 3.4); a single SVM with fewer features and no meta-classifer for DE-AT and BCMS (see Section 3.6); and a language modelling approach (see Section 3.5) which was applied to all subtasks. We furthermore experimented with character-level Convolutional Neural Networks (CNNs) (see Section 3.7). For all systems, we cluster geolocations to get a number of discrete labels to predict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The central focus of the evaluation campaign at VarDial is to identify dialects of various languages. There have been three previous editions, which laid the basis for dialect identification in Swiss German (Zampieri et al., 2019; Zampieri et al., 2018; . Dialect classification is useful for many tasks and applications, e.g. for POS-tagging of dialectal data (Hollenstein and Aepli, 2014) , for compilation of German dialect corpora (Hollenstein and Aepli, 2015) , or for automatic speech recognition of Swiss German.",
"cite_spans": [
{
"start": 207,
"end": 230,
"text": "(Zampieri et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 231,
"end": 253,
"text": "Zampieri et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 361,
"end": 390,
"text": "(Hollenstein and Aepli, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 435,
"end": 464,
"text": "(Hollenstein and Aepli, 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Past VarDial campaigns have led to the creation of diverse datasets for language and dialect identification, for example: Samard\u017ei\u0107 et al. (2016) provide a Swiss German dialect data set based on the Archi-Mob corpus, Jauhiainen et al. (2019) present a collection of cuneiform texts derived from a larger open access collection, and Huang et al. (2000) and McEnery and Xiao (2003) created data sets for Taiwanese and Mandarin Chinese. The 2020 SMG task is based on social media posts from Twitter (Ljube\u0161i\u0107 et al., 2016) and Jodel (Hovy and Purschke, 2018) , annotated with geolocations (see Section 3.1).",
"cite_spans": [
{
"start": 332,
"end": 351,
"text": "Huang et al. (2000)",
"ref_id": "BIBREF18"
},
{
"start": 356,
"end": 379,
"text": "McEnery and Xiao (2003)",
"ref_id": "BIBREF30"
},
{
"start": 496,
"end": 519,
"text": "(Ljube\u0161i\u0107 et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 530,
"end": 555,
"text": "(Hovy and Purschke, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Many studies addressed the problem of language and dialect identification, creating a noticeable amount of related work, summarised in the evaluation campaign reports (Zampieri et al., 2019; Zampieri et al., 2018; and Jauhiainen et al. (2018b) . A typical approach uses Support Vector Machines (SVMs) with different feature extraction methods. The use of character language models for language identification has previously been studied by Vatanen et al. (2010) .",
"cite_spans": [
{
"start": 167,
"end": 190,
"text": "(Zampieri et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 191,
"end": 213,
"text": "Zampieri et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 218,
"end": 243,
"text": "Jauhiainen et al. (2018b)",
"ref_id": "BIBREF21"
},
{
"start": 440,
"end": 461,
"text": "Vatanen et al. (2010)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Over the years various models have been proposed for text-based geolocation prediction (Han et al., 2014; Kinsella et al., 2011; Rahimi et al., 2017b; Rahimi et al., 2017a) .",
"cite_spans": [
{
"start": 87,
"end": 105,
"text": "(Han et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 106,
"end": 128,
"text": "Kinsella et al., 2011;",
"ref_id": "BIBREF24"
},
{
"start": 129,
"end": 150,
"text": "Rahimi et al., 2017b;",
"ref_id": "BIBREF32"
},
{
"start": 151,
"end": 172,
"text": "Rahimi et al., 2017a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As for discretization of geolocations, Wing and Baldridge (2014) propose a hierarchical approach to divide the earth into a grid with different levels of granularity. Similarly to Duong-Trung et al. 2017, we use a K-Means clustering approach to subdivide the space, which is more data-driven than a grid.",
"cite_spans": [
{
"start": 39,
"end": 64,
"text": "Wing and Baldridge (2014)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our main focus is the CH subtask, where our approach is, from a text classification point of view, most similar to MAZA, which was proposed at VarDial 2017 . MAZA uses Term Frequency (TF) on character n-grams and word unigram features to train several SVMs. Then it uses a Random Forest meta-classifier with 10-fold cross-validation on the predictions of the SVMs. We extended this approach and used Term Frequency-Inverse Document Frequency (TF-IDF) on word and on character level. We used an SVM as a meta-classifier, and concatenated the output of the base classifiers (see Section 3.4). This solution approach was motivated by the fact that we have already applied similar architectures successfully in a wide range of tasks (Benites de Azevedo e Souza et al., 2019; Benites et al., 2018b; , especially in (Benites et al., 2018a) we established empirically that for (Swiss German) dialect recognition TF-IDF is better than just TF.",
"cite_spans": [
{
"start": 729,
"end": 770,
"text": "(Benites de Azevedo e Souza et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 771,
"end": 793,
"text": "Benites et al., 2018b;",
"ref_id": "BIBREF9"
},
{
"start": 810,
"end": 833,
"text": "(Benites et al., 2018a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For the BCMS and DE-AT subtasks, we used a single SVM with word-and character-level TF-IDF features (see Section 3.6). We also made submissions using a variant of the HeLI method by Jauhiainen et al. (2016; Jauhiainen et al. (2018a) , which we extended with a voting mechanism that takes the centre of the top predicted coordinates in case of low confidence (see Section 3.5).",
"cite_spans": [
{
"start": 182,
"end": 206,
"text": "Jauhiainen et al. (2016;",
"ref_id": "BIBREF19"
},
{
"start": 207,
"end": 232,
"text": "Jauhiainen et al. (2018a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The shared task data was collected from the social media platforms Jodel 1 and Twitter 2 . Jodel posts were collected from Germany and Austria (DE-AT), as well as German-speaking Switzerland (CH) (Hovy and Purschke, 2018) . Tweets were sourced from Bosnia and Herzegovina, Croatia, Montenegro, and Serbia (BCMS) (Ljube\u0161i\u0107 et al., 2016) . Every sample contains, in addition to the text, latitude and longitude coordinates as set by the users of the respective platform (Jodel or Twitter).",
"cite_spans": [
{
"start": 196,
"end": 221,
"text": "(Hovy and Purschke, 2018)",
"ref_id": "BIBREF17"
},
{
"start": 312,
"end": 335,
"text": "(Ljube\u0161i\u0107 et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
{
"text": "While Tweets are usually authored by a single person, the Jodel samples consist of short conversations involving multiple speakers. This leads to some samples containing multiple dialects. Similarly, we observed samples containing indirect speech in non-local dialects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
{
"text": "For evaluation, two metrics were defined by the organizers: the median and the mean distances between predicted and real geolocations across all texts in the test set, with the former being the official metric of the SMG shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
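{
"text": "Both metrics can be computed from the haversine great-circle distance. The following is a minimal sketch in pure Python (function names are ours, not part of the shared task tooling):

```python
import math
from statistics import mean, median

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def evaluate(predicted, gold):
    """Median (the official metric) and mean distance over paired coordinates."""
    dists = [haversine_km(p[0], p[1], g[0], g[1]) for p, g in zip(predicted, gold)]
    return median(dists), mean(dists)
```

For a single sample, median and mean coincide; over a test set the median is much less sensitive to a few far-off predictions than the mean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": null
},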
{
"text": "We opted to frame the task as a text classification problem, by combining locations into discrete clusters and predicting cluster identities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
{
"text": "In order to obtain a small number of classes, we use K-Means clustering (Lloyd, 1982) to cluster the geolocations. This allows standard classification methods to tackle the problem, since a certain number of samples per class can then be guaranteed. Generalization is increased, while resolution suffers from the somewhat coarser view. We experimented with different values of k, which will be discussed in subsequent sections. In order to generate coordinates for prediction, we used the centroid coordinate of the predicted cluster.",
"cite_spans": [
{
"start": 72,
"end": 85,
"text": "(Lloyd, 1982)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Label Clustering",
"sec_num": "3.2"
},
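{
"text": "The discretization step can be sketched with scikit-learn's KMeans (the toy coordinates below are ours): the (lat, lon) pairs are clustered, the cluster index becomes the class label, and a predicted class is mapped back to the centroid of its cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy (lat, lon) pairs standing in for the training geolocations.
coords = np.array([[47.37, 8.54], [47.38, 8.53], [46.95, 7.45],
                   [46.94, 7.44], [47.56, 7.59], [47.55, 7.58]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(coords)
labels = kmeans.labels_  # discrete class targets for the text classifiers

def cluster_to_coord(cluster_id):
    """Map a predicted class back to a coordinate: the cluster centroid."""
    lat, lon = kmeans.cluster_centers_[cluster_id]
    return float(lat), float(lon)
```

Note that clustering raw degrees treats latitude and longitude as Euclidean coordinates, which is an acceptable approximation at the scale of a single country.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Clustering",
"sec_num": null
},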
{
"text": "The basic preprocessing step common to all systems consisted in splitting the sentences into words on whitespaces. No stopword removal or lemmatization was performed since these steps have been shown to erase features which are useful for differentiating between the dialects (Maharjan et al., 2014). Afterwards, multiple feature extraction methods were applied, as explained in the next sections. We use a collection of different feature sets based on the TF-IDF representation (Manning et al., 2008) . They vary by the type of tokens considered (words, characters, and characters ignoring whitespace), case-sensitivity, the range of n-grams, and the maximum number of features in the set. Table 1 gives an overview of the feature sets that were used. Note that the token type char wb refers to character tokens ignoring whitespace between words. We use the implementation provided by the scikit-learn 3 library to extract these features.",
"cite_spans": [
{
"start": 479,
"end": 501,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 691,
"end": 698,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "3.3"
},
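{
"text": "This feature extraction can be sketched with scikit-learn's TfidfVectorizer; the analyzer argument selects word, char, or char_wb tokens (char_wb restricts character n-grams to word boundaries). The exact n-gram ranges and feature caps are those of Table 1; the values below are illustrative:

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["gruezi mitenand", "servus zame", "griass di"]

# One vectorizer per feature set; `analyzer` selects word, char, or
# char_wb tokens (char_wb pads n-grams at word edges with spaces).
vectorizers = [
    TfidfVectorizer(analyzer="word", ngram_range=(1, 3), lowercase=True,
                    max_features=70000),
    TfidfVectorizer(analyzer="char", ngram_range=(2, 3), lowercase=False,
                    max_features=50000),
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4), lowercase=False,
                    max_features=150000),
]
features = [v.fit_transform(texts) for v in vectorizers]

# The per-set matrices feed the individual base classifiers; for a single
# SVM they can simply be concatenated column-wise.
X = hstack(features)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": null
},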
{
"text": "For every feature set we train separate linear one-vs-rest SVM classifiers with the discrete cluster identities as target labels. We then use the distances to the decision boundaries of every classifier for every feature set as a new feature vector for another linear SVM meta-classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.4.2"
},
{
"text": "During training every base classifier is trained via 5-fold cross-validation, and predictions on the heldout fold are used to train the meta-classifier. Figure 1 illustrates the approach, and we refer to Benites et al. (2018a) for a detailed description. During prediction, we usually output the geolocation corresponding to the result of our meta-classifier. However, if a sample is below a certain confidence threshold (see also Section 3.8), we assign it the mean latitude and longitude from the complete training data, instead of the location of the predicted cluster center, so the error would be equally distributed and not skewed.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.4.2"
},
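{
"text": "The cross-validated stacking can be sketched with scikit-learn (a sketch on synthetic data; sizes and the two feature-set stand-ins are placeholders): cross_val_predict yields, for every sample, decision-boundary distances from a model that never saw that sample during training, and these out-of-fold scores are concatenated to train the meta-classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.svm import LinearSVC

# Two stand-ins for different feature sets over the same 200 samples.
X1, y = make_classification(n_samples=200, n_features=20, n_informative=10,
                            n_classes=3, random_state=0)
rng = np.random.default_rng(0)
X2 = X1[:, :15] + 0.1 * rng.standard_normal((200, 15))  # second "view"

meta_features = []
for X in (X1, X2):
    base = LinearSVC()  # linear one-vs-rest SVM
    # 5-fold CV: held-out distances to the decision boundaries.
    scores = cross_val_predict(base, X, y, cv=5, method="decision_function")
    meta_features.append(scores)

meta_X = np.hstack(meta_features)      # shape (200, 2 * n_classes)
meta_clf = LinearSVC().fit(meta_X, y)  # linear SVM meta-classifier
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": null
},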
{
"text": "Our second approach is a language modelling system and is heavily modelled on the HeLI submission to the VarDial 2018 GDI task (Jauhiainen et al., 2018a) . The full method is described in Jauhiainen et al. 2016, to which we refer the interested reader.",
"cite_spans": [
{
"start": 127,
"end": 153,
"text": "(Jauhiainen et al., 2018a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 2: LM",
"sec_num": "3.5"
},
{
"text": "We first created local corpora using the same K-Means clustering procedure as outlined in 3.2. We then create character-level language models for each of the corpora using the scoring procedure defined in Jauhiainen et al. (2018a) : the text is split into words at whitespaces and relative n-gram frequencies are calculated within each word (including the preceding and following space characters). The score associated with an n-gram of a dialect is the negative decadic logarithm of its relative frequency within that dialect's subcorpus, meaning that n-grams with a high relative frequency have low scores.",
"cite_spans": [
{
"start": 205,
"end": 230,
"text": "Jauhiainen et al. (2018a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora and Language Models",
"sec_num": "3.5.1"
},
{
"text": "In order to make a prediction for an unseen input text, a score is calculated for each dialect based on the language models. The text is split into words at whitespaces, and for each word (again including leading and trailing space) the mean of its n-gram scores is calculated. If an n-gram is not present in the model of this dialect, a penalty term is assigned instead. The score of the text is calculated as the mean of its word-level scores, and the dialect with the lowest score is selected as output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction",
"sec_num": "3.5.2"
},
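{
"text": "The scoring and prediction procedure can be sketched in pure Python as follows (helper names are ours; the cutoff and backoff variants mentioned in Section 3.5.4 are omitted):

```python
import math
from collections import Counter

def word_ngrams(word, n):
    padded = f" {word} "  # include leading and trailing space
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def train_lm(texts, n):
    """Score per n-gram: negative decadic log of its relative frequency."""
    counts = Counter(g for t in texts for w in t.split() for g in word_ngrams(w, n))
    total = sum(counts.values())
    return {g: -math.log10(c / total) for g, c in counts.items()}

def score_text(text, lm, n, penalty=5.5):
    """Mean over words of the mean n-gram score; unseen n-grams get a penalty."""
    word_scores = []
    for w in text.split():
        grams = [lm.get(g, penalty) for g in word_ngrams(w, n)]
        word_scores.append(sum(grams) / len(grams))
    return sum(word_scores) / len(word_scores)

def predict(text, lms, n=4):
    """lms maps cluster id -> language model; the lowest score wins."""
    return min(lms, key=lambda c: score_text(text, lms[c], n))
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction",
"sec_num": null
},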
{
"text": "We define the confidence of a prediction in line with Jauhiainen et al. (2018a) as the difference in scores between the second best and the best dialect. For samples that have low confidence, we introduce a voting mechanism where we use the centre of the V highest-confidence clusters as the prediction, whose coordinate is represented by the mean of the V longitudes and the mean of the V latitudes. In section 4.1.2, the value V is represented by v.",
"cite_spans": [
{
"start": 54,
"end": 79,
"text": "Jauhiainen et al. (2018a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Voting Mechanism",
"sec_num": "3.5.3"
},
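{
"text": "The confidence-based voting can be sketched as follows (pure Python; argument names are ours). Scores follow the convention of the previous section: lower is better.

```python
def vote(cluster_scores, centroids, v=3, threshold=0.01):
    """cluster_scores: {cluster_id: score}, lower is better.
    centroids: {cluster_id: (lat, lon)}."""
    ranked = sorted(cluster_scores, key=cluster_scores.get)
    confidence = cluster_scores[ranked[1]] - cluster_scores[ranked[0]]
    if confidence >= threshold or v == 0:
        return centroids[ranked[0]]           # confident: best cluster's centroid
    top = [centroids[c] for c in ranked[:v]]  # low confidence: centre of top V
    lat = sum(p[0] for p in top) / len(top)
    lon = sum(p[1] for p in top) / len(top)
    return (lat, lon)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voting Mechanism",
"sec_num": null
},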
{
"text": "The tunable hyperparameters of this method are: the number of clusters (k), the n-gram order of the language models (n), whether case is preserved in the input to the language models, the penalty term (we assume the same term for all languages) (p), the confidence threshold below which to apply voting, and the number of clusters to use when determining the centre during voting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters and Tuning",
"sec_num": "3.5.4"
},
{
"text": "We briefly experimented with using a maximum number of features per dialect (called \"cutoff\" by Jauhiainen et al. (2018a)) but found no improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters and Tuning",
"sec_num": "3.5.4"
},
{
"text": "We used neither the backoff procedure to lower-order n-grams from Jauhiainen et al. (2016) nor the highly promising semi-supervised language model adaptation (Jauhiainen et al., 2018a) due to lack of time.",
"cite_spans": [
{
"start": 158,
"end": 184,
"text": "(Jauhiainen et al., 2018a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters and Tuning",
"sec_num": "3.5.4"
},
{
"text": "For the larger DE-AT and BCMS datasets, we did not have sufficient time to train and tune SVM-CV. Instead, we used a simple linear SVM classifier for these languages with the feature sets shown in Table 2 . Feature sets 1, 2 and 3 are also used for SVM-CV, corresponding to rows 1, 2 and 5 in 1 word no 1 -3 70000 2 word no 1 -5 70000 3 char yes 2 -3 50000 4 char wb yes 1 -150000 Table 2 : Overview of the different feature sets used by the SVM-Base system.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 2",
"ref_id": null
},
{
"start": 293,
"end": 373,
"text": "1 word no 1 -3 70000 2 word no 1 -5 70000 3 char yes 2 -3 50000 4",
"ref_id": "TABREF1"
},
{
"start": 396,
"end": 403,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "System 3: SVM-Base",
"sec_num": "3.6"
},
{
"text": "We also experimented with a character-wise Convolutional Neural Network (CNN) (Zhang et al., 2015 ), which we did not submit. We include it as a neural baseline to compare our other approaches against. The network was composed of multiple convolutions in parallel with filter size and width of {(128,2), (96,2), (96,4), (64,3), (64,4)} with dropout set at 0.1 and maxpooling. The output of the convolutional layers are subsequently concatenated. Afterwards a 3-layer fully connected network is applied with 100, 100 and 50 neurons per layer, respectively. The activation function on all layers was ReLU, except for the last where softmax was applied. As output, and so as the number of classes, the number of clusters is used, similarly to the approach of SVMs. We used the Adam optimizer (Kingma and Ba, 2014) with the learning rate set to 0.001 and minimizing the binary cross entropy loss. The network is then trained for 100 epochs.",
"cite_spans": [
{
"start": 78,
"end": 97,
"text": "(Zhang et al., 2015",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 4: CNN",
"sec_num": "3.7"
},
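{
"text": "A sketch of this architecture, assuming TensorFlow/Keras (sequence length, vocabulary size, and embedding dimension are placeholders not specified in the text; the loss follows the description):

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, VOCAB, N_CLUSTERS = 280, 128, 35  # placeholder dimensions

inp = layers.Input(shape=(SEQ_LEN,))
emb = layers.Embedding(VOCAB, 32)(inp)  # character embedding (size assumed)

# Parallel convolutions: (filters, kernel width) pairs from the description.
branches = []
for filters, width in [(128, 2), (96, 2), (96, 4), (64, 3), (64, 4)]:
    x = layers.Conv1D(filters, width, activation="relu")(emb)
    x = layers.Dropout(0.1)(x)
    x = layers.GlobalMaxPooling1D()(x)
    branches.append(x)

x = layers.Concatenate()(branches)
for units in (100, 100, 50):  # 3-layer fully connected head
    x = layers.Dense(units, activation="relu")(x)
out = layers.Dense(N_CLUSTERS, activation="softmax")(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy")
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System 4: CNN",
"sec_num": null
},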
{
"text": "We discovered one text in French in the development set and decided to use the language detection library langdetect 4 . If the language is detected as French we set the coordinates to (46.67, 7.0), the center of the French-speaking part of Switzerland. In case the prediction score is very low (below -0.9 for SVM-CV and -0.8 for SVM-Base) we assign the text to the center of the training data with coordinates (47.26, 8.3 ).",
"cite_spans": [
{
"start": 412,
"end": 423,
"text": "(47.26, 8.3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Handling Outliers",
"sec_num": "3.8"
},
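{
"text": "The fallback logic can be sketched as follows (pure Python; the language detector is passed in as a callable so that, e.g., langdetect's detect function can be plugged in; the constants follow the text):

```python
FRENCH_CENTER = (46.67, 7.0)  # center of French-speaking Switzerland
TRAIN_CENTER = (47.26, 8.3)   # center of the CH training data

def resolve(text, score, predicted_coord, detect_lang, threshold=-0.9):
    """detect_lang: callable returning an ISO 639-1 code, e.g. langdetect.detect."""
    if detect_lang(text) == "fr":
        return FRENCH_CENTER
    if score < threshold:  # very low classifier confidence
        return TRAIN_CENTER
    return predicted_coord
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Handling Outliers",
"sec_num": null
},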
{
"text": "In the following, we evaluate the performance of our four systems plus the two simple baseline systems. Since we were primarily focusing on the CH subtaks, we present the detailed analysis for these data. Later on, we briefly present how our systems performed on the other subtasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "One of the most important parameters when using clustering to discretize geographical data, is the number of centroids k for the K-Means algorithm. This determines the upper bound on performance as well as the number of samples per class and the number of classes. Usually, classification performance decreases rapidly with an increasing number of classes, which most probably negatively affects the median distance 5 , the main metric of this competition. We analyzed the reconstruction error with different numbers of clusters on the training set, over ten runs for each setting, i.e. we clustered the locations of the training samples and then calculate the distance between the cluster centroid to the actual location of the sample assigned to this centroid. The results are depicted in Table 3 , where we show median and mean distances for 10, 20, 35, 50, 75 and 100 clusters, along with their variances. We see that the largest relative drop is between 50 and 75 ( 0.79 0.09 =878%), but the difference is almost negligible in terms of geographical dialectal difference. The drop between 35 and 50 is also interesting, although it might be difficult to argue that there are about 50 dialectal hotspots. We chose 35 to use as k parameter for the K-Means algorithm, since it promised the least error for the most generalization capacity, i.e. lower risk of overfitting. Table 5 : Tuning results for LM-CH: second step; best relevant results marked in bold",
"cite_spans": [],
"ref_spans": [
{
"start": 791,
"end": 798,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1373,
"end": 1380,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Optimizing Number of K-Means Centroids on CH data",
"sec_num": "4.1.1"
},
{
"text": "We proceeded in two steps for tuning the parameters of the LM system. First, we searched over the n-gram-level (n \u2208 {4, 5, 6}), number of clusters (c \u2208 {35, 40, 50, 60, 70}), penalty (p \u2208 {5, 5.1, 5.2, . . . , 6}), and case-sensitivity (cs), of which we selected the best configuration. Using this parameter set, we fine-tuned the parameters relating to voting (see Section 3.5.3) in a second step , i.e. the number of voters (v \u2208 {0, 2, 3, 4, 5}) and the voting confidence threshold (vt \u2208 {0.001, 0.01, 0.02, 0.05, 0.1}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LM Parameter Tuning",
"sec_num": "4.1.2"
},
{
"text": "Please refer to Section 3.5.4 for the description of the parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LM Parameter Tuning",
"sec_num": "4.1.2"
},
{
"text": "In Table 4 , we report the results of the first step, showing the ten configurations with the best results in descending order by median distance error. The best-performing n-gram order is 4, which is in line with results obtained by Jauhiainen et al. (2018a) . We can also see that larger numbers of clusters and penalties above 5.4 are beneficial. The best models are not sensitive to case; we hypothesize that this is because lower-casing helps overcome data sparsity. Table 5 shows the results of the second step using n-gram-level of 4 (n=4), cluster size of 60 (k=60), and penalty to 5.5 (p=5.5), with the best ten results by median distance on the development set. We tune the voting-related parameters v and vt. We also tune case-sensitivity cs again, since the voting scenario could equalize the more sparse data. We can see that the voting mechanism significantly improves performance on both development and test set. The most successful configuration for CH uses three voters and a confidence threshold of 0.01, leading to a median distance of 17.66 km on the test set, which corresponds to the fourth best submission for this subtask. We report the results for the various systems with different numbers of clusters in Table 6 . In addition to the systems described in Section 3 we include 2 baselines: Center predicting the geographic center of the training set for every sample, and SVM-Base-Unigram which is a version of SVM-Base using only unigram word features. The parameters of LM are set according to the best setting presented in 4.1.2 and only the number of clusters is varied. CNNs give relatively good results which would score about 10-11th place in the competition. A simple SVM-TF-IDF baseline with Unigram feature extraction would already be among the best 10 places with a median distance of about 20km. Increasing the number of clusters from 10 to 20 makes it better, but then the error distance increases for SVM-Base-Unigram. 
SVM-Base has comparable performance to SVM-Base-Unigram which could point to simple word/features being already good indications of geographic locations.",
"cite_spans": [
{
"start": 234,
"end": 259,
"text": "Jauhiainen et al. (2018a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 472,
"end": 479,
"text": "Table 5",
"ref_id": null
},
{
"start": 1232,
"end": 1239,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "LM Parameter Tuning",
"sec_num": "4.1.2"
},
{
"text": "The LM method (System 2) benefits from a larger number of clusters than the SVM-and CNN based ones, peaking at 50 clusters on the test set and 60 on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LM Parameter Tuning",
"sec_num": "4.1.2"
},
{
"text": "For the SVM-CV system we see a drop of about 2 points compared to SVM-Base. Using the optimum number of clusters we get very close to the winning system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LM Parameter Tuning",
"sec_num": "4.1.2"
},
{
"text": "We can see from Figure 2a that the hotspots around Zurich with the most texts were predicted with good quality. Problematic were the borders where there were regions containing smaller number of texts. For example, the Basel region (top left) was very well predicted, whereas the regions of Schaffhausen (top most) and St. Galler Rheintal (right most) were often wrongly predicted by a large distance.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 25,
"text": "Figure 2a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Geographical Error Analysis of SVM-CV",
"sec_num": null
},
{
"text": "In Figure 2b , we can see the confusion of the largest errors (more than 5 km). We can clearly see a confusion between the region of Bern (left most) and St. Gallen (top right). Also St. Gallen and Schwyz (red spot below in the middle). System 1: SVM-CV: Addendum As pointed out before in Section 4.2, after the competition, we ran System 1 on all sub-tasks. This took for DE-AT roughly 5 days to finish. The prediction quality achieved with 35 clusters yielded a median distance of 167.01 km (first place: 143.3 km) and a mean distance of 193.32 km (first place: 166.64 km).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 12,
"text": "Figure 2b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Geographical Error Analysis of SVM-CV",
"sec_num": null
},
{
"text": "We visualize the results of the better submission which was again SVM-Base. As can be seen in Figure 4a , the main difficulties of the system are in the regions of Eastern Germany and Eastern Austria. In Figure 4b , we can see that texts from these problematic regions tend to be assigned more to the west; but also that many smaller errors are accumulated in the more populous areas along the Rhine.",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 103,
"text": "Figure 4a",
"ref_id": "FIGREF1"
},
{
"start": 204,
"end": 213,
"text": "Figure 4b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Geographical error analysis",
"sec_num": null
},
{
"text": "We presented our approach to the VarDial 2020 SMG shared task, focusing on the submission for the CH subtask. Despite the expected noise, caused by people moving between different regions without adjusting their writing, a meta algorithm on top of SVMs with different n-grams weighted by TF-IDF performs impressively well, particularly for Switzerland (CH subtask). We achieve the second rank in terms of teams and the third by submissions with a median distance error of only 15.93 km. Deep learning approaches combining CNN and K-Means also showed interesting results but are still far behind a simple Unigram-TF-IDF with K-Means.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://jodel.com/ 2 https://twitter.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://scikit-learn.org/stable/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/Mimino666/langdetect 5 We judge the probability very low for the case when a finer-grained division (more clusters, e.g. cluster a is subdivided into subcluster b, c and d), allows a finer resolution, and a misclassification might still decrease the mean distance (e.g. b is right, but d is predicted, however subcluster c cause that the center of a is far distant than the target (which would lie within b)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the task organizers for their support and the reviewers for their detailed and helpful feedback. This research has been funded by the Swiss Innovation Agency project no. 28190.1 PFES-ES and by SpinningBytes AG, Switzerland.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Errors of SVM-CV classifier for CH",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Errors of SVM-CV classifier for CH.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Errors of SVM-CV classifier for CH",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Errors of SVM-CV classifier for CH.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Errors of SVM-Base classifier for BCMS. CH and found the best setting to be identical to CH, except for the absence of case-sensitivity, and using five voters instead of three, which resulted in a development set median distance of 109.86 km. The LM-based approach performed rather poorly in the evaluation, scoring last place out of all submissions with 111.4 km median distance. System 3: SVM-Base The SVM-Base system performed somewhat better. We evaluated different numbers of clusters: 25",
"authors": [
{
"first": "",
"middle": [],
"last": "Errors",
"suffix": ""
},
{
"first": "Bcms",
"middle": [],
"last": "Svm-Base Classifier For",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "35",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Errors of SVM-Base classifier for BCMS. (b) Errors of SVM-Base classifier for BCMS. CH and found the best setting to be identical to CH, except for the absence of case-sensitivity, and using five voters instead of three, which resulted in a development set median distance of 109.86 km. The LM-based approach performed rather poorly in the evaluation, scoring last place out of all submissions with 111.4 km median distance. System 3: SVM-Base The SVM-Base system performed somewhat better. We evaluated different numbers of clusters: 25, 35, 50, 75 and 100, which yielded development set results within 3 km (59.02",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Addendum After the competition, the gold labels were released and we had time to run the SVM-CV system on all sub-tasks. We also calculated the predictions for System",
"authors": [
{
"first": "",
"middle": [],
"last": "Svm-Cv",
"suffix": ""
}
],
"year": null,
"venue": "System",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "System 1: SVM-CV: Addendum After the competition, the gold labels were released and we had time to run the SVM-CV system on all sub-tasks. We also calculated the predictions for System 1 with 35",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "We achieved a better result than the first placed (41.54 km) approach with a median distance of 36.79 km but a worse mean distance of 83",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cluster which took roughly 2 days. We achieved a better result than the first placed (41.54 km) approach with a median distance of 36.79 km but a worse mean distance of 83.08 km (80.89 km for the first place).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An analysis why this system performed better in this dataset in comparison to the other competitors in the other two datasets would be interesting",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "An analysis why this system performed better in this dataset in comparison to the other competitors in the other two datasets would be interesting, but we leave this for future work.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "We can see that areas with many samples, mostly around the capital cities of the respective countries, are predicted accurately (Figure 3a)",
"authors": [],
"year": null,
"venue": "Geographical error analysis Figures 3a and 3b visualise the errors of the SVM-Base system on the BCMS subtask",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geographical error analysis Figures 3a and 3b visualise the errors of the SVM-Base system on the BCMS subtask. We can see that areas with many samples, mostly around the capital cities of the respec- tive countries, are predicted accurately (Figure 3a), but also that there is a strong trend of assigning False Positives to them (Figure 3b).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "LM We used the same parameters of BCMS subtask System 2 for the DE-AT",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DE-AT subtask System 2: LM We used the same parameters of BCMS subtask System 2 for the DE-AT. On the development set, this achieved 229.46 km, while on the test set the result was 217.8 km, 8th place among",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Twist Bytes-German Dialect Identification with Data Mining Optimization",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Benites",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Grubenmann",
"suffix": ""
},
{
"first": "Pius",
"middle": [],
"last": "von D\u00e4niken",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "von Gr\u00fcnigen",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Deriu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Cieliebak",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "218--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Benites, Ralf Grubenmann, Pius von D\u00e4niken, Dirk von Gr\u00fcnigen, Jan Deriu, and Mark Cieliebak. 2018a. Twist Bytes-German Dialect Identification with Data Mining Optimization. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 218-227.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Classifying patent applications with ensemble methods",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Benites",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.04695"
]
},
"num": null,
"urls": [],
"raw_text": "Fernando Benites, Shervin Malmasi, and Marcos Zampieri. 2018b. Classifying patent applications with ensemble methods. arXiv preprint arXiv:1811.04695.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Twistbytes-identification of cuneiform languages and german dialects at vardial 2019",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Benites De Azevedo E Souza",
"suffix": ""
},
{
"first": "Pius",
"middle": [],
"last": "von D\u00e4niken",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Cieliebak",
"suffix": ""
}
],
"year": 2019,
"venue": "6th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "194--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Benites de Azevedo e Souza, Pius von D\u00e4niken, and Mark Cieliebak. 2019. Twistbytes-identification of cuneiform languages and german dialects at vardial 2019. In 6th Workshop on NLP for Similar Languages, Varieties and Dialects, VarDial 2019, Minneapolis, United States, 7 June 2019, pages 194-201. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Twistbytes-hierarchical classification at germeval 2019: walking the fine line (of recall and precision)",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Benites",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.06493"
]
},
"num": null,
"urls": [],
"raw_text": "Fernando Benites. 2019. Twistbytes-hierarchical classification at germeval 2019: walking the fine line (of recall and precision). arXiv preprint arXiv:1908.06493.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An effective approach for geolocation prediction in twitter streams using clustering based discretization",
"authors": [
{
"first": "Nghia",
"middle": [],
"last": "Duong-Trung",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Schilling",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Rego Drumond",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Schmidt-Thieme",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nghia Duong-Trung, Nicolas Schilling, Lucas Rego Drumond, and Lars Schmidt-Thieme. 2017. An effective approach for geolocation prediction in twitter streams using clustering based discretization.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Yves Scherrer, and Marcos Zampieri. 2020. A Report on the VarDial Evaluation Campaign 2020",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "G\u0203man",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Radu",
"middle": ["Tudor"],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Partanen",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Purschke",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Seventh Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihaela G\u0203man, Dirk Hovy, Radu Tudor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Niko Partanen, Christoph Purschke, Yves Scherrer, and Marcos Zampieri. 2020. A Report on the VarDial Evaluation Campaign 2020. In Proceedings of the Seventh Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Text-based twitter user geolocation prediction",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Artif. Int. Res",
"volume": "49",
"issue": "1",
"pages": "451--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2014. Text-based twitter user geolocation prediction. J. Artif. Int. Res., 49(1):451-500, January.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Compilation of a Swiss German dialect corpus and its application to PoS tagging",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "No\u00ebmi",
"middle": [],
"last": "Aepli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "85--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein and No\u00ebmi Aepli. 2014. Compilation of a Swiss German dialect corpus and its application to PoS tagging. In Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects, pages 85-94.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Resource for Natural Language Processing of Swiss German Dialects",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "No\u00ebmi",
"middle": [],
"last": "Aepli",
"suffix": ""
}
],
"year": 2015,
"venue": "GSCL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein and No\u00ebmi Aepli. 2015. A Resource for Natural Language Processing of Swiss German Dialects. In GSCL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Capturing regional variation with distributed place representations and geographic retrofitting",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Purschke",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4383--4394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy and Christoph Purschke. 2018. Capturing regional variation with distributed place representations and geographic retrofitting. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4383-4394, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sinica treebank: Design criteria, annotation guidelines, and on-line interface",
"authors": [
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Feng-Yi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhao-Ming",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Kuang-Yu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Second Workshop on Chinese Language Processing: Held in Conjunction with the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "12",
"issue": "",
"pages": "29--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chu-Ren Huang, Feng-Yi Chen, Keh-Jiann Chen, Zhao-ming Gao, and Kuang-Yu Chen. 2000. Sinica treebank: Design criteria, annotation guidelines, and on-line interface. In Proceedings of the Second Workshop on Chinese Language Processing: Held in Conjunction with the 38th Annual Meeting of the Association for Computational Linguistics -Volume 12, CLPW '00, pages 29-37, Stroudsburg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Heli, a word-based backoff method for language identification",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Sakari Jauhiainen",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Krister Johan Linden",
"suffix": ""
},
{
"first": "Heidi",
"middle": [
"Annika"
],
"last": "Jauhiainen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects VarDial3",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Sakari Jauhiainen, Bo Krister Johan Linden, Heidi Annika Jauhiainen, et al. 2016. Heli, a word-based backoff method for language identification. In Proceedings of the Third Workshop on NLP for Similar Lan- guages, Varieties and Dialects VarDial3, Osaka, Japan, December 12 2016.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Heli-based experiments in swiss german dialect identification",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "254--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Jauhiainen, Heidi Jauhiainen, and Krister Lind\u00e9n. 2018a. Heli-based experiments in swiss german dialect identification. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 254-262.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic language identification in texts: A survey",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.08186"
]
},
"num": null,
"urls": [],
"raw_text": "Tommi Jauhiainen, Marco Lui, Marcos Zampieri, Timothy Baldwin, and Krister Lind\u00e9n. 2018b. Automatic language identification in texts: A survey. arXiv preprint arXiv:1804.08186.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Language and Dialect Identification of Cuneiform Texts",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Tero",
"middle": [],
"last": "Alstola",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.01891"
]
},
"num": null,
"urls": [],
"raw_text": "Tommi Jauhiainen, Heidi Jauhiainen, Tero Alstola, and Krister Lind\u00e9n. 2019. Language and Dialect Identification of Cuneiform Texts. arXiv preprint, arXiv:1903.01891.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": ["P"],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "i'm eating a sandwich in glasgow\": Modeling locations with tweets",
"authors": [
{
"first": "Sheila",
"middle": [],
"last": "Kinsella",
"suffix": ""
},
{
"first": "Vanessa",
"middle": [],
"last": "Murdock",
"suffix": ""
},
{
"first": "Neil O'",
"middle": [],
"last": "Hare",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 3rd International Workshop on Search and Mining User-Generated Contents, SMUC '11",
"volume": "",
"issue": "",
"pages": "61--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheila Kinsella, Vanessa Murdock, and Neil O'Hare. 2011. \"i'm eating a sandwich in glasgow\": Modeling locations with tweets. In Proceedings of the 3rd International Workshop on Search and Mining User-Generated Contents, SMUC '11, page 61-68, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "TweetGeo -a tool for collecting, processing and analysing geo-encoded linguistic data",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
},
{
"first": "Curdin",
"middle": [],
"last": "Derungs",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3412--3421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107, Tanja Samard\u017ei\u0107, and Curdin Derungs. 2016. TweetGeo -a tool for collecting, processing and analysing geo-encoded linguistic data. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3412-3421, Osaka, Japan, December. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Least squares quantization in pcm",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Lloyd",
"suffix": ""
}
],
"year": 1982,
"venue": "IEEE transactions on information theory",
"volume": "28",
"issue": "2",
"pages": "129--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Lloyd. 1982. Least squares quantization in pcm. IEEE transactions on information theory, 28(2):129-137.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A Simple Approach to Author Profiling in MapReduce",
"authors": [
{
"first": "Suraj",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Prasha",
"middle": [],
"last": "Shrestha",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2014,
"venue": "CLEF",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suraj Maharjan, Prasha Shrestha, and Thamar Solorio. 2014. A Simple Approach to Author Profiling in MapRe- duce. In CLEF.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "German Dialect Identification in Interview Transcriptions",
"authors": [
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "164--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shervin Malmasi and Marcos Zampieri. 2017. German Dialect Identification in Interview Transcriptions. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 164-169, Valencia, Spain, April.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The lancaster corpus of mandarin chinese",
"authors": [
{
"first": "A",
"middle": [
"M"
],
"last": "Mcenery",
"suffix": ""
},
{
"first": "R",
"middle": [
"Z"
],
"last": "Xiao",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. M. McEnery and R. Z. Xiao. 2003. The lancaster corpus of mandarin chinese., 12.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Continuous representation of location for geolocation and lexical dialectology using mixture density networks",
"authors": [
{
"first": "Afshin",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "167--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afshin Rahimi, Timothy Baldwin, and Trevor Cohn. 2017a. Continuous representation of location for geolocation and lexical dialectology using mixture density networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 167-176, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A neural model for user geolocation and lexical dialectology",
"authors": [
{
"first": "Afshin",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "209--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2017b. A neural model for user geolocation and lexical dialectology. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 209-216, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "ArchiMob -a corpus of spoken Swiss German",
"authors": [
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Elvira",
"middle": [],
"last": "Glaser",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanja Samard\u017ei\u0107, Yves Scherrer, and Elvira Glaser. 2016. ArchiMob -a corpus of spoken Swiss German. In Proceedings of LREC.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Language Identification of Short Text Segments with N-gram Models",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Vatanen",
"suffix": ""
},
{
"first": "Jaakko",
"middle": [
"J"
],
"last": "V\u00e4yrynen",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Vatanen, Jaakko J. V\u00e4yrynen, and Sami Virpioja. 2010. Language Identification of Short Text Segments with N-gram Models. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, editors, Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Hierarchical discriminative classification for text-based geolocation",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Wing",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "336--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Wing and Jason Baldridge. 2014. Hierarchical discriminative classification for text-based geolocation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 336-348.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Findings of the VarDial Evaluation Campaign",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "No\u00ebmi",
"middle": [],
"last": "Aepli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Nikola Ljube\u0161i\u0107, Preslav Nakov, Ahmed Ali, J\u00f6rg Tiedemann, Yves Scherrer, and No\u00ebmi Aepli. 2017. Findings of the VarDial Evaluation Campaign 2017. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), Valencia, Spain.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Language Identification and Morphosyntactic Tagging: The Second VarDial Evaluation Campaign",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "Suwon",
"middle": [],
"last": "Shuon",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Grondelaers",
"suffix": ""
},
{
"first": "Nelleke",
"middle": [],
"last": "Oostdijk",
"suffix": ""
},
{
"first": "Antal",
"middle": [],
"last": "van den Bosch",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Bornini",
"middle": [],
"last": "Lahiri",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Ahmed Ali, Suwon Shuon, James Glass, Yves Scherrer, Tanja Samard\u017ei\u0107, Nikola Ljube\u0161i\u0107, J\u00f6rg Tiedemann, Chris van der Lee, Stefan Grondelaers, Nelleke Oostdijk, Antal van den Bosch, Ritesh Kumar, Bornini Lahiri, and Mayank Jain. 2018. Language Identification and Morphosyntactic Tagging: The Second VarDial Evaluation Campaign. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), Santa Fe, USA.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A Report on the Third VarDial Evaluation Campaign",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Klyueva",
"suffix": ""
},
{
"first": "Tung-Le",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Yves Scherrer, Tanja Samard\u017ei\u0107, Francis Tyers, Miikka Silfverberg, Natalia Klyueva, Tung-Le Pan, Chu-Ren Huang, Radu Tudor Ionescu, Andrei Butnaru, and Tommi Jauhiainen. 2019. A Report on the Third VarDial Evaluation Campaign. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial). Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [
"Jake"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classifi- cation. CoRR, abs/1509.01626.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Overview of the SVM-CV classifier",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Errors of SVM-Base classifier for DE-AT. (b) Errors of SVM-Base classifier for DE-AT. System 3: SVM-Base Due to lack of time we evaluated only 25, 35 and 50 clusters, of which 25 clusters performed the best on the development set, resulting in a median distance of 200.81 km. The test set result of 205.81 km ranked sixth, markedly behind the top three submissions.",
"num": null,
"uris": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>, while set</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Nr. k</td><td>p</td><td>med-dv mean-dv</td></tr><tr><td>1</td><td colspan=\"3\">60 5.5 18.33</td><td>27.27</td></tr><tr><td>2</td><td colspan=\"3\">60 5.6 18.33</td><td>27.30</td></tr><tr><td>3</td><td colspan=\"3\">70 5.8 18.33</td><td>27.42</td></tr><tr><td>4</td><td colspan=\"3\">70 5.6 18.42</td><td>27.52</td></tr><tr><td>5</td><td colspan=\"3\">70 5.9 18.49</td><td>27.37</td></tr><tr><td>6</td><td colspan=\"3\">70 5.5 18.64</td><td>27.61</td></tr><tr><td>7</td><td colspan=\"3\">70 5.7 18.64</td><td>27.64</td></tr><tr><td>8</td><td colspan=\"3\">60 5.8 18.70</td><td>27.47</td></tr><tr><td>9</td><td colspan=\"3\">60 5.9 18.70</td><td>27.41</td></tr><tr><td colspan=\"4\">10 60 5.4 18.73</td><td>27.28</td></tr><tr><td>: Reconstruction error depending on the</td><td/><td/></tr><tr><td>number of clusters for CH-subtask training data,</td><td/><td/></tr><tr><td>for 10 runs</td><td/><td/></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table><tr><td/><td>: Tuning results for LM-CH: first</td></tr><tr><td/><td>step, n=4, cs=no</td></tr><tr><td>Dev</td><td>Test</td></tr><tr><td colspan=\"2\">Nr. cs v vt median mean median mean</td></tr><tr><td colspan=\"2\">no 0 n/a 18.33 27.27 19.05 27.97</td></tr><tr><td colspan=\"2\">1 yes 3 0.01 17.17 26.06 17.66 26.21</td></tr><tr><td colspan=\"2\">2 yes 3 0.02 17.30 25.76 17.75 25.79</td></tr><tr><td colspan=\"2\">3 no 3 0.01 17.41 25.87 17.84 26.44</td></tr><tr><td colspan=\"2\">4 no 3 0.02 17.42 25.33 17.56 25.77</td></tr><tr><td colspan=\"2\">5 yes 2 0.01 17.47 26.38 18.35 26.63</td></tr><tr><td colspan=\"2\">6 yes 2 0.02 17.49 26.09 18.44 26.41</td></tr><tr><td colspan=\"2\">7 no 4 0.01 17.53 25.81 18.07 26.30</td></tr><tr><td colspan=\"2\">8 yes 4 0.01 17.78 26.16 17.87 26.15</td></tr><tr><td colspan=\"2\">9 no 4 0.02 17.81 25.32 18.04 25.55</td></tr><tr><td colspan=\"2\">10 yes 4 0.02 18.01 25.87 18.24 25.74</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF7": {
"html": null,
"content": "<table/>",
"text": "Results for CH subtask for different systems on development and test sets 4.1.3 Comparison of the Different Systems on the CH subtask",
"type_str": "table",
"num": null
}
}
}
}