{
"paper_id": "N10-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:50:18.896044Z"
},
"title": "Language Identification: The Long and the Short of the Matter",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne",
"location": {
"postCode": "3010",
"settlement": "VIC",
"country": "Australia"
}
},
"email": ""
},
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne",
"location": {
"postCode": "3010",
"settlement": "VIC",
"country": "Australia"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Language identification is the task of identifying the language a given document is written in. This paper describes a detailed examination of what models perform best under different conditions, based on experiments across three separate datasets and a range of tokenisation strategies. We demonstrate that the task becomes increasingly difficult as we increase the number of languages, reduce the amount of training data and reduce the length of documents. We also show that it is possible to perform language identification without having to perform explicit character encoding detection.",
"pdf_parse": {
"paper_id": "N10-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "Language identification is the task of identifying the language a given document is written in. This paper describes a detailed examination of what models perform best under different conditions, based on experiments across three separate datasets and a range of tokenisation strategies. We demonstrate that the task becomes increasingly difficult as we increase the number of languages, reduce the amount of training data and reduce the length of documents. We also show that it is possible to perform language identification without having to perform explicit character encoding detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the growth of the worldwide web, ever-increasing numbers of documents have become available, in more and more languages. This growth has been a double-edged sword, however, in that content in a given language has become more prevalent but increasingly hard to find, due to the web's sheer size and diversity of content. While the majority of (X)HTML documents declare their character encoding, only a tiny minority specify what language they are written in, despite support for language declaration existing in the various (X)HTML standards. 1 (1: http://dev.opera.com/articles/view/mama-head-structure/) Additionally, a single encoding can generally be used to render a large number of languages, such that the document encoding at best filters out a subset of languages which are incompatible with the given encoding, rather than disambiguating the source language. Given this, an automatic means of determining the source language of web documents is crucial for web aggregators of various types.",
"cite_spans": [
{
"start": 546,
"end": 547,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is a widespread misconception that language identification is a \"solved task\", generally as a result of isolated experiments over homogeneous datasets with small numbers of languages (Xia et al., 2009). Part of the motivation for this paper is to draw attention to the fact that, as a field, we are still a long way off perfect language identification of web documents, as evaluated under realistic conditions.",
"cite_spans": [
{
"start": 188,
"end": 205,
"text": "Xia et al., 2009)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we describe experiments on language identification of web documents, focusing on the broad question of what combination of tokenisation strategy and classification model achieves the best overall performance. We additionally evaluate the impact of the volume of training data and the test document length on the accuracy of language identification, and investigate the interaction between character encoding detection and language identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One assumption we make in this research, following standard assumptions made in the field, is that all documents are monolingual. This is clearly an unrealistic assumption when dealing with general web documents, and we plan to return to investigate language identification over multilingual documents in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions in this paper are the demonstration that: (a) language identification is trivial over datasets with smaller numbers of languages and approximately even amounts of training data per language, but considerably harder over datasets with larger numbers of languages and more skew in the amount of training data per language; (b) byte-based tokenisation without character encoding detection is superior to codepoint-based tokenisation with character encoding detection; and (c) simple cosine similarity-based nearest-neighbour classification is equal to or better than models such as support vector machines and naive Bayes over the language identification task. We also develop datasets to facilitate standardised evaluation of language identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Language identification was arguably established as a task by Gold (1967) , who construed it as a closed class problem: given data in each of a predefined set of possible languages, human subjects were asked to classify the language of a given test document. It wasn't until the 1990s, however, that the task was popularised as a text categorisation task.",
"cite_spans": [
{
"start": 62,
"end": 73,
"text": "Gold (1967)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background Research",
"sec_num": "2"
},
{
"text": "The text categorisation approach to language identification applies a standard supervised classification framework to the task. Perhaps the best-known such model is that of Cavnar and Trenkle (1994), as popularised in the textcat tool. 2 The method uses a per-language character frequency model, and classifies documents via their relative \"out of place\" distance from each language (see Section 5.1). Variants on this basic method include Bayesian models for character sequence prediction (Dunning, 1994), dot products of word frequency vectors (Darnashek, 1995) and information-theoretic measures of document similarity (Aslam and Frost, 2003; Martins and Silva, 2005). More recently, support vector machines (SVMs) and kernel methods have been applied to the task of language identification with success (Teytaud and Jalam, 2001; Lodhi et al., 2002; Kruengkrai et al., 2005), and Markov logic has been used for joint inference in contexts where there are multiple evidence sources (Xia et al., 2009).",
"cite_spans": [
{
"start": 172,
"end": 197,
"text": "Cavnar and Trenkle (1994)",
"ref_id": "BIBREF3"
},
{
"start": 236,
"end": 237,
"text": "2",
"ref_id": null
},
{
"start": 490,
"end": 505,
"text": "(Dunning, 1994)",
"ref_id": "BIBREF6"
},
{
"start": 547,
"end": 564,
"text": "(Darnashek, 1995)",
"ref_id": "BIBREF4"
},
{
"start": 622,
"end": 645,
"text": "(Aslam and Frost, 2003;",
"ref_id": "BIBREF1"
},
{
"start": 646,
"end": 670,
"text": "Martins and Silva, 2005)",
"ref_id": "BIBREF18"
},
{
"start": 813,
"end": 838,
"text": "(Teytaud and Jalam, 2001;",
"ref_id": "BIBREF21"
},
{
"start": 839,
"end": 858,
"text": "Lodhi et al., 2002;",
"ref_id": "BIBREF16"
},
{
"start": 859,
"end": 883,
"text": "Kruengkrai et al., 2005)",
"ref_id": "BIBREF14"
},
{
"start": 993,
"end": 1011,
"text": "(Xia et al., 2009)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background Research",
"sec_num": "2"
},
{
"text": "Language identification has also been carried out via linguistically motivated models. Johnson (1993) used a list of stop words from different languages to identify the language of a given document, choosing the language with the highest stop word overlap with the document. Grefenstette (1995) used word and part of speech (POS) correlation to determine if two text samples were from the same or different languages. Giguet (1995) developed a cross-language tokenisation model and used it to identify the language of a given document based on its tokenisation similarity with training data. Dueire Lins and Gon\u00e7alves (2004) considered the use of syntactically-derived closed grammaticalclass models, matching syntactic structure rather than words or character sequences.",
"cite_spans": [
{
"start": 87,
"end": 101,
"text": "Johnson (1993)",
"ref_id": "BIBREF13"
},
{
"start": 275,
"end": 294,
"text": "Grefenstette (1995)",
"ref_id": "BIBREF9"
},
{
"start": 418,
"end": 431,
"text": "Giguet (1995)",
"ref_id": "BIBREF7"
},
{
"start": 599,
"end": 624,
"text": "Lins and Gon\u00e7alves (2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background Research",
"sec_num": "2"
},
{
"text": "The observant reader will have noticed that some of the above approaches make use of notions such as \"word\", typically based on the naive assumption that the language uses white space to delimit words. These approaches are appropriate in contexts where there is a guarantee of a document being in one of a select set of languages where words are space-delimited, or where manual segmentation has been performed (e.g. interlinear glossed text). However, we are interested in language identification of web documents, which can be in any language, including languages that do not overtly mark word boundaries, such as Japanese, Chinese and Thai; while relatively few languages fall into this category, they are among the most populous web languages and therefore an important consideration. Therefore, approaches that assume a language is space-delimited are clearly not suitable for our purposes. Equally, approaches which make assumptions about the availability of particular resources for each language to be identified (e.g. POS taggers, or the existence of precompiled stop word lists) cannot be used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background Research",
"sec_num": "2"
},
{
"text": "Language identification has been applied in a number of contexts, the most immediate application being in multilingual text retrieval, where retrieval results are generally superior if the language of the query is known, and the search is restricted to only those documents predicted to be in that language (McNamee and Mayfield, 2004). It can also be used to \"word spot\" foreign language terms in multilingual documents, e.g. to improve parsing performance (Alex et al., 2007), or for linguistic corpus creation purposes (Xia et al., 2009; Xia and Lewis, 2009).",
"cite_spans": [
{
"start": 307,
"end": 335,
"text": "(McNamee and Mayfield, 2004)",
"ref_id": "BIBREF20"
},
{
"start": 459,
"end": 478,
"text": "(Alex et al., 2007)",
"ref_id": "BIBREF0"
},
{
"start": 524,
"end": 541,
"text": "Xia et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 542,
"end": 562,
"text": "Xia and Lewis, 2009)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background Research",
"sec_num": "2"
},
{
"text": "In the experiments reported in this paper, we employ three novel datasets, with differing properties relevant to language identification research: EUROGOV: longer documents, all in a single encoding, spread evenly across a relatively small number (10) of Western European languages; this dataset is comparable to the datasets conventionally used in language identification research. As the name would suggest, the documents were sourced from the EuroGOV document collection, as used in the 2005 WebCLEF task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "TCL: a larger number of languages (60) across a wider range of language families, with shorter documents and a range of character encodings (12). The collection was manually sourced by the Thai Computational Linguistics Laboratory (TCL) in 2005 from online news sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "WIKIPEDIA: a slightly larger number of languages again (67), a single encoding, and shorter documents; the distribution of languages is intended to approximate that of the actual web. This collection was automatically constructed by taking the dumps of all versions of Wikipedia with 1000 or more documents in non-constructed languages, and randomly selecting documents from them in a bias-preserving manner (i.e. preserving the document distribution in the full collection); this is intended to represent the document language bias observed on the web. All three corpora are available on request.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "We outline the characteristics of the three datasets in Table 1 . We further detail the language distribution in Figure 1 , using a constant vector of languages for all three datasets, based on the order of languages in the WIKIPEDIA dataset (in descending order of documents per language). Of note are the contrasting language distributions between the three datasets, in terms of both the languages represented and the relative skew of documents per language. In the following sections, we provide details of the corpus compilation and document sampling method for each dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 1",
"ref_id": null
},
{
"start": 113,
"end": 121,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "As we are interested in performing language identification over arbitrary web documents, we require a language-neutral document representation which does not make artificial assumptions about the source language of the document. Separately, there is the question of whether it is necessary to determine the character encoding of the document in order to extract character sequences, or whether the raw byte stream is sufficient. To explore this question, we experiment with two document representations: (1) byte n-grams, and (2) codepoint n-grams. In both cases, a document is represented as a feature vector of token counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Representation",
"sec_num": "4"
},
{
"text": "Byte n-grams can be extracted directly without explicit encoding detection. Codepoint n-grams, on the other hand, require that we know the character encoding of the document in order to perform tokenisation. Additionally, they should be based on a common encoding to prevent: (a) over-fragmenting the feature space (e.g. ending up with discrete feature spaces for euc-jp, s-jis and utf-8 in the case of Japanese); and (b) spurious matches between encodings (e.g. Japanese hiragana and Korean hangul mapping onto the same codepoint in euc-jp and euc-kr, respectively). We use unicode as the common encoding for all documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Representation",
"sec_num": "4"
},
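The contrast between the two representations can be sketched as follows; this is an illustrative reimplementation (the function names are ours, not from the paper), using Python bytes for byte n-grams and str for codepoint n-grams.

```python
from collections import Counter

def byte_ngrams(raw, n=2):
    # Byte n-grams: extracted directly from the raw stream, no encoding detection.
    return Counter(raw[i:i + n] for i in range(len(raw) - n + 1))

def codepoint_ngrams(text, n=2):
    # Codepoint n-grams: the document must first be decoded to unicode.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

doc = "日本語"                # three codepoints
raw = doc.encode("utf-8")    # nine bytes in UTF-8

print(len(codepoint_ngrams(doc)))  # 2 distinct codepoint bigrams
print(len(byte_ngrams(raw)))       # 8 distinct byte bigrams: bytes fragment each character
```

Both functions return a token-count feature vector, as described above; for multi-byte encodings, the byte representation implicitly reflects the character encoding itself, which is what allows classification without explicit encoding detection.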
{
"text": "In practice, character encoding detection is an issue only for TCL, as the other two datasets are in a single encoding. Where a character encoding was provided for a document in TCL and it was possible to transcode the document to unicode based on that encoding, we used the encoding information. In cases where a unique encoding was not provided, we used an encoding detection library based on the Mozilla browser. 3 (3: http://chardet.feedparser.org/) Having disambiguated the encoding for each document, we transcoded it into unicode.",
"cite_spans": [
{
"start": 416,
"end": 417,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document Representation",
"sec_num": "4"
},
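A sketch of the transcoding step: prefer the declared encoding when it decodes cleanly, otherwise fall back. Note this is a hypothetical reconstruction — the paper used a Mozilla-derived detection library (chardet) for undeclared documents, whereas this sketch simply tries a fixed candidate list.

```python
def to_unicode(raw, declared=None):
    """Transcode a document to unicode, preferring its declared encoding.

    The candidate-list fallback below stands in for proper encoding
    detection (the paper used chardet for undeclared documents)."""
    if declared:
        try:
            return raw.decode(declared)
        except (LookupError, UnicodeDecodeError):
            pass  # declared encoding unknown to Python, or inconsistent with the bytes
    for candidate in ("utf-8", "euc-jp", "euc-kr", "tis-620"):
        try:
            return raw.decode(candidate)
        except UnicodeDecodeError:
            continue
    return raw.decode("latin-1")  # last resort: latin-1 decodes any byte sequence

print(to_unicode("ภาษาไทย".encode("tis-620"), declared="tis-620"))
```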
{
"text": "In our experiments we use a number of different language identification models, as outlined below. We first describe the nearest-neighbour and nearestprototype models, and a selection of distance and similarity metrics combined with each. We then present three standalone text categorisation models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5"
},
{
"text": "The 1-nearest-neighbour (1NN) model is a common classification technique, whereby a test document D is classified based on the language of the closest training document D i (with language l(D i )), as determined by a given distance or similarity metric. In nearest-neighbour models, each training document is represented as a single instance, meaning that the computational cost of classifying a test document is proportional to the number of training documents. A related model which aims to reduce this cost is nearest-prototype (AM), where each language is represented as a single instance, by merging all of the training instances for that language into a single centroid via the arithmetic mean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
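The nearest-prototype reduction can be sketched as below: each language's training vectors are merged into one centroid by the arithmetic mean (the function name and toy vectors are ours).

```python
from collections import Counter

def prototype(vectors):
    """Arithmetic-mean centroid of a language's training count vectors."""
    total = Counter()
    for vec in vectors:
        total.update(vec)
    return {t: c / len(vectors) for t, c in total.items()}

# toy example: two training documents as bigram count vectors
d1 = Counter({"th": 2, "he": 1})
d2 = Counter({"th": 1, "er": 1})
print(prototype([d1, d2]))  # {'th': 1.5, 'he': 0.5, 'er': 0.5}
```

Classification then compares a test vector against one centroid per language instead of every training document, which is where the speedup over nearest-neighbour comes from.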
{
"text": "For both nearest-neighbour and nearest-prototype methods, we experimented with three similarity and distance measures in this research:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
{
"text": "Cosine similarity (COS): the cosine of the angle between two feature vectors, as measured by the dot product of the two vectors, normalised to unit length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
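A minimal implementation of COS over sparse count vectors (dicts of token counts, as elsewhere in our sketches):

```python
import math

def cos_sim(x, y):
    """Cosine of the angle between two sparse count vectors:
    dot product divided by the product of the vector lengths."""
    dot = sum(v * y.get(t, 0) for t, v in x.items())
    norm = (math.sqrt(sum(v * v for v in x.values()))
            * math.sqrt(sum(v * v for v in y.values())))
    return dot / norm if norm else 0.0

a = {"ab": 3, "bc": 4}
print(cos_sim(a, a))          # 1.0: identical direction
print(cos_sim(a, {"cd": 1}))  # 0.0: no shared terms
```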
{
"text": "Skew divergence (SKEW): a variant of Kullback-Leibler divergence, whereby the second distribution (y) is smoothed by linear interpolation with the first (x) using a smoothing factor \u03b1 (Lee, 2001):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
{
"text": "s_{\\alpha}(x, y) = D(x \\| \\alpha y + (1 - \\alpha) x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
{
"text": "where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
{
"text": "D(x \\| y) = \\sum_i x_i (\\log_2 x_i - \\log_2 y_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
{
"text": "In all our experiments, we set \u03b1 to 0.99.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
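The definitions above can be combined into a short sketch (distributions are dicts over a shared vocabulary; smoothing with x guarantees the mixed distribution has support wherever x does, so the KL term is always defined):

```python
import math

def skew_divergence(x, y, alpha=0.99):
    """s_alpha(x, y) = KL(x || alpha*y + (1 - alpha)*x), per Lee (2001)."""
    total = 0.0
    for t, px in x.items():
        if px == 0.0:
            continue  # 0 * log(0) terms contribute nothing
        mixed = alpha * y.get(t, 0.0) + (1 - alpha) * px
        total += px * (math.log2(px) - math.log2(mixed))
    return total

p = {"a": 0.5, "b": 0.5}
q = {"a": 1.0}
print(skew_divergence(p, p))               # essentially 0: no divergence from itself
print(skew_divergence(q, {"b": 1.0}) > 0)  # True: disjoint distributions diverge
```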
{
"text": "Out-of-place (OOP): a ranklist-based distance metric, where the distance between two documents is calculated as (Cavnar and Trenkle, 1994) :",
"cite_spans": [
{
"start": 112,
"end": 138,
"text": "(Cavnar and Trenkle, 1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
{
"text": "\\mathrm{oop}(D_x, D_y) = \\sum_{t \\in D_x \\cup D_y} |R_{D_x}(t) - R_{D_y}(t)|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
{
"text": "R_D(t) is the rank of term t in document D, based on the descending order of frequency in document D; terms not occurring in document D are conventionally given the rank 1 + \\max_i R_D(t_i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest-Neighbour and Nearest-Prototype Models",
"sec_num": "5.1"
},
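A sketch of the OOP metric: build each document's frequency rank list, then sum absolute rank differences over the union vocabulary, with the conventional out-of-vocabulary rank described above (function names are ours).

```python
def rank_list(counts):
    """Map each term to its rank (1 = most frequent) under descending frequency."""
    ordered = sorted(counts, key=lambda t: -counts[t])
    return {t: r for r, t in enumerate(ordered, start=1)}

def oop(x_counts, y_counts):
    """Out-of-place distance (Cavnar and Trenkle, 1994): terms unseen in a
    document get rank 1 + the maximum rank of that document."""
    rx, ry = rank_list(x_counts), rank_list(y_counts)
    oov_x, oov_y = len(rx) + 1, len(ry) + 1
    return sum(abs(rx.get(t, oov_x) - ry.get(t, oov_y))
               for t in set(x_counts) | set(y_counts))

x = {"th": 5, "he": 3, "er": 1}
y = {"he": 4, "th": 2}
print(oop(x, y))  # 2: 'th' and 'he' swap ranks; 'er' lands at y's OOV rank
```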
{
"text": "Naive Bayes is a popular text classification model, due to it being lightweight, robust and easy to update. The language of test document D is predicted by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayes (NB)",
"sec_num": "5.2"
},
{
"text": "\\hat{l}(D) = \\arg\\max_{l_i \\in L} P(l_i) \\prod_{j=1}^{|V|} \\frac{P(t_j | l_i)^{N_{D,t_j}}}{N_{D,t_j}!}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayes (NB)",
"sec_num": "5.2"
},
{
"text": "where L is the set of languages in the training data, N D,t j is the frequency of the jth term in D, V is the set of all terms, and:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayes (NB)",
"sec_num": "5.2"
},
{
"text": "P(t | l_i) = \\frac{1 + \\sum_{k=1}^{|D|} N_{k,t} P(l_i | D_k)}{|V| + \\sum_{j=1}^{|V|} \\sum_{k=1}^{|D|} N_{k,t_j} P(l_i | D_k)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayes (NB)",
"sec_num": "5.2"
},
{
"text": "In this research, we use the rainbow implementation of multinomial naive Bayes (McCallum, 1996).",
"cite_spans": [
{
"start": 80,
"end": 96,
"text": "(McCallum, 1996)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayes (NB)",
"sec_num": "5.2"
},
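A minimal sketch of the decision rule above (not the rainbow implementation): Laplace-smoothed multinomial naive Bayes with hard 0/1 values for P(l_i|D_k), working in log space and dropping the 1/N_{D,t_j}! coefficient, which is constant across languages and so drops out of the argmax.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (token_counts, language). Returns raw class counts
    (proportional to the prior) and smoothed term probabilities P(t|l)."""
    prior = Counter(lang for _, lang in docs)
    totals = defaultdict(Counter)
    vocab = set()
    for counts, lang in docs:
        totals[lang].update(counts)
        vocab.update(counts)
    cond = {lang: {t: (1 + cnt[t]) / (len(vocab) + sum(cnt.values()))
                   for t in vocab}
            for lang, cnt in totals.items()}
    return prior, cond, vocab

def predict_nb(counts, prior, cond, vocab):
    """argmax_l P(l) * prod_j P(t_j|l)^N_j in log space; terms outside the
    training vocabulary are skipped. Raw class counts stand in for P(l):
    the normaliser is likewise constant across languages."""
    def score(lang):
        return math.log(prior[lang]) + sum(
            n * math.log(cond[lang][t]) for t, n in counts.items() if t in vocab)
    return max(prior, key=score)

train = [(Counter("the cat".split()), "en"),
         (Counter("le chat".split()), "fr")]
prior, cond, vocab = train_nb(train)
print(predict_nb(Counter("the the cat".split()), prior, cond, vocab))  # en
```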
{
"text": "Support vector machines (SVMs) are one of the most popular methods for text classification, largely because they can automatically weight large numbers of features, capturing feature interactions in the process (Joachims, 1998; Manning et al., 2008). The basic principle underlying SVMs is to maximise the margin between training instances and the calculated decision boundary, based on structural risk minimisation (Vapnik, 1995).",
"cite_spans": [
{
"start": 211,
"end": 227,
"text": "(Joachims, 1998;",
"ref_id": "BIBREF12"
},
{
"start": 228,
"end": 249,
"text": "Manning et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 416,
"end": 430,
"text": "(Vapnik, 1995)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machines (SVM)",
"sec_num": "5.3"
},
{
"text": "In this work, we have made use of bsvm, 4 an implementation of SVMs with multiclass classification support (Hsu et al., 2008) . We only report results for multi-class bound-constrained support vector machines with linear kernels, as they were found to perform best over our data.",
"cite_spans": [
{
"start": 107,
"end": 125,
"text": "(Hsu et al., 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machines (SVM)",
"sec_num": "5.3"
},
{
"text": "We carry out experiments over the cross-product of the options described above (dataset, tokenisation strategy, n-gram order and model). We evaluate the models using micro-averaged precision (P \u00b5 ), recall (R \u00b5 ) and F-score (F \u00b5 ), as well as macro-averaged precision (P M ), recall (R M ) and F-score (F M ). The micro-averaged scores indicate the average performance per document; as we always make a unique prediction per document, the micro-averaged precision, recall and F-score are always identical (as is the classification accuracy). The macro-averaged scores, on the other hand, indicate the average performance per language. In each case, we average the precision, recall and F-score across the 10 folds of cross-validation. 6 As a baseline, we use a majority class, or ZeroR, classifier (ZEROR), which assigns the language with the highest prior in the training data to each of the test documents.",
"cite_spans": [
{
"start": 693,
"end": 694,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Methodology",
"sec_num": "6"
},
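The micro/macro distinction can be made concrete with a small sketch over single-label predictions (illustrative code, not the paper's evaluation harness):

```python
from collections import Counter

def micro_macro_f(gold, pred):
    """Micro- and macro-averaged F-score for single-label predictions.
    With exactly one prediction per document, micro P = R = F = accuracy."""
    labels = set(gold) | set(pred)
    tp = Counter(g for g, p in zip(gold, pred) if g == p)
    per_class_f = []
    for l in labels:
        p_den = sum(1 for p in pred if p == l)   # predicted as l
        r_den = sum(1 for g in gold if g == l)   # truly l
        prec = tp[l] / p_den if p_den else 0.0
        rec = tp[l] / r_den if r_den else 0.0
        per_class_f.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    micro = sum(tp.values()) / len(gold)     # per-document average
    macro = sum(per_class_f) / len(labels)   # per-language average
    return micro, macro

gold = ["en", "en", "en", "fr"]
pred = ["en", "en", "en", "en"]
micro, macro = micro_macro_f(gold, pred)
print(micro, macro)  # 0.75 vs ~0.43: macro punishes ignoring the minority language
```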
{
"text": "In our experiments, we first compare the different models for fixed n-gram order, then come back to vary the n-gram order. Subsequently, we examine the relative performance of the different models on test documents of differing lengths, and finally look into the impact of the amount of training data for a given language on the performance for that language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "First, we present the results for each of the classifiers in Tables 2-4, based on byte or codepoint tokenisation and bigrams. In each case, we present the best result in each column in bold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for the Different Models and Tokenisation Strategies",
"sec_num": "7.1"
},
{
"text": "The relative performance over EUROGOV and TCL is roughly comparable for all methods barring SKEW 1NN , with near-perfect scores over all 6 evaluation metrics. SKEW 1NN is near-perfect over EUROGOV and TCL, but drops to baseline levels over WIKIPEDIA; we return to discuss this effect in Section 7.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for the Different Models and Tokenisation Strategies",
"sec_num": "7.1"
},
{
"text": "In the case of EUROGOV, the near-perfect results are in line with our expectations for the dataset, based on its characteristics and results reported for comparable datasets. The results for WIKIPEDIA, however, fall off considerably, with the best model achieving an F M of .671 and F \u00b5 of .869. The nearest-neighbour models outperform the corresponding nearest-prototype models to varying degrees, with the one exception of SKEW 1NN over WIKIPEDIA. The nearest-prototype classifiers were certainly faster than the nearest-neighbour classifiers, by roughly a factor of 10, but this speedup is more than outweighed by the drop in classification performance. With the exception of SKEW 1NN over WIKIPEDIA, all methods were well above the baselines for all three datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for the Different Models and Tokenisation Strategies",
"sec_num": "7.1"
},
{
"text": "The two methods which perform consistently well at this point are COS 1NN and SVM, with COS 1NN holding up particularly well under micro-averaged F-score while NB drops away over WIKIPEDIA, the most skewed dataset; this is due to the biasing effect of the prior in NB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for the Different Models and Tokenisation Strategies",
"sec_num": "7.1"
},
{
"text": "Looking to the impact of byte- vs. codepoint-based tokenisation on classifier performance over the three datasets, we find that, overall, bytes outperform codepoints. This is most notable for TCL and WIKIPEDIA, and the SKEW 1NN and NB models. Given this result, we present only results for byte-based tokenisation in the remainder of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for the Different Models and Tokenisation Strategies",
"sec_num": "7.1"
},
{
"text": "Table 4: Results for byte vs. codepoint (bigram) tokenisation over WIKIPEDIA",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 13,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The results for byte tokenisation of TCL are particularly noteworthy. The transcoding into unicode and use of codepoints, if anything, hurts performance, suggesting that implicit character encoding detection based on byte tokenisation is the best approach: it is more accurate, and it simplifies the system by removing the need to perform encoding detection prior to language identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We present results with byte unigrams, bigrams and trigrams in Table 5 for WIKIPEDIA. 7 We omit results for the other two datasets, as the overall trend is the same as for WIKIPEDIA, with lessened relative differences between n-gram orders due to the relative simplicity of the respective classification tasks.",
"cite_spans": [
{
"start": 86,
"end": 87,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for Differing n-gram Sizes",
"sec_num": "7.2"
},
{
"text": "SKEW 1NN is markedly different to the other methods in achieving the best performance with unigrams, moving from the worst-performing method by far to one of the best-performing methods. This is the result of the interaction between data sparseness and heavy-handed smoothing with the \u03b1 constant. Rather than using a constant \u03b1 value for all n-gram orders, it may be better to parameterise it using an exponential scale such as \u03b1 = 1 \u2212 \u03b2^n (with, e.g., \u03b2 = 0.01), based on the n-gram order. We leave this for future research. (Table 5: Results for different n-gram orders over WIKIPEDIA.) For most methods, bigrams and trigrams are better than unigrams, with the one notable exception of SKEW 1NN . In general, there is little separating bigrams and trigrams, although the best result is achieved slightly more often with bigrams than with trigrams.",
"cite_spans": [],
"ref_spans": [
{
"start": 446,
"end": 453,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for Differing n-gram Sizes",
"sec_num": "7.2"
},
{
"text": "For direct comparability with Cavnar and Trenkle (1994), we additionally carried out a preliminary experiment with hybrid byte n-grams (all of 1- to 5-grams), combined with simple frequency-based feature selection of the top-1000 features for each n-gram order. The significance of this setting is that it is the strategy adopted by textcat, based on the original paper of Cavnar and Trenkle (1994) (with the one exception that we use 1000 features rather than 300, as all methods other than OOP 1NN benefitted from more features). The results are shown in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 554,
"end": 561,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for Differing n-gram Sizes",
"sec_num": "7.2"
},
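The hybrid feature-selection setting can be sketched as follows (illustrative: the function name and toy corpus are ours, and ties in frequency are broken by insertion order):

```python
from collections import Counter

def hybrid_ngram_features(corpus, max_n=5, top_k=1000):
    """For each n in 1..max_n, keep the top_k most frequent byte n-grams
    over the training corpus (the paper uses top_k=1000 per order, vs.
    textcat's 300); the union of the per-order lists is the feature set."""
    selected = set()
    for n in range(1, max_n + 1):
        counts = Counter()
        for raw in corpus:
            counts.update(raw[i:i + n] for i in range(len(raw) - n + 1))
        selected.update(g for g, _ in counts.most_common(top_k))
    return selected

feats = hybrid_ngram_features([b"hello world", b"hello there"], max_n=2, top_k=3)
print(sorted(feats))  # top-3 unigrams plus top-3 bigrams of the toy corpus
```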
{
"text": "Compared to the results in Table 5 , SKEW 1NN and SKEW AM both increase markedly to achieve the best overall results. OOP 1NN , on the other hand, rises slightly, while the remaining three methods actually drop back slightly (Table 6: Results for mixed n-grams (1-5) and feature selection over WIKIPEDIA, \u00e0 la Cavnar and Trenkle (1994)). Clearly, there is considerably more experimentation to be done here with mixed n-gram models and different feature selection methods, but the results indicate that some methods certainly benefit from n-gram hybridisation and feature selection, and also that we have been able to surpass the results of Cavnar and Trenkle (1994) with SKEW 1NN in an otherwise identical framework.",
"cite_spans": [
{
"start": 285,
"end": 317,
"text": "(a l\u00e1 Cavnar and Trenkle (1994))",
"ref_id": null
},
{
"start": 640,
"end": 665,
"text": "Cavnar and Trenkle (1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 27,
"end": 34,
"text": "Table 5",
"ref_id": null
},
{
"start": 206,
"end": 213,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for Differing n-gram Sizes",
"sec_num": "7.2"
},
{
"text": "Model P M R M F M P \u00b5 /R \u00b5 /F \u00b5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Differing n-gram Sizes",
"sec_num": "7.2"
},
{
"text": "To better understand the impact of test document size on classification accuracy, we divided the test documents into 5 equal-size bins according to their length, measured in tokens. We then computed F \u00b5 individually for each bin across the 10 folds of cross-validation. We present the breakdown of results for WIKIPEDIA in Figure 2 . WIKIPEDIA shows a pseudo-logarithmic growth in F \u00b5 (= P \u00b5 = R \u00b5 ) as the test document size increases. This fits with our intuition, as the model has progressively more evidence on which to base the classification. It also suggests that performance over shorter documents is the dominant factor in the overall ranking of the different methods. In particular, COS 1NN and SVM appear to classify shorter documents most reliably, which leads to their being the best-performing methods overall.",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 345,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Breakdown Across Test Document Length",
"sec_num": "7.3"
},
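The length-binned evaluation can be sketched as follows. This is an illustrative reimplementation, not the authors' code; for single-label classification, micro-averaged F equals accuracy, so each bin's score reduces to the proportion of correct predictions.

```python
def f_micro_by_length(docs, gold, pred, n_bins=5):
    """Split the test documents into n_bins equal-size bins by token length
    and return the micro-averaged F (= accuracy) for each bin."""
    order = sorted(range(len(docs)), key=lambda i: len(docs[i].split()))
    size = len(order) // n_bins
    scores = []
    for b in range(n_bins):
        # The last bin absorbs any remainder documents.
        idx = order[b * size:(b + 1) * size] if b < n_bins - 1 else order[b * size:]
        scores.append(sum(gold[i] == pred[i] for i in idx) / len(idx))
    return scores
```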
{
"text": "While we do not show the graph for reasons of space, the equivalent graph for EUROGOV displays a curious effect: F \u00b5 drops off as the test documents get longer. Error analysis of the data indicates that this is due to longer documents being more likely to be \"contaminated\" with either data from a second language or extra-linguistic data, such as large tables of numbers or chemical names. This suggests that all the models are brittle when the assumption of strict monolingualism is broken, or when the document is dominated by extra-linguistic data. Clearly, this undermines our assumption of monolingual documents, and suggests that multilingual language identification is a fertile research area, even in terms of optimising performance over our \"monolingual\" datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Breakdown Across Test Document Length",
"sec_num": "7.3"
},
{
"text": "As a final data point in our analysis, we calculated the F M for each language relative to the amount of training data available for that language, and present the results in the form of a combined scatter plot for the three datasets in Figure 3 . The differing distributions of the three datasets are self-evident: most languages in EUROGOV (the squares) both have reasonably large amounts of training data and achieve high F M values, whereas the majority of languages in WIKIPEDIA (the crosses) have very little data (including a number of languages with no training data at all, as the dataset contains only a singleton document in those languages). As an overall trend, the greater the volume of training data, the higher the F M across all three datasets, but there is considerable variation between the languages in terms of their F M for a given training data size (the column of crosses for WIKIPEDIA to the left of the graph is particularly striking).",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 245,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Relative to Training Data Size",
"sec_num": "7.4"
},
{
"text": "We have carried out a thorough (re)examination of the task of language identification, that is, predicting the language a given document is written in, focusing at present on monolingual documents. We experimented with a total of 7 models, and tested each over two tokenisation strategies (bytes vs. codepoints) and three token n-gram orders (unigrams, bigrams and trigrams). As well as reproducing results from earlier research on how easy the task can be over small numbers of languages with longer documents, we demonstrated that the task becomes much harder for larger numbers of languages, shorter documents and greater class skew. We also found that explicit character encoding detection is not necessary for language identification, and that the most consistent model overall is either a simple 1-NN model with cosine similarity or an SVM with a linear kernel, using a byte bigram or trigram document representation. We also confirmed that longer documents tend to be easier to classify, but that multilingual documents cause problems for the standard model of language identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
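The most consistent configuration identified above (1-NN with cosine similarity over byte bigrams) might be sketched as follows; `byte_bigrams`, `cosine` and `predict` are hypothetical names, not the experimental code, and a real system would precompute the training vectors rather than rebuilding them per query.

```python
import math
from collections import Counter

def byte_bigrams(text: bytes) -> Counter:
    """Represent a document as a bag of byte bigrams."""
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b[k] for k, v in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def predict(test_doc: bytes, train):
    """1-NN: return the label of the nearest training document,
    where train is a list of (document_bytes, language_label) pairs."""
    vec = byte_bigrams(test_doc)
    nearest = max(train, key=lambda pair: cosine(vec, byte_bigrams(pair[0])))
    return nearest[1]
```

Because the features are raw bytes, no character encoding detection is required, in line with the finding above.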
{
"text": "http://www.let.rug.nl/vannoord/TextCat/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.csie.ntu.edu.tw/~cjlin/bsvm/ 5 We do not include the results for nearest-prototype classifiers with the OOP distance metric, as the results were considerably lower than those of the other methods. 6 Note that this means that the averaged F M is not necessarily the harmonic mean of the averaged P M and R M .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The results for OOP 1NN over byte trigrams are missing due to the computational cost associated with the method, and our experiment hence not having run to completion at the time of writing. Extrapolating from the results for the other two datasets, we predict similar results to bigrams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by a Google Research Award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Using foreign inclusion detection to improve parsing performance",
"authors": [
{
"first": "Beatrice",
"middle": [],
"last": "Alex",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Dubey",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "151--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beatrice Alex, Amit Dubey, and Frank Keller. 2007. Using foreign inclusion detection to improve parsing performance. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning 2007 (EMNLP-CoNLL 2007), pages 151-160, Prague, Czech Republic.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An information-theoretic measure for document similarity",
"authors": [
{
"first": "Javed",
"middle": [
"A."
],
"last": "Aslam",
"suffix": ""
},
{
"first": "Meredith",
"middle": [],
"last": "Frost",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of 26th International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2003)",
"volume": "",
"issue": "",
"pages": "449--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javed A. Aslam and Meredith Frost. 2003. An information-theoretic measure for document similarity. In Proceedings of 26th International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2003), pages 449-450, Toronto, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Collecting low-density language materials on the web",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Baden",
"middle": [],
"last": "Hughes",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 12th Australasian Web Conference (AusWeb06)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin, Steven Bird, and Baden Hughes. 2006. Collecting low-density language materials on the web. In Proceedings of the 12th Australasian Web Conference (AusWeb06). http://www.ausweb.scu.edu.au/ausweb06/edited/hughes/.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ngram-based text categorization",
"authors": [
{
"first": "William",
"middle": [
"B."
],
"last": "Cavnar",
"suffix": ""
},
{
"first": "John",
"middle": [
"M."
],
"last": "Trenkle",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Third Symposium on Document Analysis and Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Cavnar and John M. Trenkle. 1994. N-gram-based text categorization. In Proceedings of the Third Symposium on Document Analysis and Information Retrieval, Las Vegas, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Gauging similarity with n-grams: Language-independent categorization of text",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Damashek",
"suffix": ""
}
],
"year": 1995,
"venue": "Science",
"volume": "267",
"issue": "",
"pages": "843--848",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Damashek. 1995. Gauging similarity with n-grams: Language-independent categorization of text. Science, 267:843-848.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic language identification of written texts",
"authors": [
{
"first": "Rafael",
"middle": [
"Dueire"
],
"last": "Lins",
"suffix": ""
},
{
"first": "Paulo",
"middle": [],
"last": "Gon\u00e7alves",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 ACM Symposium on Applied Computing (SAC 2004)",
"volume": "",
"issue": "",
"pages": "1128--1133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafael Dueire Lins and Paulo Gon\u00e7alves. 2004. Automatic language identification of written texts. In Proceedings of the 2004 ACM Symposium on Applied Computing (SAC 2004), pages 1128-1133, Nicosia, Cyprus.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical identification of language",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Dunning. 1994. Statistical identification of language. Technical Report MCCS 940-273, Computing Research Laboratory, New Mexico State University.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Categorization according to language: A step toward combining linguistic knowledge and statistic learning",
"authors": [
{
"first": "Emmanuel",
"middle": [],
"last": "Giguet",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 4th International Workshop on Parsing Technologies (IWPT-1995)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emmanuel Giguet. 1995. Categorization according to language: A step toward combining linguistic knowledge and statistic learning. In Proceedings of the 4th International Workshop on Parsing Technologies (IWPT-1995), Prague, Czech Republic.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language identification in the limit",
"authors": [
{
"first": "E",
"middle": [
"Mark"
],
"last": "Gold",
"suffix": ""
}
],
"year": 1967,
"venue": "Information and Control",
"volume": "5",
"issue": "",
"pages": "447--474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Mark Gold. 1967. Language identification in the limit. Information and Control, 5:447-474.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Comparing two language identification schemes",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of Analisi Statistica dei Dati Testuali (JADT)",
"volume": "",
"issue": "",
"pages": "263--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Grefenstette. 1995. Comparing two language identification schemes. In Proceedings of Analisi Statistica dei Dati Testuali (JADT), pages 263-268.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A practical guide to support vector classification",
"authors": [
{
"first": "Chih-Wei",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Chih-Chung",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin. 2008. A practical guide to support vector classification. Technical report, Department of Computer Science, National Taiwan University.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Reconsidering language identification for written language resources",
"authors": [
{
"first": "Baden",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Nicholson",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mackinlay",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006)",
"volume": "",
"issue": "",
"pages": "485--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baden Hughes, Timothy Baldwin, Steven Bird, Jeremy Nicholson, and Andrew MacKinlay. 2006. Reconsidering language identification for written language resources. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), pages 485-488, Genoa, Italy.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Text categorization with support vector machines: learning with many relevant features",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 10th European Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "137--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1998. Text categorization with support vector machines: learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning, pages 137-142, Chemnitz, Germany.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Solving the problem of language recognition",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Johnson. 1993. Solving the problem of language recognition. Technical report, School of Computer Studies, University of Leeds.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Language identification based on string kernels",
"authors": [
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Prapass",
"middle": [],
"last": "Srichaivattana",
"suffix": ""
},
{
"first": "Virach",
"middle": [],
"last": "Sornlertlamvanich",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 5th International Symposium on Communications and Information Technologies (ISCIT-2005)",
"volume": "",
"issue": "",
"pages": "896--899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canasai Kruengkrai, Prapass Srichaivattana, Virach Sornlertlamvanich, and Hitoshi Isahara. 2005. Language identification based on string kernels. In Proceedings of the 5th International Symposium on Communications and Information Technologies (ISCIT-2005), pages 896-899, Beijing, China.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On the effectiveness of the skew divergence for statistical language analysis",
"authors": [
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lillian Lee. 2001. On the effectiveness of the skew divergence for statistical language analysis. In Proceedings of Artificial Intelligence and Statistics 2001 (AISTATS 2001), pages 65-72, Key West, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Text classification using string kernels",
"authors": [
{
"first": "Huma",
"middle": [],
"last": "Lodhi",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Saunders",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Watkins",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "419--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, 2:419-444.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, Cambridge, UK.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language identification in web pages",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "M\u00e1rio",
"middle": [
"J"
],
"last": "Silva",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 2005 ACM symposium on Applied computing",
"volume": "",
"issue": "",
"pages": "764--768",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruno Martins and M\u00e1rio J. Silva. 2005. Language identification in web pages. In Proceedings of the 2005 ACM symposium on Applied computing, pages 764-768, Santa Fe, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering",
"authors": [
{
"first": "Andrew",
"middle": [
"Kachites"
],
"last": "McCallum",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum. 1996. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Character Ngram Tokenization for European Language Text Retrieval",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
}
],
"year": 2004,
"venue": "Information Retrieval",
"volume": "7",
"issue": "1-2",
"pages": "73--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul McNamee and James Mayfield. 2004. Character N-gram Tokenization for European Language Text Retrieval. Information Retrieval, 7(1-2):73-97.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Kernelbased text categorization",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Teytaud",
"suffix": ""
},
{
"first": "Radwan",
"middle": [],
"last": "Jalam",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the International Joint Conference on Neural Networks (IJCNN'2001)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Teytaud and Radwan Jalam. 2001. Kernel-based text categorization. In Proceedings of the International Joint Conference on Neural Networks (IJCNN'2001), Washington DC, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Applying NLP technologies to the collection and enrichment of language data on the web to aid linguistic research",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the EACL 2009 Workshop on Language Technology and Resources for Cultural Heritage, Social Sciences, Humanities, and Education",
"volume": "",
"issue": "",
"pages": "51--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag, Berlin, Germany. Fei Xia and William Lewis. 2009. Applying NLP technologies to the collection and enrichment of language data on the web to aid linguistic research. In Proceedings of the EACL 2009 Workshop on Language Technology and Resources for Cultural Heritage, Social Sciences, Humanities, and Education (LaTeCH-SHELT&R 2009), pages 51-59, Athens, Greece.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Language ID in the context of harvesting language data off the web",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the EACL (EACL 2009)",
"volume": "",
"issue": "",
"pages": "870--878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xia, William Lewis, and Hoifung Poon. 2009. Language ID in the context of harvesting language data off the web. In Proceedings of the 12th Conference of the EACL (EACL 2009), pages 870-878, Athens, Greece.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Breakdown of F \u00b5 over WIKIPEDIA for test documents of increasing length. Figure 3: Per-language F M for COS 1NN , relative to the training data size (in MB) for that language",
"uris": null,
"num": null
},
"TABREF4": {
"content": "<table/>",
"html": null,
"text": "Results for byte vs. codepoint (bigram) tokenisation over TCL the larger number of languages, smaller documents, and skew in the amounts of training data per language. All models are roughly balanced in the relative scores they attain for P M , R M and F M (i.e.",
"type_str": "table",
"num": null
}
}
}
}