{
"paper_id": "W17-0218",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:23:44.916289Z"
},
"title": "Creating register sub-corpora for the Finnish Internet Parsebank",
"authors": [
{
"first": "Veronika",
"middle": [],
"last": "Laippala",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Juhani",
"middle": [],
"last": "Luotolahti",
"suffix": "",
"affiliation": {
"laboratory": "Turku NLP Group",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Aki-Juhani",
"middle": [],
"last": "Kyr\u00f6l\u00e4inen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": "",
"affiliation": {
"laboratory": "Turku NLP Group",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": "",
"affiliation": {
"laboratory": "Turku NLP Group",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper develops register sub-corpora for the Web-crawled Finnish Internet Parsebank. Currently, all the documents belonging to different registers, such as news and user manuals, have an equal status in this corpus. Detecting the text register would be useful for both NLP and linguistics (Giesbrecht and Evert, 2009) (Webber, 2009) (Sinclair, 1996) (Egbert et al., 2015). We assemble the sub-corpora by first naively deducing four register classes from the Parsebank document URLs and then developing a classifier based on these, to detect registers also for the rest of the documents. The results show that the naive method of deducing the register is efficient and that the classification can be done sufficiently reliably. The analysis of the prediction errors, however, indicates that texts sharing similar communicative purposes but belonging to different registers, such as news and blogs informing the reader, share similar linguistic characteristics. This attests to the well-known difficulty of defining the notion of register for practical uses. Finally, as a significant improvement to its usability, we release two sets of sub-corpus collections for the Parsebank. The A collection consists of two million documents classified into blogs, forum discussions, encyclopedia articles and news with a naive classification precision of >90%, and the B collection of four million documents with a precision of >80%.",
"pdf_parse": {
"paper_id": "W17-0218",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper develops register sub-corpora for the Web-crawled Finnish Internet Parsebank. Currently, all the documents belonging to different registers, such as news and user manuals, have an equal status in this corpus. Detecting the text register would be useful for both NLP and linguistics (Giesbrecht and Evert, 2009) (Webber, 2009) (Sinclair, 1996) (Egbert et al., 2015). We assemble the sub-corpora by first naively deducing four register classes from the Parsebank document URLs and then developing a classifier based on these, to detect registers also for the rest of the documents. The results show that the naive method of deducing the register is efficient and that the classification can be done sufficiently reliably. The analysis of the prediction errors, however, indicates that texts sharing similar communicative purposes but belonging to different registers, such as news and blogs informing the reader, share similar linguistic characteristics. This attests to the well-known difficulty of defining the notion of register for practical uses. Finally, as a significant improvement to its usability, we release two sets of sub-corpus collections for the Parsebank. The A collection consists of two million documents classified into blogs, forum discussions, encyclopedia articles and news with a naive classification precision of >90%, and the B collection of four million documents with a precision of >80%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Internet offers a constantly growing source of information, not only in terms of size, but also in terms of the languages and communication settings it includes. As a consequence, Web corpora, language resources developed by automatically crawling the Web, offer revolutionary potential for fields using textual data, such as Natural Language Processing (NLP), linguistics and other humanities (Kilgariff and Grefenstette, 2003).",
"cite_spans": [
{
"start": 395,
"end": 429,
"text": "(Kilgariff and Grefenstette, 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite this potential, Web corpora are underused. One of the important reasons behind this is the fact that in the existing Web corpora, all of the different documents have an equal status. This complicates their use, as for many applications, knowing the composition of the corpus would be beneficial. In particular, it would be important to know what registers, i.e. text varieties such as a user manual or a blog post, the corpus consists of (see Section 2 for a definition). In NLP, detecting the register of a text has been noted to be useful for instance in POS tagging (Giesbrecht and Evert, 2009), discourse parsing (Webber, 2009) and information retrieval (Vidulin et al., 2007). In linguistics, the correct constitution of a corpus and the criteria used to assemble it have been subject to long discussions (Sinclair, 1996), and it has been noted that without systematic classification, Web corpora cannot be fully exploited.",
"cite_spans": [
{
"start": 579,
"end": 607,
"text": "(Giesbrecht and Evert, 2009)",
"ref_id": "BIBREF11"
},
{
"start": 628,
"end": 642,
"text": "(Webber, 2009)",
"ref_id": null
},
{
"start": 669,
"end": 691,
"text": "(Vidulin et al., 2007)",
"ref_id": "BIBREF26"
},
{
"start": 822,
"end": 838,
"text": "(Sinclair, 1996)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore the development of register sub-corpora for the Finnish Internet Parsebank, a Web-crawled corpus of Internet Finnish. We assemble the sub-corpora by first naively deducing four register classes from the Parsebank document URLs and then creating a classifier based on these classes to detect texts representing these registers from the rest of the Parsebank (see Section 4). The register classes we develop are news, blogs, forum discussions and encyclopedia articles. Instead of creating a full-coverage taxonomy of all the registers covered by the Parsebank, in this article our aim is to test this method in the detection of these four registers. If the method works, the number of registers will be extended in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the register detection and analysis, we compare three methods: the traditional bag-of-words as a baseline, lexical trigrams as proposed by Gries & al. (2011), and Dependency Profiles (DPs), i.e. co-occurrence patterns between the documents labelled in a specific class, assumed to be a register, and dependency syntax relations.",
"cite_spans": [
{
"start": 142,
"end": 160,
"text": "Gries & al. (2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to reporting the standard metrics to estimate the classifier performance, we evaluate the created sub-corpora by analysing the mismatches between the naively assumed register classes and the classifier predictions. In addition, we analyse the linguistic register characteristics estimated by the classifier. This validates the quality of the sub-corpora and is informative about the linguistic variation inside the registers (see Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we publish four register-specific sub-corpora for the Parsebank that we develop in this paper: blogs, forum discussions, encyclopedia articles and news (see Section 6). We release two sets of sub-corpora: the A collection consists of two million documents with register-specific labels. For these documents, we estimate the register prediction precision to be >90%. The B collection consists of four million documents. For these, the precision is >80%. These sub-corpora allow users to focus on specific registers, which improves the Parsebank usability significantly (see discussions in (Asheghi et al., 2016)).",
"cite_spans": [
{
"start": 604,
"end": 626,
"text": "(Asheghi et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the 1980s, linguistic variation has been studied in relation to the communicative situation, form and function of the piece of speech or writing under analysis (Biber, 1989; Biber, 1995; Biber et al., 1999; Miller, 1984; Swales, 1990). Depending on the study, these language varieties are usually defined as registers or genres, the definitions emphasising different aspects of the variation (see discussion in (Asheghi et al., 2016)). We adopt the term register and define it, following (Biber, 1989; Biber, 1995), as a text variety with specific situational characteristics, communicative purpose and lexico-grammatical features.",
"cite_spans": [
{
"start": 166,
"end": 179,
"text": "(Biber, 1989;",
"ref_id": "BIBREF4"
},
{
"start": 180,
"end": 192,
"text": "Biber, 1995;",
"ref_id": "BIBREF5"
},
{
"start": 193,
"end": 212,
"text": "Biber et al., 1999;",
"ref_id": "BIBREF2"
},
{
"start": 213,
"end": 226,
"text": "Miller, 1984;",
"ref_id": "BIBREF19"
},
{
"start": 227,
"end": 240,
"text": "Swales, 1990)",
"ref_id": "BIBREF25"
},
{
"start": 418,
"end": 440,
"text": "(Asheghi et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 495,
"end": 508,
"text": "(Biber, 1989;",
"ref_id": "BIBREF4"
},
{
"start": 509,
"end": 521,
"text": "Biber, 1995;",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous studies",
"sec_num": "2"
},
{
"text": "Studies aiming at automatically identifying registers from the Web face several challenges. Although some studies reach a very high accuracy, their approaches are very difficult to apply in real-world applications. Other studies, adopting a more realistic approach, present a weaker performance. In particular, the challenges are related to the definition of registers in practice: how many registers should there be, and how can they be reliably identified? In addition, it is not always clear whether registers have different linguistic properties (Sch\u00e4fer and Bildhauer, 2016). Based on the situational characteristics of a register, a blog post discussing a news topic and a news article on the same topic should be analysed as different registers. But how does this difference show in the linguistic features of the documents, or does it?",
"cite_spans": [
{
"start": 542,
"end": 571,
"text": "(Sch\u00e4fer and Bildhauer, 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous studies",
"sec_num": "2"
},
{
"text": "For instance, Sharoff & al. (2010) achieve an accuracy of 97% using character tetragrams and single words with a stop list as classification features, while Lindemann & Littig (2011) report an F-score of 80% for many registers using both structural Web page features and topical characteristics based on the terms used in the documents. However, they use only samples of the Web as corpora, which can represent only a limited portion of all the registers of the entire Web (Sharoff et al., 2010; Santini and Sharoff, 2009).",
"cite_spans": [
{
"start": 14,
"end": 34,
"text": "Sharoff & al. (2010)",
"ref_id": "BIBREF23"
},
{
"start": 157,
"end": 182,
"text": "Lindemann & Littig (2011)",
"ref_id": "BIBREF17"
},
{
"start": 474,
"end": 496,
"text": "(Sharoff et al., 2010;",
"ref_id": "BIBREF23"
},
{
"start": 497,
"end": 523,
"text": "Santini and Sharoff, 2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous studies",
"sec_num": "2"
},
{
"text": "Another, more linguistically motivated perspective on studying Web registers is adopted by Biber and his colleagues. Using typical end users of the Web to code a large number of nearly random Web documents (48,000) with hierarchical, situational characteristics, they apply a bottom-up method for creating a taxonomy of Web registers. Then, applying a custom-built tagger identifying 150+ lexico-grammatical features, they report an overall accuracy of 44.2% for unrestricted Web texts using a taxonomy of 20 registers. In addition to the relatively weak register identification performance, their approach suffers from a low inter-annotator agreement for the register classes. Similar problems are also discussed in (Crowston et al., 2011; Essen and Stein, 2004), who note that both experts and end users have trouble identifying registers reliably. This raises the question of whether register identification is possible at all, if even humans cannot agree on the labelling. This concern is expressed by Sch\u00e4fer and Bildhauer (2016), who decide to focus on classifying their COW Corpora to topic domains, such as medical or science, instead of registers. The recently presented Leeds Web Genre Corpus (Asheghi et al., 2016) shows, however, very reliable inter-annotator agreement scores. This proves that when the register taxonomy is well developed, registers can indeed be reliably identified. [Figure 1: Unlexicalised syntactic biarcs from the sentence Ja haluaisitko kehitt\u00e4\u00e4 kielitaitoasi? 'And would you like to improve your language skills?']",
"cite_spans": [
{
"start": 715,
"end": 738,
"text": "(Crowston et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 739,
"end": 761,
"text": "Essen and Stein, 2004)",
"ref_id": "BIBREF10"
},
{
"start": 1008,
"end": 1036,
"text": "Sch\u00e4fer and Bildhauer (2016)",
"ref_id": "BIBREF21"
},
{
"start": 1358,
"end": 1380,
"text": "(Asheghi et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1092,
"end": 1100,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Previous studies",
"sec_num": "2"
},
{
"text": "The Finnish Internet Parsebank (Luotolahti et al., 2015) is a Web-crawled corpus of Internet Finnish. The corpus is sampled from Finnish Web-crawl data. The crawl itself was produced using the SpiderLing crawler, which is especially crafted for the efficient gathering of monolingual corpora for linguistic purposes. The version we used is composed of 3.7 billion tokens and 6,635,960 documents, and has morphological and dependency syntax annotations carried out with a state-of-the-art dependency parser by Bohnet (2010), with a labelled attachment score of 82.1% (Luotolahti et al., 2015). The Parsebank is distributed via a user interface at bionlp-www.utu.fi/dep_search/ and as a downloadable, sentence-shuffled version at bionlp.utu.fi.",
"cite_spans": [
{
"start": 27,
"end": 51,
"text": "(Luotolahti et al., 2015",
"ref_id": "BIBREF18"
},
{
"start": 498,
"end": 511,
"text": "Bohnet (2010)",
"ref_id": "BIBREF6"
},
{
"start": 556,
"end": 581,
"text": "(Luotolahti et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finnish Internet Parsebank",
"sec_num": "3"
},
{
"text": "In this Section, we first discuss the development of the naive register corpora from the Parsebank. These will be used as training data for the system identifying the registers from the entire Parsebank. We then motivate our selection of features in the classifier development, and finally, we present the classification results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting registers from the Parsebank",
"sec_num": "4"
},
{
"text": "Our naive interpretation of the document registers was based on the presence of lexical cues in the Parsebank document URLs. For the purposes of this article, we used four well-motivated register classes: news, blogs, encyclopedia and forum discussions. These were identified by the presence of blog for the blog class, and of discussion, forum or keskustelu 'discussion' for the discussion class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive registers as training data",
"sec_num": "4.1"
},
{
"text": "In deciding the registers to be searched for, we aimed at a simple, experimental solution that would be informative about the performance of the naive method and offer direct application potential for the Parsebank to increase its usability. Therefore, instead of creating a full-coverage taxonomy of all registers possibly found online, our aim here was to experiment with a few generally acknowledged, broad-coverage terms for register classes. Once we have shown in this paper that the naive method works, the number of registers will be expanded in future work. Table 1 presents the proportion of the naively assumed registers in the entire Parsebank and in the subset we use for the classifier training in Section 4.3. The sizes of the retrieved sub-corpora vary significantly. The most frequent, news and blogs, each cover more than 10% of the Parsebank documents, while the encyclopedia corpus in particular remains smaller. Still, the sizes of these classes are relatively large, thanks to the size of the entire Parsebank.",
"cite_spans": [],
"ref_spans": [
{
"start": 569,
"end": 576,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Naive registers as training data",
"sec_num": "4.1"
},
{
"text": "The subset used for the classifier training was created by matching the document URLs of the first 100,000 documents from the Parsebank. Of these, 26,216 had a URL that matched one of the keywords defined above. At this stage, all sentence-level duplicates were also removed from the training data to prevent the classifier from learning elements that are often repeated on Web pages, such as Lue lis\u00e4\u00e4 'Read more', but that should not be used as classifier features. This proved to be highly necessary, as of the 3,421,568 sentences in the 100,000 documents, 497,449 were duplicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive registers as training data",
"sec_num": "4.1"
},
{
"text": "The work by Biber and colleagues on the variation of lexico-grammatical features across registers has been very influential in corpus linguistics over the years (Biber, 1995; Biber et al., 1999). Recently, they have extended their work to Web registers and applied the carefully tuned Biber tagger identifying both lexical and grammatical features to explore registers from the Web. Gries & al. (2011) adopt an opposite approach by using simple word-trigrams. Sharoff and colleagues compare a number of different feature sets and conclude that bag-of-words and character n-grams achieve the best results (Sharoff et al., 2010). For detecting mostly thematic domains, Sch\u00e4fer and Bildhauer (2016) apply lexical information attached to coarse-grained part-of-speech labels.",
"cite_spans": [
{
"start": 161,
"end": 174,
"text": "(Biber, 1995;",
"ref_id": "BIBREF5"
},
{
"start": 175,
"end": 194,
"text": "Biber et al., 1999)",
"ref_id": "BIBREF2"
},
{
"start": 385,
"end": 403,
"text": "Gries & al. (2011)",
"ref_id": "BIBREF12"
},
{
"start": 606,
"end": 628,
"text": "(Sharoff et al., 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical and syntactic approaches to model register variation",
"sec_num": "4.2"
},
{
"text": "We compare three methods for detecting the four registers represented by our naively assembled sub-corpora. As a baseline method, we apply the standard bag-of-words, which, despite its simplicity, has often achieved a good performance (Sharoff et al., 2010). Second, we use word-trigrams similar to Gries & al. (2011), and finally, Dependency Profiles (DPs), which are co-occurrences of the register documents with unlexicalised syntactic biarcs, three-token subtrees of the dependency syntax analysis with the lexical information deleted (Kanerva et al., 2014) (see Figure 1). As opposed to e.g. keyword analysis (Scott and Tribble, 2006) based on the document words, DPs are not restricted to the lexical or topical aspects of texts, and thus offer linguistically better motivated analysis tools. Many studies on register variation highlight the importance of syntactic and grammatical features (Biber, 1995; Biber et al., 1999; Gries, 2012). Therefore, we hypothesise that DPs would allow us to generalise beyond individual topics to differentiate, e.g., between texts representing different registers but discussing similar topics, such as news and forum discussions on sports.",
"cite_spans": [
{
"start": 235,
"end": 257,
"text": "(Sharoff et al., 2010)",
"ref_id": "BIBREF23"
},
{
"start": 299,
"end": 317,
"text": "Gries & al. (2011)",
"ref_id": "BIBREF12"
},
{
"start": 535,
"end": 556,
"text": "(Kanerva et al., 2014",
"ref_id": "BIBREF15"
},
{
"start": 613,
"end": 638,
"text": "(Scott and Tribble, 2006)",
"ref_id": "BIBREF22"
},
{
"start": 893,
"end": 906,
"text": "(Biber, 1995;",
"ref_id": "BIBREF5"
},
{
"start": 907,
"end": 926,
"text": "Biber et al., 1999;",
"ref_id": "BIBREF2"
},
{
"start": 927,
"end": 939,
"text": "Gries, 2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 564,
"end": 573,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexical and syntactic approaches to model register variation",
"sec_num": "4.2"
},
{
"text": "To predict the registers for all the Parsebank documents, we trained a linear SVM. In the training and testing, we used a subset of the Parsebank described in Table 1. As features, we used the sets described in the previous Section. Specifically, the four register classes were modelled as a function of the feature sets, i.e. a co-occurrence vector of the used features across the documents. These vectors were L2-normalised and then used to model the register class of a given document by fitting a linear SVM to the data, as implemented in the Scikit package in Python. To validate the performance of the fitted model, we implemented a 10-fold cross-validation procedure with stratified random subsampling to keep the proportion of the register classes approximately equal between the training and test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 166,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Classifier development and testing",
"sec_num": "4.3"
},
{
"text": "The results of the SVM performance are described in Table 2. First, the best results are achieved with the bag-of-words and lexical n-gram approaches, with F-scores of 80% and 81%. This already confirms that the registers can be identified and that our naive method of assuming the registers based on the URLs is justified.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Classifier development and testing",
"sec_num": "4.3"
},
{
"text": "Second, although the DPs consisting of syntactic biarcs would allow for detailed linguistic examinations of the registers, and even though they follow the influential work by Biber and colleagues on modelling registers, their classification performance is clearly lower than that of the lexical approaches. The average F-score for the biarcs is only 72%. Interestingly, combining the biarcs and the bag-of-words results in a very similar F-score of 79% and does not improve the classifier performance at all. In other words, three of the four feature sets yield very similar performances. This can suggest that the remaining 20% of the data may be somehow unreachable with these feature sets and requires further data examination, which we will present in Sections 5.1 and 5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier development and testing",
"sec_num": "4.3"
},
{
"text": "Third, it is noteworthy that the classifier performance varies clearly across the registers. News and blogs receive the best detection rates, rising to the very reliable F-scores of 86% and 79%, respectively, while the discussion and encyclopedia article detection rates are clearly lower, at 70% and 73%. Naturally, the higher frequency of blogs and news in the training and test set explains some of these differences. Still, these differences merit further analyses in future work. Finally, the variation between the precision and recall rates across the registers requires closer examination. While the precision and recall are very similar for the news class, for the blogs the precision is higher than the recall. This suggests that some features, words in this case, are very reliable indicators of the blog register, but that they are not present in all the class documents. For the discussion and encyclopedia classes, the recall is higher than the precision, indicating that such reliable indicators are less frequent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier development and testing",
"sec_num": "4.3"
},
{
"text": "The classifier performance seems sufficiently reliable to be applied to identifying the registers from the Parsebank. Before classifying the entire corpus, we will, however, in this Section seek answers to questions raised by the SVM results. First, we analyse the classifier decisions to find possible explanations for the 20% of the data that the SVM does not detect. This will also ensure the validity of our naive method for assuming the register classes. Second, we study the most important register class features, words in our case, as estimated by the SVM. These can explain the variation between the precision and recall across the registers revealed above, and also further clarify the classifier's choices and the classification quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validating the classifier quality",
"sec_num": "5"
},
{
"text": "Mismatches between the SVM predictions and the naively assumed register labels are informative both about the SVM performance and about the coherence of the naive corpora: a mismatch can occur either because the classifier makes a mistake or because the document, in fact, does not represent the register its URL implies. This can also explain why the classifier results achieved with different feature sets were very similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mismatches between the SVM predictions and the naively assumed registers",
"sec_num": "5.1"
},
{
"text": "We went manually through 60 wrongly classified Parsebank documents that did not belong to the subset on which the SVM was trained. Although the number of documents was not high, the analysis revealed clear tendencies in the classification mismatches and the composition of the naively presumed registers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mismatches between the SVM predictions and the naively assumed registers",
"sec_num": "5.1"
},
{
"text": "Above all, the analysis proved the efficiency of our naive method of assuming the register. The blog, encyclopedia and discussion classes each included only one document where the URL did not refer to the document register. The news class included more variation, as in particular the documents with lehti 'magazine' in the URL included also registers other than actual news. Of the 15 analysed documents naively presumed to be news, nine were actual news, four were columns or editorials, and two were discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mismatches between the SVM predictions and the naively assumed registers",
"sec_num": "5.1"
},
{
"text": "For the mismatches between the naive register labels and the SVM predictions, our analysis showed that many could be explained by significant linguistic variation within the registers, both in terms of the communicative aim of the document and its style. For instance, some of the blogs we analysed followed a very informal, narrative model, while others aimed at informing the reader on a current topic, and yet others resembled advertisements with an intention of promoting or selling. Also the distinction between news and encyclopedia articles, which both can focus on informing the reader, was for some documents vague in terms of linguistic features. Similarly, some shorter blog posts and forum discussion posts appeared very similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mismatches between the SVM predictions and the naively assumed registers",
"sec_num": "5.1"
},
{
"text": "Similar communicative goals thus seem to result in similar linguistic text characteristics across registers. In addition to explaining the mismatches between the SVM predictions and the naively assumed registers, this linguistic variation inside registers clarifies the SVM results and the fact that the performances of three of the four feature sets were very similar. On one hand, this could also suggest that the registers should be defined differently than we have currently done, so that they would better correspond to the communicative aims and linguistic characteristics of the texts. For instance, the register taxonomy proposed by Biber and colleagues, with registers such as opinion or informational persuasion, follows these communicative goals better and could, perhaps, result in better classification results. On the other hand, such denominations are not commonly known and the registers can be difficult to identify, as noted by Asheghi & al. (2016). Also, they can result in very similar texts falling into different registers. For instance, in the taxonomy presented by Biber & al., personal blogs, travel blogs and opinion blogs are all placed in different registers.",
"cite_spans": [
{
"start": 944,
"end": 964,
"text": "Asheghi & al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mismatches between the SVM predictions and the naively assumed registers",
"sec_num": "5.1"
},
{
"text": "To obtain a better understanding of the classifier's decisions and the features it bases its decisions on, we analysed the most important words of each register class as estimated by the SVM classifier based on the training corpus. These words can be seen as the keywords of these classes. In corpus linguistics, keyword analysis (Scott and Tribble, 2006) is a standard corpus analysis method. These words are said to be informative about the corpus topic and style. (See, however, also Guyon and Elisseeff (2003) and Carpena & al. (2009).) To this end, we created a frequency list of the 20 words which were estimated as the most important in each register class across the ten validation rounds. Table 3 presents, for each class, five of the ten most frequent words on this list that we consider the most revealing.",
"cite_spans": [
{
"start": 330,
"end": 354,
"text": "(Scott and Tribble, 2006",
"ref_id": "BIBREF22"
},
{
"start": 488,
"end": 514,
"text": "Guyon and Elisseeff (2003)",
"ref_id": "BIBREF14"
},
{
"start": 519,
"end": 539,
"text": "Carpena & al. (2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 700,
"end": 707,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "The most important register features",
"sec_num": "5.2"
},
{
"text": "The most important words for each register class listed in Table 3 reveal clear tendencies that the classifier seems to follow. Despite the removal of sentence-level duplicates presented in Section 3, the blog and forum discussion classes include words coming from templates and other automatically inserted phrases, such as Thursday, anonymous and English words. Although these do not reveal any linguistic characteristics of the registers, they thus allow the classifier to identify the classes, and also explain the higher precision than recall reported in Section 2. Interestingly, Asheghi & al. (2016) report similar keywords for both blogs and discussions in English, which demonstrates the similarity of these registers across languages. In our data, the encyclopedia and news classes include words reflecting topics, such as 20-tuumaiset '20-inch', and for instance verbs denoting typical actions in the registers, such as kommentoi 'comments'. These are more informative also about the linguistic characteristics of the registers and their communicative purposes.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "The most important register features",
"sec_num": "5.2"
},
{
"text": "6 Finnish Internet Parsebank with register sub-corpora",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The most important register features",
"sec_num": "5.2"
},
{
"text": "The classifier performance results reported in Section 4.3 and the analysis described in Section 5 proved that the developed classifier is sufficiently reliable to improve the usability of the Parsebank. In this Section, we apply the model to classify the entire Parsebank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The most important register features",
"sec_num": "5.2"
},
{
"text": "We classified all the Parsebank documents with the bag-of-words feature set and parameters reported in Section 4.3. The SVM was developed to detect the four classes for which we had the training data thanks to the naive labels present in the document URLs. The addition of a negative class to the training data, with none of the labels in the URLs, would have increased significantly its noisiness, as these documents could still, despite the absence of the naive label, belong to one of the positive classes. Therefore, we needed to take some additional steps in the Parsebank classification, as the final classification should still include a fifth, negative class. First, we ran the four-class classifier on all the Parsebank documents. In addition to the register labels, we also collected the scores for each regis-Blogs Forum discussions Encyclopedia News kirjoitettu 'written' keskustelualue 'discussion area' 20-tuumaiset '20-inch' kertoo 'tells' ihana 'wonderful' wrote opiskelu 'studying' aikoo 'will' kl. 'o'clock'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting five register classes with a four-class SVM",
"sec_num": "6.1"
},
{
"text": "nimet\u00f6n 'anonymous' wikiin 'to the wiki' tutkijat 'researchers' torstai 'Thursday'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting five register classes with a four-class SVM",
"sec_num": "6.1"
},
{
"text": "ketjussa 'in the thread' perustettiin 'was founded' huomauttaa 'notes' archives forum liitet\u00e4\u00e4n 'is attached' kommentoi 'comments' ter, as assigned by the SVM, and sorted the documents based on these scores. Then, we counted a naive precision rate for the predictions by counting the proportion of the correct SVM predictions that matched the naive register label gotten from the URL. This gave us a sorted list of the Parsebank documents, where, in addition to the scores assigned by the classifier, we also have an estimate of the prediction precisions. From this sorted list, we could then take the best ranking ones that are the most likely to be correctly classified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting five register classes with a four-class SVM",
"sec_num": "6.1"
},
{
"text": "These estimated precisions for the documents descend from 1 for the most reliably classified documents to 0.74 for the least reliable ones. The question is, where to set the threshold to distinguish the documents that we consider as correctly predicted and those that we do not. As we do not know the distribution of registers in the Finnish Web, this is difficult to approximate. The study by on a large sample of Web documents reports the most frequent registers in English. These are described in Table 4 . Our news and encyclopedia registers would most likely belong to the informational category, blogs to the narrative and Forum discussions naturally to the interactive discussion category. Very likely many could also be classified as hybrid. Based on these, we can estimate that the registers we have can cover a large proportion of the Finnish Web and of the Parsebank, in particular if we consider them as relatively general categories that can include a number of subclasses, similar to .",
"cite_spans": [],
"ref_spans": [
{
"start": 500,
"end": 507,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Detecting five register classes with a four-class SVM",
"sec_num": "6.1"
},
{
"text": "To improve the Parsebank usability to the maximum, we decided to release two sets of subcorpora: the A collection includes all the Parsebank documents with best-ranking scores assigned by the SVM, where the naive match precision threshold was set to 90%, and the B corpora where the threshold was set to 80% 3 This allows the users to choose the precision with which the register labels are correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collections A and B",
"sec_num": "6.2"
},
{
"text": "The sizes of the sub-corpus collections are presented in Tables 5 and 6 . The A collection consists of altogether 2 million documents classified to four registers. Of these, the URLs of nearly 800,000 documents match the SVM prediction, and more than a million do not have a naive label deduced from the URL. The news sub-corpus is clearly the largest covering nearly 50% of the total, blogs including 0.5 million documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 71,
"text": "Tables 5 and 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Collections A and B",
"sec_num": "6.2"
},
{
"text": "In the B collection, the total number of documents rises to four million, which presents nearly 60% of the Parsebank. Similarly to the A collection, News and Blogs are the largest registerspecific classes. In this version, the number of documents with mismatches between the classifier predictions and the naively assumed registers is evidently higher than in the A, and also the number of documents without any naive label is higher. This naturally implies a lower register prediction quality. Despite this, the B collection offers novel possibilities for researchers. It is a very large corpus, where the registers should be seen as upperlevel, coarse-grained classes. In addition to offering register-specific documents, this collection can be seen as a less noisy version of the Parsebank, which is useful also when the actual registers are not central. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collections A and B",
"sec_num": "6.2"
},
{
"text": "The aim of this article was to explore the development of register sub-corpora for the Finnish Internet Parsebank by training a classifier based on documents for which we had naively deduced the register by the presence of keyword matches in the document URL. We also experimented with several feature sets in detecting these registers and evaluated the validity of the created sub-corpora by analysing their linguistic characteristics and classifier prediction mistakes. First of all, the results showed that our naive method of assuming the document registers is valid. Only the news class proved to include some documents belonging to other, although related, registers. Of the four feature sets we experimented on, the best classification performance was achieved with the bag-of-words and lexical trigram sets. The average F-score of 81% proved that the registers can be relatively reliably identified. In addition, the analysis of the classifier mistakes showed that texts with similar communicative purposes, such as news articles and blog posts that both aim at informing the reader, share linguistic characteristics. This complicates their identification, and attests of the challenges related to defining registers in practice, as already discussed in previous studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "After validating the classifier performance and the quality of the naively assembled sub-corpora, we classified the entire Parsebank using the fourclass model developed with the naive registers. To create a fifth, negative class for the documents not belonging to any of the four known registers, we sorted the documents based on the scores estimated by the SVM and counted a naive classi-fication precision based on the proportion of the documents with matching naive register labels deduced from the URL and classifier predictions. This allowed us to establish a precision threshold, above which we can assume the document labels to be sufficiently reliably predicted. To improve the Parsebank usability, we release to sets of sub-corpora: the A collection includes two million documents classified to four register-specific corpora with a precision above 90%, and the B collection four million documents with a precision above 80%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Naturally, this first sub-corpus release leaves many perspectives and needs for future work. More precisely and reliably defined register classes would further increase the usability of the sub-corpora. Also the number of available registers should be increased, as the none class currently includes still many registers. The naming of the registers and their inner variation would also merit further analyses to decide how to deal with linguistically similar texts that at least in our current system belong to different registers, such as different texts aiming at informing the reader.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Bonnie Webber. 2009. Genre distinctions for discourse in the Penn treebank. In Proceedings of the joint conference of the 47th annual meeting of the ACL and the 4th international joint conference on natural language processing of the AFNLP., pages 674-682. Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://bionlp.utu.fi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://scikit-kearn.org/stable/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These corpora will be put publicly available on the acceptance of this article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been funded by the Kone Foundation. Computational resources were provided by CSC -It center for science.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Crowdsourcing for web genre annotation",
"authors": [
{
"first": "Serge",
"middle": [],
"last": "Noushin Rezapour Asheghi",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Sharoff",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Markert",
"suffix": ""
}
],
"year": 2016,
"venue": "Language Resources and Evaluation",
"volume": "50",
"issue": "3",
"pages": "603--641",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noushin Rezapour Asheghi, Serge Sharoff, and Katja Markert. 2016. Crowdsourcing for web genre annotation. Language Resources and Evaluation, 50(3):603-641.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using grammatical features for automatic regisgter identification in an unrestricted corpus of documents from the Open Web",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Biber",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Egbert",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Research Design and Statistics in Linguistics and Communication Science",
"volume": "2",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Biber and Jesse Egbert. 2015. Using gram- matical features for automatic regisgter identifica- tion in an unrestricted corpus of documents from the Open Web. Journal of Research Design and Statistics in Linguistics and Communication Sci- ence, 2(1).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Longman Grammar of Spoken and Written English",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Biber",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Leech",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Conrad",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Finegan",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Biber, S. Johansson, G. Leech, Susan Conrad, and E. Finegan. 1999. The Longman Grammar of Spoken and Written English. Longman, London.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Exloring the composition of the searchable web: a corpus-based taxonomy of web registers",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Biber",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Egbert",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Davies",
"suffix": ""
}
],
"year": 2015,
"venue": "Corpora",
"volume": "10",
"issue": "1",
"pages": "11--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Biber, Jesse Egbert, and Mark Davies. 2015. Exloring the composition of the searchable web: a corpus-based taxonomy of web registers. Corpora, 10(1):11-45.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Variation across speech and writing",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Biber",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Biber. 1989. Variation across speech and writing. Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dimensions of Register Variation: A Cross-linguistic Comparison",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Biber",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Biber. 1995. Dimensions of Register Vari- ation: A Cross-linguistic Comparison. Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Very high accuracy and fast dependency parsing is not a contradiction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet. 2010. Very high accuracy and fast de- pendency parsing is not a contradiction. In Proceed- ings of the 23rd International Conference on Com- putational Linguistics, COLING '10, pages 89-97, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Level statistics of words: Finding keywords in literary texts and symbolic sequences",
"authors": [
{
"first": "Pedro",
"middle": [],
"last": "Carpena",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Bernaola-Galv\u00e1n",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Hackenberg",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"V"
],
"last": "Coronado",
"suffix": ""
},
{
"first": "Jose",
"middle": [
"L"
],
"last": "Oliver",
"suffix": ""
}
],
"year": 2009,
"venue": "Physical Review E (Statistical, Nonlinear, and Soft Matter Physics)",
"volume": "79",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedro. Carpena, Pedro. Bernaola-Galv\u00e1n, Michael Hackenberg, Ana. V. Coronado, and Jose L. Oliver. 2009. Level statistics of words: Finding keywords in literary texts and symbolic sequences. Physical Review E (Statistical, Nonlinear, and Soft Matter Physics), 79(3):035102.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Genres on the Web: Computational Models and Empirical Studies, chapter Problems in the Use-Centered Development of a Taxonomy of Web Genres",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Crowston",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Kwa\u015bnik",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Rubleske",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "69--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Crowston, Barbara Kwa\u015bnik, and Joseph Rubleske, 2011. Genres on the Web: Computational Models and Empirical Studies, chapter Problems in the Use-Centered Development of a Taxonomy of Web Genres, pages 69-84. Springer Netherlands, Dordrecht.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Developing a bottom-up, user-based method of web register classification",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Egbert",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Biber",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Davies",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of the Association for Information Science and Technology",
"volume": "66",
"issue": "9",
"pages": "1817--1831",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Egbert, Douglas Biber, and Mark Davies. 2015. Developing a bottom-up, user-based method of web register classification. Journal of the Association for Information Science and Technology, 66(9):1817- 1831.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Genre classification of web pages: User study and feasibility analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Meyer Zu Essen",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 27th Annual German Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "256--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Meyer Zu Essen and Barbara Stein. 2004. Genre classification of web pages: User study and fea- sibility analysis. Proceedings of the 27th Annual German Conference on Artificial Intelligence, pages 256-259.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Is part-ofspeech tagging a solved task? an evaluation of pos taggers for the german web as corpus",
"authors": [
{
"first": "Eugenie",
"middle": [],
"last": "Giesbrecht",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Evert",
"suffix": ""
}
],
"year": 2009,
"venue": "Web as Corpus Workshop (WAC5)",
"volume": "",
"issue": "",
"pages": "27--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugenie Giesbrecht and Stefan Evert. 2009. Is part-of- speech tagging a solved task? an evaluation of pos taggers for the german web as corpus. In Web as Corpus Workshop (WAC5), pages 27-36.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "N-grams and the clustering of registers. Empirical Language Research",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gries",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "Cyrus",
"middle": [],
"last": "Shaoul",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gries, John Newman, and Cyrus Shaoul. 2011. N-grams and the clustering of registers. Em- pirical Language Research.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Methodological and analytic frontiers in lexical research, chapter Behavioral Profiles: a fine-grained and quantitative approach in corpus-based lexical semantics",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gries",
"suffix": ""
}
],
"year": 2012,
"venue": "John Benjamins",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gries, 2012. Methodological and analytic frontiers in lexical research, chapter Behavioral Pro- files: a fine-grained and quantitative approach in corpus-based lexical semantics. John Benjamins, Amsterdam and Philadelphia.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An introduction to variable and feature selection. The journal of machine learning research",
"authors": [
{
"first": "Isabelle",
"middle": [
"M"
],
"last": "Guyon",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Elisseeff",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "3",
"issue": "",
"pages": "1157--1182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle M. Guyon and Andre Elisseeff. 2003. An introduction to variable and feature selection. The journal of machine learning research, 3:1157-1182.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Syntactic n-gram collection from a large-scale corpus of Internet Finnish",
"authors": [
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Matti",
"middle": [],
"last": "Luotolahti",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Laippala",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Sixth International Conference Baltic HLT 2014",
"volume": "",
"issue": "",
"pages": "184--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenna Kanerva, Matti Luotolahti, Veronika Laippala, and Filip Ginter. 2014. Syntactic n-gram collec- tion from a large-scale corpus of Internet Finnish. In Proceedings of the Sixth International Conference Baltic HLT 2014, pages 184-191. IOS Press.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Introduction to the special issue on Web as Corpus",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgariff",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgariff and Gregory Grefenstette. 2003. In- troduction to the special issue on Web as Corpus. Computational Linguistics, 29(3).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Genres on the Web: Computational Models and Empirical Studies, chapter Classification of Web Sites at Super-genre Level",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Lindemann",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Littig",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "211--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Lindemann and Lars Littig, 2011. Gen- res on the Web: Computational Models and Em- pirical Studies, chapter Classification of Web Sites at Super-genre Level, pages 211-235. Springer Netherlands, Dordrecht.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Towards universal web parsebanks",
"authors": [
{
"first": "Juhani",
"middle": [],
"last": "Luotolahti",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Laippala",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Dependency Linguistics (Depling'15)",
"volume": "",
"issue": "",
"pages": "211--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juhani Luotolahti, Jenna Kanerva, Veronika Laippala, Sampo Pyysalo, and Filip Ginter. 2015. Towards universal web parsebanks. In Proceedings of the In- ternational Conference on Dependency Linguistics (Depling'15), pages 211-220. Uppsala University.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Genre as social action",
"authors": [
{
"first": "C",
"middle": [
"R"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1984,
"venue": "Quaterly journal of speech",
"volume": "70",
"issue": "2",
"pages": "151--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.R. Miller. 1984. Genre as social action. Quaterly journal of speech, 70(2):151-167.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Web genre benchmark under construction",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Santini",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Sharoff",
"suffix": ""
}
],
"year": 2009,
"venue": "JLCL",
"volume": "24",
"issue": "1",
"pages": "129--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Santini and Serge Sharoff. 2009. Web genre benchmark under construction. JLCL, 24(1):129- 145.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Proceedings of the 10th Web as Corpus Workshop, chapter Automatic Classification by Topic Domain for Meta Data Generation, Web Corpus Evaluation, and Corpus Comparison",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Bildhauer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Sch\u00e4fer and Felix Bildhauer, 2016. Proceed- ings of the 10th Web as Corpus Workshop, chapter Automatic Classification by Topic Domain for Meta Data Generation, Web Corpus Evaluation, and Cor- pus Comparison, pages 1-6. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Textual Patterns: keyword and corpus analysis in language education",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Chistopher",
"middle": [],
"last": "Tribble",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Scott and Chistopher Tribble. 2006. Textual Pat- terns: keyword and corpus analysis in language ed- ucation. Benjamins, Amsterdam.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The web library of Babel: evaluating genre collections",
"authors": [
{
"first": "Serge",
"middle": [],
"last": "Sharoff",
"suffix": ""
},
{
"first": "Zhili",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Markert",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serge Sharoff, Zhili Wu, and Katja Markert. 2010. The web library of Babel: evaluating genre collections.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Preliminary recommendations on corpus typology",
"authors": [
{
"first": "John",
"middle": [],
"last": "Sinclair",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Sinclair. 1996. Preliminary recommendations on corpus typology.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Genre analysis: English in academic and research settings",
"authors": [
{
"first": "John",
"middle": [],
"last": "Swales",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Swales. 1990. Genre analysis: English in aca- demic and research settings. Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Using genres to improve search engines",
"authors": [
{
"first": "Vedrana",
"middle": [],
"last": "Vidulin",
"suffix": ""
},
{
"first": "Mitja",
"middle": [],
"last": "Lustrek",
"suffix": ""
},
{
"first": "Matjax",
"middle": [],
"last": "Gams",
"suffix": ""
}
],
"year": 2007,
"venue": "Workshop \"Towards genre-enabled Search Engines: The impact of NLP",
"volume": "",
"issue": "",
"pages": "45--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vedrana Vidulin, Mitja Lustrek, and Matjax Gams. 2007. Using genres to improve search engines. In Workshop \"Towards genre-enabled Search Engines: The impact of NLP\" at RANLP, pages 45-51.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"content": "<table/>",
"html": null,
"text": "Total number of documents in the naively assembled register classes and in the subset used in SVM training",
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table/>",
"html": null,
"text": "The SVM results achieved with the four feature sets.",
"type_str": "table"
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>Register Narrative Informational description Opinion Interactive discussion Hybrid</td><td>Proportion 31.2% 24,5% 11.2% 6.4% 29.2%</td></tr></table>",
"html": null,
"text": "The most important features for each register class as estimated by the classifier. The original words are italicised and the translations inside quotations. Note that some words are originally in English.",
"type_str": "table"
},
"TABREF5": {
"num": null,
"content": "<table/>",
"html": null,
"text": "Register frequencies in the English Web, as reported by",
"type_str": "table"
},
"TABREF7": {
"num": null,
"content": "<table><tr><td>Register Blogs Forum discussions Encyclopedia News B collection total</td><td>Register total URL match W/o naive label Mismatch 1,122,451 450,202 604,877 67,372 673,678 189,441 394,501 89,736 483,425 74,679 376,586 32,160 2,425,261 542,970 1,735,926 146,365 4,704,815 1,257,292 3,111,890 335,633</td></tr></table>",
"html": null,
"text": "Sizes of the register classes in the A collection",
"type_str": "table"
},
"TABREF8": {
"num": null,
"content": "<table/>",
"html": null,
"text": "Sizes of the register-specific classes in the B collection",
"type_str": "table"
}
}
}
}