{
"paper_id": "W17-0204",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:28:33.516705Z"
},
"title": "Tagging Named Entities in 19th Century and Modern Finnish Newspaper Material with a Finnish Semantic Tagger",
"authors": [
{
"first": "Kimmo",
"middle": [],
"last": "Kettunen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The National Library of Finland",
"location": {
"addrLine": "Saimaankatu 6",
"postCode": "FI-50100",
"region": "Mikkeli"
}
},
"email": "kimmo.kettunen@helsinki.fi"
},
{
"first": "Laura",
"middle": [],
"last": "L\u00f6fberg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lancaster University",
"location": {
"country": "UK"
}
},
"email": "l.lofberg@lancaster.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Named Entity Recognition (NER), search, classification and tagging of names and name like informational elements in texts, has become a standard information extraction procedure for textual data during the last two decades. NER has been applied to many types of texts and different types of entities: newspapers, fiction, historical records, persons, locations, chemical compounds, protein families, animals etc. In general a NER system's performance is genre and domain dependent. Also used entity categories vary a lot (Nadeau and Sekine, 2007). The most general set of named entities is usually some version of three part categorization of locations, persons and corporations. In this paper we report evaluation results of NER with two different data: digitized Finnish historical newspaper collection Digi and modern Finnish technology news, Digitoday. Historical newspaper collection Digi contains 1,960,921 pages of newspaper material from years 1771-1910 both in Finnish and Swedish. We use only material of Finnish documents in our evaluation. The OCRed newspaper collection has lots of OCR errors; its estimated word level correctness is about 70-75%, and its NER evaluation collection consists of 75 931 words (Kettunen and P\u00e4\u00e4kk\u00f6nen, 2016; Kettunen et al., 2016). Digitoday's annotated collection consists of 240 articles in six different sections of the newspaper. Our new evaluated tool for NER tagging is non-conventional: it is a rulebased Finnish Semantic Tagger, the FST (L\u00f6fberg et al., 2005), and its results are compared to those of a standard rulebased NE tagger, FiNER.",
"pdf_parse": {
"paper_id": "W17-0204",
"_pdf_hash": "",
"abstract": [
{
"text": "Named Entity Recognition (NER), search, classification and tagging of names and name like informational elements in texts, has become a standard information extraction procedure for textual data during the last two decades. NER has been applied to many types of texts and different types of entities: newspapers, fiction, historical records, persons, locations, chemical compounds, protein families, animals etc. In general a NER system's performance is genre and domain dependent. Also used entity categories vary a lot (Nadeau and Sekine, 2007). The most general set of named entities is usually some version of three part categorization of locations, persons and corporations. In this paper we report evaluation results of NER with two different data: digitized Finnish historical newspaper collection Digi and modern Finnish technology news, Digitoday. Historical newspaper collection Digi contains 1,960,921 pages of newspaper material from years 1771-1910 both in Finnish and Swedish. We use only material of Finnish documents in our evaluation. The OCRed newspaper collection has lots of OCR errors; its estimated word level correctness is about 70-75%, and its NER evaluation collection consists of 75 931 words (Kettunen and P\u00e4\u00e4kk\u00f6nen, 2016; Kettunen et al., 2016). Digitoday's annotated collection consists of 240 articles in six different sections of the newspaper. Our new evaluated tool for NER tagging is non-conventional: it is a rulebased Finnish Semantic Tagger, the FST (L\u00f6fberg et al., 2005), and its results are compared to those of a standard rulebased NE tagger, FiNER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Digital newspapers and journals, either OCRed or born digital, form a growing global network of data that is available 24/7, and as such they are an important source of information. As the amount of digitized journalistic data grows, also tools for harvesting the data are needed to gather information. Named Entity Recognition has become one of the basic techniques for information extraction of texts since the mid-1990s (Nadeau and Sekine, 2007) . In its initial form NER was used to find and mark semantic entities like person, location and organization in texts to enable information extraction related to this kind of material. Later on other types of extractable entities, like time, artefact, event and measure/numerical, have been added to the repertoires of NER software (Nadeau and Sekine, 2007) . In this paper we report evaluation results of NER for both historical 19 th century Finnish and modern Finnish. Our historical data consists of an evaluation collection out of an OCRed Finnish historical newspaper collection 1771 -1910 . Our present day Finnish evaluation collection is from a Finnish technology newspaper Digitoday 1 . have reported NER evaluation results of the historical Finnish data with two tools, FiNER and ARPA (M\u00e4kel\u00e4, 2014) . Both tools achieved maximal F-scores of about 60 at best, but with many categories the results were much weaker. Word level accuracy of the evaluation collection was about 73%, and thus the data can be considered very noisy. Results for modern Finnish NER have not been reported extensively so far. Silfverberg (2015) mentions a few results in his description of transferring an older version of FiNER to a new version. With modern Finnish data F-scores round 90 are achieved. We use an older version of FiNER in this evaluation as a baseline NE tagger. FiNER is described more in . 
Shortly described it is a rule-based NER tagger that uses morphological recognition, morphological disambiguation, gazetteers (name lists), as well as pattern and context rules for name tagging.",
"cite_spans": [
{
"start": 423,
"end": 448,
"text": "(Nadeau and Sekine, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 781,
"end": 806,
"text": "(Nadeau and Sekine, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 1034,
"end": 1038,
"text": "1771",
"ref_id": null
},
{
"start": 1039,
"end": 1044,
"text": "-1910",
"ref_id": null
},
{
"start": 1245,
"end": 1259,
"text": "(M\u00e4kel\u00e4, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 1561,
"end": 1579,
"text": "Silfverberg (2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Along with FiNER we use a non-standard NER tool, a semantic tagger for Finnish, the FST (L\u00f6fberg et al., 2005) . The FST is not a NER tool as such; it has first and foremost been developed for semantic analysis of full text. The FST assigns a semantic category to each word in text employing a comprehensive semantic category scheme (USAS Semantic Tagset, available in English 2 and also in Finnish 3 ). The Finnish Semantic Tagger (the FST) has its origins in Benedict, the EU-funded language technology project from the early 2000s, the aim of which was to discover an optimal way of catering for the needs of dictionary users in modern electronic dictionaries by utilizing state-of-theart language technology of the early 2000s.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(L\u00f6fberg et al., 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The FST was developed using the English Semantic Tagger as a model. This semantic tagger was developed at the University Centre for Corpus Research on Language (UCREL) at 1 https://github.com/mpsilfve/finerdata/tree/master/digitoday/ner_test_data _annotatated Lancaster University as part of the UCREL Semantic Analysis System (USAS 4 ) framework, and both these equivalent semantic taggers were utilized in the Benedict project in the creation of a context-sensitive search tool for a new intelligent dictionary. The overall architecture of the FST is described in L\u00f6fberg et al. (2005) and the intelligent dictionary application in L\u00f6fberg et al. (2004) .",
"cite_spans": [
{
"start": 566,
"end": 587,
"text": "L\u00f6fberg et al. (2005)",
"ref_id": null
},
{
"start": 634,
"end": 655,
"text": "L\u00f6fberg et al. (2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In different evaluations the FST has been shown to be capable of dealing with most general domains which appear in a modern standard Finnish text. Furthermore, although the semantic lexical resources of the tagger were originally developed for the analysis of general modern standard Finnish, evaluation results have shown that the lexical resources are also applicable to the analysis of both older Finnish text and the more informal type of writing found on the Web. In addition, the semantic lexical resources can be tailored for various domain-specific tasks thanks to the flexible USAS category system. Lexical resources used by the FST consist of two separate lexicons: the semantically categorized single word lexicon contains 45,871 entries and the multiword expression lexicon contains 6,113 entries, representing all parts of speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our aim in the paper is twofold: first we want to evaluate whether a general computational semantic tool like the FST is able to perform a limited semantic task like NER as well as dedicated NER taggers. Secondly, we try to establish the gap on NER performance of a modern Finnish tool with 19 th century low quality OCRed text and good quality modern newspaper text. These two tasks will inform us about the adaptability of the FST to NER in general and also its adaptability to tagging of 19 th century Finnish that has lots of errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our historical Finnish evaluation data consists of 75 931 lines of manually annotated newspaper text. Most of the data is from the last decades of 19 th century. Our earlier NER evaluations with this data have achieved at best F-scores of 50-60 in some name categories .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for the Historical Data",
"sec_num": "2"
},
{
"text": "We evaluated performance of the FST and FiNER using the conlleval 5 script used in Conference on Computational Natural Language Learning (CONLL). Conlleval uses standard measures of precision, recall and F-score, the last one defined as 2PR/(R+P), where P is precision and R recall (Manning and Sch\u00fctze, 1999) . Its evaluation is based on \"exact-match evaluation\" (Nadeau and Sekine, 2007) . In this type of evaluation NER system is evaluated based on the micro-averaged F-measure (MAF) where precision is the percentage of correct named entities found by the NER software; recall is the percentage of correct named entities present in the tagged evaluation corpus that are found by the NER system. In the strict version of evaluation named entity is considered correct only if it is an exact match of the corresponding entity in the tagged evaluation corpus: a result is considered correct only if the boundaries and classification are exactly as annotated (Poibeau and Kosseim, 2001 ). As the FST does not distinguish multipart names with their boundaries only loose evaluation without entity boundary detection was performed with the FST.",
"cite_spans": [
{
"start": 282,
"end": 309,
"text": "(Manning and Sch\u00fctze, 1999)",
"ref_id": "BIBREF10"
},
{
"start": 364,
"end": 389,
"text": "(Nadeau and Sekine, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 958,
"end": 984,
"text": "(Poibeau and Kosseim, 2001",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results for the Historical Data",
"sec_num": "2"
},
{
"text": "The FST tags three different types of names: personal names, geographical names and other proper names. These are tagged with tags Z1, Z2, and Z3, respectively (L\u00f6fberg et al., 2005) . Their top level semantic category in the UCREL scheme is Names & Grammatical Words (Z), which are considered as closed class words (Hirst, 2009 , Rayson et al., 2004 . Z3 is a slightly vague category with mostly names of corporations, categories Z1 and Z2 are clearly cut. Table 1 shows results of the FST's tagging of locations and persons in our evaluation data compared to those of FiNER. We performed two evaluations with the FST: one with the words as they are, and the other with w\uf0e0v substitution. Variation of w and v is one of the most salient features of 19 th century Finnish. Modern Finnish uses w mainly in foreign names like Wagner, but in 19 th century Finnish w was used frequently instead of v in all words. In many other respects the Finnish of late 19 th century does not differ too much from modern Finnish, and it can be analyzed reasonably well with computational tools that have been developed for modern Finnish (Kettunen and P\u00e4\u00e4kk\u00f6nen, 2016 Substitution of w with v decreased number of unknown words to FST with about 2% units and it has a noticeable effect on detection of locations and a small effect on persons. Overall FST recognizes locations better; their recognition with w/v substitution is almost 5 per cent points better than without substitution. FST's performance with locations outperforms that of FiNER's slightly, but FST's performance with person names is 7% points below that of FiNER. Performance of either tagger is not very good, which is expected as the data is very noisy.",
"cite_spans": [
{
"start": 160,
"end": 182,
"text": "(L\u00f6fberg et al., 2005)",
"ref_id": null
},
{
"start": 316,
"end": 328,
"text": "(Hirst, 2009",
"ref_id": "BIBREF2"
},
{
"start": 329,
"end": 350,
"text": ", Rayson et al., 2004",
"ref_id": "BIBREF19"
},
{
"start": 1120,
"end": 1149,
"text": "(Kettunen and P\u00e4\u00e4kk\u00f6nen, 2016",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 458,
"end": 465,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for the Historical Data",
"sec_num": "2"
},
{
"text": "It is evident that the main reason for low NER performance is the quality of the OCRed texts. If we analyze the tagged words with a morphological analyzer (Omorfi v. 0.3 6 ), we can see that wrongly tagged words are recognized clearly worse by Omorfi than those that are tagged right. Figures are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 306,
"end": 313,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for the Historical Data",
"sec_num": "2"
},
{
"text": "The FST: right tag, word unrec. rate 5.6 0.06",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locs Pers",
"sec_num": null
},
{
"text": "The FST: wrong tag, word unrec. rate 44.0 33.3 Table 2 . Percentages of non-recognized words with correctly and wrongly tagged locations and persons -Omorfi 0.3",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Locs Pers",
"sec_num": null
},
{
"text": "Another indication of the effect of textual quality to tagging is comparison of amount of tags with equal texts of different quality. We made tests with three versions of a 100,000 word text material that is different from our historical NER evaluation material but derives from the 19th century newspaper collection as well. One text version was old OCR, another manually corrected OCR version and third a new OCRed version. Besides character level errors also word order errors have been corrected in the two new versions. For these texts we did not have a gold standard NE tagged version, and thus we could only count number of NER tags in different texts. Results are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 681,
"end": 688,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Locs Pers",
"sec_num": null
},
{
"text": "Old OCR As the figures show, there is a 5-12% unit increase in the number of tags, when the quality of the texts is better. Although all of the tags are obviously not right, the increase is still noticeable and suggests that improvement in text quality will also improve finding of NEs. Same kind of results were achieved in with FiNER and ARPA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gain Pers Gain",
"sec_num": null
},
{
"text": "NER experiments with OCRed data in other languages show usually improvement of NER when the quality of the OCRed data has been improved from very poor to somehow better (Lopresti, 2009) . Results of Alex and Burns (2014) imply that with lower level OCR quality (below 70% word level correctness) name recognition is harmed clearly. Packer et al. (2010) report partial correlation of Word Error Rate of the text and achieved NER result; their experiments imply that word order errors are more significant than character level errors. Miller et al. (2000) show that rate of achieved NER performance of a statistical trainable tagger degraded linearly as a function of word error rates. On the other hand, results of Rodriquez et al. (2012) show that manual correction of OCRed material that has 88-92% word accuracy does not increase performance of four different NER tools significantly.",
"cite_spans": [
{
"start": 169,
"end": 185,
"text": "(Lopresti, 2009)",
"ref_id": "BIBREF7"
},
{
"start": 332,
"end": 352,
"text": "Packer et al. (2010)",
"ref_id": null
},
{
"start": 533,
"end": 553,
"text": "Miller et al. (2000)",
"ref_id": "BIBREF12"
},
{
"start": 714,
"end": 737,
"text": "Rodriquez et al. (2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gain Pers Gain",
"sec_num": null
},
{
"text": "As the word accuracy of the historical newspaper material is low, it would be expectable, that somehow better recognition results would be achieved, if the word accuracy was round 80-90% instead of 70-75%. Our informal tests with different quality texts suggest this, too, as do the distinctly different unrecognition rates with rightly and wrongly tagged words. Ehrmann et al. (2016) suggest that application of NE tools on historical texts faces three challenges: i) noisy input texts, ii) lack of coverage in linguistic resources, and iii) dynamics of language. In our case the first obstacle is the most obvious, as was shown. Lack of coverage in linguistic resources e.g. in the form of missing old names in the lexicons of the NE tools is also a considerable source of errors in our case, as our tools are made for modern Finnish. With dynamics of language Ehrmann et al. refer to different rules and conventions for the use of written language in different times. In this respect late 19 th century Finnish is not that different from current Finnish, but obviously also this can affect the results and should be studied more thoroughly.",
"cite_spans": [
{
"start": 363,
"end": 384,
"text": "Ehrmann et al. (2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gain Pers Gain",
"sec_num": null
},
{
"text": "Our second evaluation data is modern Finnish, texts of a technology and business oriented web newspaper, Digitoday. NE tagged Digitoday data 7 has been classified to eight different content sections according to the web publication (http://www.digitoday.fi/). Two sections, Opinion and Entertainment, have been left out of the tagged data. Each content section has 15-40 tagged files (altogether 240 files) that comprise of articles, one article in each file. The content sections are yhteiskunta (Society), bisnes (Business), tiede ja teknologia (Science and technology), data, mobiili (Mobile), tietoturva (Data security), ty\u00f6 ja ura (Job and career) and vimpaimet (Gadgets) We included first 20 articles of each category's tagged data in the evaluation. Vimpaimet had only 15 files, which were all included in the data. Each evaluation data set includes about 2700-4100 lines of text, altogether 31 100 lines with punctuation. About 64% of the evaluation data available in the Github repository was utilized in our evaluation -155 files out of 240. Structure of the tagged files was simple; they contained one word per line and possibly a NER tag. Punctuation marks were tokenized on lines of their own. The resulting individual files had a few tags like <paragraph> and <headline>, which were removed. Also dates of publishing were removed. Table 4 shows evaluation results of the eight sections of the Digitoday data with the FST and FiNER section-by-section. Table 5 shows combined results of the eight sections and Figure 1 shows combined results graphically. Table 5 . Combined results of all Digitoday's sections FiNER achieves best F-scores in most of Digitoday's sections. Out of all the 24 cases, FiNER performs better in 20 cases and the FST in four. The FST performs worst with corporations, but differences with locations compared to FiNER are small. 
Performance differences with persons between FiNER and the FST are also not that great, and the FST performs better than FiNER in three of the sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 1345,
"end": 1352,
"text": "Table 4",
"ref_id": null
},
{
"start": 1465,
"end": 1472,
"text": "Table 5",
"ref_id": null
},
{
"start": 1522,
"end": 1531,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1568,
"end": 1575,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of the FST and FiNER for the Digitoday Data",
"sec_num": "3"
},
{
"text": "Both taggers find locations best and quite evenly in all the Digitoday's sections. Persons are found varyingly by both taggers, and section wise performance is uneven. Especially bad they are found in the Data and Data security sections. One reason for FiNER's bad performance in this section is that many products are confused to persons. In Business and Society sections, persons are found more reliably. One reason for the FST's varying performance with persons is variance of section-by-section usage of Finnish and non-Finnish person names. In some sections mainly Finnish persons are discussed and in some sections mainly foreign persons. The FST recognizes Finnish names relatively well, but it does not cover foreign names as well. The morphological analyzer's components in the FST are also lexically quite old, which shows in some lacking name analyses, such as Google, Facebook, Obama, Twitter, if the words are in inflected forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F-score FST",
"sec_num": null
},
{
"text": "We have shown in this paper results of NE tagging of both historical OCRed Finnish and modern digital born Finnish with two tools, FiNER and a Finnish Semantic Tagger, the FST. FiNER is a dedicated rule-based NER tool for Finnish, but the FST is a general lexicon-based semantic tagger. We set a twofold task for our evaluation. Firstly we wanted to compare a general computational semantics tool the FST to a dedicated NE tagger in named entity search. Secondly we wanted to see, what is the approximate decrease in NER performance of modern Finnish taggers, when they are used with noisy historical Finnish data. Answer to the first question is clear: the FST performs mostly as well as FiNER with persons and locations in modern data. With historical data FiNER outperforms the FST with persons; with locations both taggers perform equally. Corporations were not evaluated in the historical data. In Digitoday's data FiNER was clearly better than FST with corporations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Answer to our second question is more ambiguous. With historical data both taggers achieve F-scores round 57 with locations, the FST 61.5 with w/v substitution. With Digitoday's data F-scores of 70-74.5 are achieved, and thus there is 9-16% point difference in the performance. With persons FiNER's score with both data are quite even in average. It would be expectable, that FiNER performed better with modern data. Some sections of Digitoday data (Scitech, Data, Data Security) are performing clearly worse than others, and FiNER's performance only in Work and Career is on expectable level. It is possible that section wise topical content has some effect in the results. The FST's performance with persons is worst with historical data, but with Digitoday's data it performs much better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Technology behind FST is relatively old, but it has a sound basis and a long history. Since the beginning of the 1990s and within the framework of several different projects, the UCREL team has been developing an English semantic tagger, the EST, for the annotation of both spoken and written data with the emphasis on general language. The EST has also been redesigned to create a historical semantic tagger for English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "To show versatility of the UCREL semantic tagger approach, we list a few other computational analyses where the EST has been applied to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "-stylistic analysis of written and spoken English -analysis and standardization of SMS spelling variation, -analysis of the semantic content and persuasive composition of extremist media, -corpus stylistics, -discourse analysis, -ontology learning, -phraseology, -political science research, -sentiment analysis, and -deception detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "More applications are referenced on UCREL's web pages (http://ucrel.lancs.ac.uk/usas/; http://ucrel.lancs.ac.uk/wmatrix/#apps).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "As can be seen from the list, the approach taken in UCREL that is also behind the FST is robust with regards to linguistic applications. Based on our results with both historical and modern Finnish data, we believe that the EST based FST is also a relevant tool for named entity recognition. It is not optimal for the task in its present form, as it lacks e.g. disambiguation of ambiguous name tags at this stage 8 . On the other hand, the FST's open and well documented semantic lexicons are adaptable to different tasks as they can be updated relatively easily. The FST would also benefit from an updated open source morphological analyzer. Omorfi 9 , for example, would be suitable for use, as it has a comprehensive lexicon of over 400 000 base forms. With an up-to-date Finnish morphological analyzer and disambiguation tool the FST would yield better NER results and in the same time it would be a versatile multipurpose semantical analyzer of Finnish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Overall our results show that a general semantic tool like the FST is able to perform in a restricted semantic task of name recognition almost as well as a dedicated NE tagger. As NER is a popular task in information extraction and retrieval, our results show that NE tagging does not need to be only a task of dedicated NE taggers, but it can be performed equally well with more general multipurpose semantic tools.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "http://ucrel.lancs.ac.uk/usas/USASSemant icTagset.pdf 3 https://github.com/UCREL/Multilingual-USAS/raw/master/Finnish/USASSemanticTags et-Finnish.pdf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://ucrel.lancs.ac.uk/usas/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cnts.ua.ac.be/conll2002/ner/b in/conlleval.txt, author Erik Tjong KimSang, version 2004-01-26",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/flammie/omorfi. This release is from year 2016.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/mpsilfve/finerdata/tree/master/digitoday/ner_test_data _annotatated. Data was collected on the first week of October and November 2016 in two parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Many lexicon entries contain more senses than one, and these are arranged in perceived frequency order. For example, it is common in Finnish that a name of location is also a family name. 9 https://github.com/flammie/omorfi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Work of the first author is funded by the EU Commission through its European Regional Development Fund, and the program Leverage from the EU 2014-2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Diachronic Evaluation of NER Systems on Old Newspapers",
"authors": [],
"year": 2016,
"venue": "Proceedings of the 13th Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "97--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diachronic Evaluation of NER Systems on Old Newspapers. In Proceedings of the 13th Conference on Natural Language Processing (KONVENS 2016), 97-107.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Ontology and the Lexicon",
"authors": [
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graeme Hirst. 2009. Ontology and the Lexicon. ftp://ftp.cs.toronto.edu/pub/gh/Hirst-Ontol- 2009.pdf",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Measuring Lexical Quality of a Historical Finnish Newspaper Collection -Analysis of Garbled OCR Data with Basic Language Technology Tools and Means",
"authors": [
{
"first": "Kimmo",
"middle": [],
"last": "Kettunen",
"suffix": ""
},
{
"first": "Tuula",
"middle": [],
"last": "P\u00e4\u00e4kk\u00f6nen",
"suffix": ""
}
],
"year": 2016,
"venue": "Tenth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimmo Kettunen and Tuula P\u00e4\u00e4kk\u00f6nen. 2016. Measuring Lexical Quality of a Historical Finnish Newspaper Collection -Analysis of Garbled OCR Data with Basic Language Technology Tools and Means. LREC 2016, Tenth International Conference on Language Resources and Evaluation. http://www.lrec-",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Modern Tools for Old Content -in Search of Named Entities in a Finnish OCRed Historical Newspaper Collection 1771-1910",
"authors": [
{
"first": "Kimmo",
"middle": [],
"last": "Kettunen",
"suffix": ""
},
{
"first": "Eetu",
"middle": [],
"last": "M\u00e4kel\u00e4",
"suffix": ""
},
{
"first": "Juha",
"middle": [],
"last": "Kuokkala",
"suffix": ""
},
{
"first": "Teemu",
"middle": [],
"last": "Ruokolainen",
"suffix": ""
},
{
"first": "Jyrki",
"middle": [],
"last": "Niemi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimmo Kettunen, Eetu M\u00e4kel\u00e4, Juha Kuokkala, Teemu Ruokolainen and Jyrki Niemi. 2016. Modern Tools for Old Content -in Search of Named Entities in a Finnish OCRed Historical Newspaper Collection 1771-1910. Krestel, R., Mottin, D. and M\u00fcller, E. (eds.), Proceedings of Conference \"Lernen, Wissen, Daten, Analysen\", LWDA 2016, http://ceur- ws.org/Vol-1670/",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using a semantic tagger as dictionary search tool",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "L\u00f6fberg",
"suffix": ""
},
{
"first": "Jukka-Pekka",
"middle": [],
"last": "Juntunen",
"suffix": ""
},
{
"first": "Asko",
"middle": [],
"last": "Nyk\u00e4nen",
"suffix": ""
}
],
"year": 2004,
"venue": "11th EURALEX (European Association for Lexicography) International Congress Euralex",
"volume": "",
"issue": "",
"pages": "127--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura L\u00f6fberg, Jukka-Pekka Juntunen, Asko Nyk\u00e4nen, Krista Varantola, Paul Rayson and Dawn Archer. 2004. Using a semantic tagger as dictionary search tool. In 11th EURALEX (European Association for Lexicography) International Congress Euralex 2004: 127- 134.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Jukka-Pekka Juntunen, Asko Nyk\u00e4nen and Krista Varantola. 2005. A semantic tagger for the Finnish language",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "L\u00f6fberg",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Piao",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Rayson",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura L\u00f6fberg, Scott Piao, Paul Rayson, Jukka- Pekka Juntunen, Asko Nyk\u00e4nen and Krista Varantola. 2005. A semantic tagger for the Finnish language.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Optical character recognition errors and their effects on natural language processing",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Lopresti",
"suffix": ""
}
],
"year": 2009,
"venue": "International Journal on Document Analysis and Recognition",
"volume": "12",
"issue": "3",
"pages": "141--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Lopresti. 2009. Optical character recognition errors and their effects on natural language processing. International Journal on Document Analysis and Recognition, 12(3): 141-151.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Combining a REST Lexical Analysis Web Service with SPARQL for Mashup Semantic Annotation from Text",
"authors": [
{
"first": "Eetu",
"middle": [],
"last": "M\u00e4kel\u00e4",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eetu M\u00e4kel\u00e4. 2014. Combining a REST Lexical Analysis Web Service with SPARQL for Mashup Semantic Annotation from Text.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Semantic Web: ESWC 2014 Satellite Events",
"authors": [
{
"first": "V",
"middle": [],
"last": "Presutti",
"suffix": ""
}
],
"year": null,
"venue": "Lecture Notes in Computer Science",
"volume": "8798",
"issue": "",
"pages": "424--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Presutti, V. et al. (Eds.), The Semantic Web: ESWC 2014 Satellite Events. Lecture Notes in Computer Science, vol. 8798, Springer: 424- 428.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Foundations of Statistical Language Processing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Language Processing. The MIT Press, Cambridge, Massachusetts.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Named Entity Recognition: Fallacies, challenges and opportunities",
"authors": [
{
"first": "M\u00f3nica",
"middle": [],
"last": "Marrero",
"suffix": ""
},
{
"first": "Juli\u00e1n",
"middle": [],
"last": "Urbano",
"suffix": ""
},
{
"first": "Sonia",
"middle": [],
"last": "S\u00e1nchez-Cuadrado",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Morato",
"suffix": ""
},
{
"first": "Juan Miguel",
"middle": [],
"last": "G\u00f3mez-Berb\u00eds",
"suffix": ""
}
],
"year": 2013,
"venue": "Computer Standards & Interfaces",
"volume": "35",
"issue": "5",
"pages": "482--489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00f3nica Marrero, Juli\u00e1n Urbano , Sonia S\u00e1nchez- Cuadrado , Jorge Morato and Juan Miguel G\u00f3mez-Berb\u00eds. 2013. Named Entity Recognition: Fallacies, challenges and opportunities. Computer Standards & Interfaces 35(5): 482-489.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Named entity extraction from noisy input: Speech and OCR",
"authors": [
{
"first": "David",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Boisen",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Stone",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 6th Applied Natural Language Processing Conference",
"volume": "",
"issue": "",
"pages": "316--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Miller, Sean Boisen, Richard Schwartz, Rebecca Stone and Ralph Weischedel. 2000. Named entity extraction from noisy input: Speech and OCR. Proceedings of the 6th Applied Natural Language Processing Conference: 316-324, Seattle, WA. http://www.anthology.aclweb.org /A/A00/A00-1044.pdf",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Survey of Named Entity Recognition and Classification",
"authors": [
{
"first": "David",
"middle": [],
"last": "Nadeau",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2007,
"venue": "Linguisticae Investigationes",
"volume": "30",
"issue": "1",
"pages": "3--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Nadeau and Satoshi Sekine. 2007. A Survey of Named Entity Recognition and Classification. Linguisticae Investigationes 30(1):3-26.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Extracting Person Names from Diverse and Noisy OCR Tex",
"authors": [],
"year": null,
"venue": "Proceedings of the fourth workshop on Analytics for noisy unstructured text data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Extracting Person Names from Diverse and Noisy OCR Tex. Proceedings of the fourth workshop on Analytics for noisy unstructured text data. Toronto, ON, Canada: ACM. http://dl.acm.org/citation.cfm? id=1871845.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Lexical Coverage Evaluation of Large-scale Multilingual Semantic Lexicons for Twelve Languages",
"authors": [
{
"first": "Mahmoud",
"middle": [],
"last": "El-Haj",
"suffix": ""
},
{
"first": "Ricardo-Mar\u00eda",
"middle": [],
"last": "Jim\u00e9nez",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Kren",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "L\u00f6fberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahmoud El-Haj, Ricardo-Mar\u00eda Jim\u00e9nez, Dawn Knight, Michal Kren, Laura L\u00f6fberg, Rao Muhammad Adeel Nawab, Jawad Shafi, Phoey Lee Teh and Olga Mudraya. 2016. Lexical Coverage Evaluation of Large-scale Multilingual Semantic Lexicons for Twelve Languages. Proceedings of LREC. http://www.lrec-",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Proper Name Extraction from Non-Journalistic Texts",
"authors": [
{
"first": "Thierry",
"middle": [],
"last": "Poibeau",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Kosseim",
"suffix": ""
}
],
"year": 2001,
"venue": "Language and Computers",
"volume": "37",
"issue": "",
"pages": "144--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thierry Poibeau and Leila Kosseim. 2001. Proper Name Extraction from Non- Journalistic Texts. Language and Computers, 37: 144-157.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The UCREL semantic analysis system",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Rayson",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Archer",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Piao",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Mcenery",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the workshop on Beyond Named Entity Recognition Semantic labelling for NLP tasks in association with 4th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Rayson, Dawn Archer, Scott Piao and Tony McEnery. 2004. The UCREL semantic analysis system. Proceedings of the workshop on Beyond Named Entity Recognition Semantic labelling for NLP tasks in association with 4th International Conference on Language Resources and Evaluation (LREC 2004): 7-12.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Comparison of Named Entity Recognition Tools for raw OCR text",
"authors": [
{
"first": "Kepa Joseba",
"middle": [],
"last": "Rodriquez",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Blanke",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Luszczynska",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of KONVENS 2012 (LThist 2012 wordshop)",
"volume": "",
"issue": "",
"pages": "410--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kepa Joseba Rodriquez, Mike Bryant, Tobias Blanke and Magdalena Luszczynska. 2012. Comparison of Named Entity Recognition Tools for raw OCR text. Proceedings of KONVENS 2012 (LThist 2012 wordshop), Vienna September 21: 410-414.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Reverse Engineering a Rule-Based Finnish Named Entity Recognizer",
"authors": [
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miikka Silfverberg. 2015. Reverse Engineering a Rule-Based Finnish Named Entity Recognizer. https://kitwiki.csc.fi/twiki/pu b/FinCLARIN/KielipankkiEventNER Workshop2015/Silfverberg_presen tation.pdf.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Combined results of all Digitoday's sections"
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Tag</td><td>F-score</td><td>F-</td><td>Found</td><td>Found</td></tr><tr><td/><td>FST</td><td>score</td><td>tags</td><td>tags</td></tr><tr><td/><td/><td>FiNER</td><td>FST</td><td>FiNER</td></tr><tr><td>Pers</td><td>51.1</td><td>58.1</td><td>1496</td><td>2681</td></tr><tr><td>Locs</td><td>56.7</td><td>57.5</td><td>1253</td><td>1541</td></tr><tr><td>Pers</td><td>52.2</td><td>N/A</td><td>1566</td><td>N/A</td></tr><tr><td>w/v</td><td/><td/><td/><td/></tr><tr><td>Locs</td><td>61.5</td><td>N/A</td><td>1446</td><td>N/A</td></tr><tr><td>w/v</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">Table 1. Evaluation of the FST and FiNER with</td></tr><tr><td colspan=\"5\">loose criteria and two categories in the historical</td></tr><tr><td colspan=\"5\">newspaper collection. W/v stands for w to v</td></tr><tr><td/><td colspan=\"3\">substitution in words.</td><td/></tr></table>",
"html": null,
"text": ").",
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "",
"num": null
}
}
}
}