Add files using upload-large-folder tool
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_md/Initial_manuscript.md +591 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_tex/Initial_manuscript.tex +483 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TFZGxtsyk3/Initial_manuscript_md/Initial_manuscript.md +333 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TFZGxtsyk3/Initial_manuscript_tex/Initial_manuscript.tex +199 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_md/Initial_manuscript.md +767 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_tex/Initial_manuscript.tex +656 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/UcWZrerHDCe/Initial_manuscript_md/Initial_manuscript.md +527 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/UcWZrerHDCe/Initial_manuscript_tex/Initial_manuscript.tex +338 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_md/Initial_manuscript.md +417 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_tex/Initial_manuscript.tex +289 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_md/Initial_manuscript.md +849 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_tex/Initial_manuscript.tex +750 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_md/Initial_manuscript.md +747 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_tex/Initial_manuscript.tex +682 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/YTVwaoG0Mi/Initial_manuscript_md/Initial_manuscript.md +523 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/YTVwaoG0Mi/Initial_manuscript_tex/Initial_manuscript.tex +563 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_md/Initial_manuscript.md +949 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_tex/Initial_manuscript.tex +587 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_md/Initial_manuscript.md +847 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_tex/Initial_manuscript.tex +799 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_md/Initial_manuscript.md +733 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_tex/Initial_manuscript.tex +724 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/tcxy7vRVKlg/Initial_manuscript_md/Initial_manuscript.md +749 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/tcxy7vRVKlg/Initial_manuscript_tex/Initial_manuscript.tex +680 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_md/Initial_manuscript.md +571 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_tex/Initial_manuscript.tex +509 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wEJaCIkgLG/Initial_manuscript_md/Initial_manuscript.md +1121 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wEJaCIkgLG/Initial_manuscript_tex/Initial_manuscript.tex +748 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_md/Initial_manuscript.md +923 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_tex/Initial_manuscript.tex +780 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_md/Initial_manuscript.md +781 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_tex/Initial_manuscript.tex +728 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/xbPTfBIUby/Initial_manuscript_md/Initial_manuscript.md +411 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/xbPTfBIUby/Initial_manuscript_tex/Initial_manuscript.tex +412 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/-eCgVcWbnzE/Initial_manuscript_md/Initial_manuscript.md +79 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/-eCgVcWbnzE/Initial_manuscript_tex/Initial_manuscript.tex +51 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_md/Initial_manuscript.md +197 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_tex/Initial_manuscript.tex +171 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_md/Initial_manuscript.md +105 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_tex/Initial_manuscript.tex +103 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_md/Initial_manuscript.md +150 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_tex/Initial_manuscript.tex +190 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_md/Initial_manuscript.md +131 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_tex/Initial_manuscript.tex +108 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_md/Initial_manuscript.md +141 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_tex/Initial_manuscript.tex +115 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/c4A2txzl82P/Initial_manuscript_md/Initial_manuscript.md +89 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/c4A2txzl82P/Initial_manuscript_tex/Initial_manuscript.tex +70 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_md/Initial_manuscript.md +201 -0
- RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_tex/Initial_manuscript.tex +183 -0
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,591 @@
# Gamli - Icelandic Oral History Corpus: Design, Collection and Evaluation

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract

This paper presents Gamli, an ASR corpus for Icelandic oral histories, the first of its kind for this language, derived from the Ísmús ethnographic collection. Corpora for oral histories differ in various ways from corpora for general ASR: they contain spontaneous speech, multiple speakers per channel, noisy environments, the effects of historic recording equipment, and typically a large proportion of elderly speakers. Gamli contains 188 hours of aligned speech and transcripts, split into a training set and a test set. We describe our approach for creating the transcripts, through both Optical Character Recognition (OCR) of previous transcripts and post-editing of ASR output. We also describe our approach for aligning, segmenting, and filtering the corpus, and finally for training a Kaldi ASR system, which achieves a 22.1% word error rate (WER) on the Gamli test set, a substantial improvement over the 53.4% WER of a baseline general ASR system for Icelandic.

## 1 Introduction

Icelandic open-licensed speech corpora have in recent years grown in volume and number; there are now Talrómur (Sigurgeirsson et al., 2021), Málrómur (Steingrímsson et al., 2017), Samrómur (Mollberg et al., 2020) and the Althingi's Parliamentary Speeches corpus (Helgadóttir et al., 2017b; Nikulásdóttir et al., 2018), to name a few. However, both historical speech and older speakers are underrepresented in these corpora. For instance, regarding older speakers, in Samrómur, the largest open-licensed ASR corpus for Icelandic (2233 hours in the latest release (Hedström et al., 2022)), only 4.8% of speakers are over 60 years old.

Gamli, the oral history speech corpus presented in this paper, differs from these corpora in many ways. Firstly, it contains predominantly spontaneous speech in the form of interviews; secondly, it has a very high ratio of older speakers (94.8% of speakers are over 60 years old); thirdly, background noise is common, as are noise artefacts from historical recording equipment; and lastly, historic dialects (word choice and accent) are much more prevalent than in existing corpora.

The corpus contains 188 hours of aligned speech and transcripts split into a training set and a test set. This data, based on valuable historical 20th century recordings stored at the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies, is therefore an important addition to the existing Icelandic speech corpora.${}^{1}$

The custom ASR system presented in this paper, along with the corpus, will in due course be used to automatically transcribe all of the ethnographic audio recordings stored at the institute. The transcripts will then be made available on the online portal Ísmús${}^{2}$ and paired with the respective recordings.

## 2 Related Work

For many years, ASR systems have been trained on unaligned transcriptions (Panayotov et al., 2015) and even approximate transcriptions of spontaneous speech (Jang and Hauptmann, 1999). In the case of Icelandic ASR for spontaneous speech, there has been an ongoing project (Helgadóttir et al., 2017b; Helgadóttir et al., 2017a) to align and filter Icelandic parliamentary transcripts for ASR in order to reduce the manual work involved in transcribing parliamentary proceedings. Creating these corpora involves text normalization, time-alignment, and filtering of utterances.

---

${}^{1}$ The corpus is available under an open license at https://anonymo.us/gamli

${}^{2}$ www.ismus.is

---

While ASR for oral histories is new for Icelandic, it is already in use for other languages. The first large project was the MALACH project (Psutka et al., 2002) in 2002, where ASR transcriptions were used for indexing oral history archives and making them more searchable. However, some authors still consider oral history speech recognition an open problem (Picheny et al., 2019; Gref et al., 2020), and a recent study (Gref et al., 2022) found that the human word error rate was 8.7% on a German oral history corpus (taking into account case-sensitivity and annotation of hesitations), whereas Lippmann (1997) found a human word error rate of less than 4% on the Switchboard corpus of spontaneous telephony speech and less than 0.4% on the Wall Street Journal corpus of clear read speech. This suggests that the minimum possible word error rate for ASR might be much higher on oral histories than on cleaner speech corpora.

One other factor that makes oral history ASR an interesting challenge is the particularly high ratio of older speakers. Vipperla et al. (2008) note that for general ASR models, WER correlates strongly with age, even throughout a single speaker's lifetime. This could be caused by multiple changes in aging voices, such as a slower speaking rate, changes in F0 (a decrease for males and an increase for females), and increases in jitter and shimmer (all from Vipperla et al. (2008)), some of which could be mitigated by increasing the number of older speakers in the training set. However, other changes might not be so easily addressed, such as a reduction of tongue and jaw strength and an increase in breathiness (also from Vipperla et al. (2008)), which could reduce articulatory precision.

## 3 Origin of the corpus

The ethnography collection of the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies contains more than 2,300 hours of audio recordings of oral heritage and traditions, with a little less than 2,500 interviewees. The oldest material consists of recordings made on wax cylinders in the early 20th century, and the collection is continually expanding, with new material being added every year.

The bulk of the collection, however, consists of recordings from the 1960s and 1970s, mainly the work of three collectors. Their focus was to gather ethnographic material from the whole country, first and foremost from older generations: the majority of the informants were born before or around the turn of the 20th century.

This resulted in an extensive collection of legends and fairy tales, accounts of beliefs and customs, poems, hymns, nursery rhymes, Icelandic ballads (rímur), occasional verses and more, with the material being variously spoken, sung or chanted. Apart from recited verse and that which is sung or chanted, the speech is spontaneous. Accompanying the recordings is detailed metadata on the speaker and the time and location of the recording, as well as various other parameters such as genre (for different kinds of verse or prose material, e.g. poems or nursery rhymes, fairy tales or legends), mode of performance (sung, chanted, spoken), key words, content (short summary, description), and tale-types and motifs (in folktales and legends).

### 3.1 Speaker distribution in the collection

In their work the collectors mainly relied on a snowball method of sorts, asking speakers to point them to other possible informants, as well as contacting teachers or clergy to enquire about interesting subjects in their region. Speaker profession is often listed in the metadata, but there is no information about education; most of the speakers were common people, i.e. workers, farmers, fishermen, housewives etc., with little formal education.

Gender was probably not a decisive factor at the outset, and the overall ratio is 57.6% male speakers to 42.4% female, i.e. based on the number of speakers. However, if audio length per gender is considered, the difference increases considerably: 1504 hours (65%) for men vs. 821 hours (35%) for women.

As mentioned, the data in the collection also stands out in that the age of the speakers is higher than in other existing Icelandic corpora. The oldest speaker in the collection was 105 years old at the time of recording in 1954, and the oldest speaker with regard to date of birth was born in 1827 and recorded in 1904 (not included in the Gamli corpus). In fact, 72.4% of the speakers are older than 63 and 31.4% are 71-80 years old. In Gamli this ratio is substantially higher, as detailed in Section 4.

### 3.2 Regional features in pronunciation

The speakers in the collection are from all over the country and therefore reflect the various regional differences in pronunciation much better than recently recorded speech corpora such as Samrómur, since these regional features either have already more or less disappeared or are gradually disappearing. Amongst these features are, for example, the "hard" pronunciation of /p, t, k/ (still a distinct feature) and the voiced pronunciation of /l, m, n/ before /p, t, k/ in North Iceland, the *rn*-, *rl*-pronunciation in South-East Iceland, monophthongs before /ng, nk/ in the North-West, etc.

While these features are not tagged in any way in the Gamli corpus, the ASR system trained on the corpus seems to perform well on them, with the possible exception of labial or velar stops before [ð], such as [hapðɪ] instead of [havðɪ] for *hafði*, or [lakðɪ] instead of [laɣðɪ] for *lagði*. We have, however, not inspected this systematically, so further investigation is needed to state the precision with any certainty.

### 3.3 Recording procedure

Most of the recordings were made at the speakers' homes, in many cases in homes for the elderly, and carried out by the interviewer. It was not uncommon that other people, e.g. children or spouses, were present during the recording sessions, although in most cases they were not meant to play a part in the recording. Because of this, and for various other reasons, some background noise and disturbances occur in the recordings, e.g. children playing, traffic sounds, phones ringing etc., but these are generally not prominent.

Many of the recordings were made using high-quality reel-to-reel tape recording devices, although some were made by amateurs who were not as well equipped, whereas a part of the recordings come from the recording studios of The Icelandic National Broadcasting Service (Þorsteinsdóttir, 2013).

The digitization of these recordings began in the late 1990s and continued into the early 2000s, with the recordings being converted into WAV format as well as compressed MP3s for online use.

## 4 Corpus content

Gamli contains 188 hours of transcribed audio, broken down into:

1. ~145 hours from optical character recognition (OCR) of previous transcriptions in various formats

2. ~43 hours of new transcriptions (post-edited from ASR output)

The 145 hours include ~8 hours defined as a test set, which was manually reviewed, corrected, and annotated with speaker IDs and time alignments in the annotation tool *ELAN*. The test set contains recordings of 10 speakers, 5 women (239 minutes) and 5 men (219 minutes), plus the interviewers (4 men), and serves for evaluating the system's performance.

A validation set has not been defined for the corpus, as the acoustic model training in Kaldi (Povey et al., 2011) used a random sample of the training corpus for validation.


| Data split | Hours | Male speakers | Female speakers | Total speakers |
|------------|-------|---------------|-----------------|----------------|
| Training   | 180   | 115           | 85              | 200            |
| Test       | 8     | 5             | 5               | 10             |

Table 1: Data splits in Gamli

### 4.1 Speaker distribution in the corpus

The corpus contains 210 unique speakers, 90 women and 120 men (plus the interviewers: 13 men and 1 woman). At the outset we aimed to have the gender ratio as equal as possible in the acoustic training data, but with three men surpassing 20 hours of speech each (one reaching 29 hours) and accounting for more than one third of the entire data, that picture became quite distorted. As a result, the gender bias in the corpus is even greater than in the collection itself, which is unfortunate but simply reflects the data that was at hand, i.e. 73.5% vs. 26.5%, cf. Section 4.2.

The ages range from 38 to 99, but most of the speakers are 60+ (94.8%), as shown in Figure 1, and the average age of the speakers is 77 years. This ratio is unprecedented in all existing corpora for Icelandic speech (cf. 4.8% in Samrómur, as referred to in Section 1) and makes Gamli an important addition to that collection.

### 4.2 Corpus compilation

As mentioned, the largest part of the corpus, about 145 hours, stems from OCR of transcriptions held at the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies. These transcripts, generated over several decades, are not all in the same format (e.g. typewritten, dot-matrix printed, printed Word documents) and therefore first needed to be processed, i.e. scanned and OCRed (with results that varied depending on the format). The transcripts were then catalogued and paired with the respective recordings.

Figure 1: Age distribution of unique speakers in the training set

Figure 2: Age distribution of unique speakers in the test set

Once this data had been processed, the first ASR output was produced and manually corrected. During that process it became evident that some of the recordings were ill suited at this stage, as they often contained poetry, nursery rhymes and in some cases singing, where the ASR system could not be expected to do well since the focus was on spontaneous speech, where it performed much better (cf. Section 6).

As a result, we made use of the detailed metadata search parameters in the Ísmús portal in order to select the best in-domain data for further training. We mainly relied on the so-called form parameter (genre) to exclude everything but spontaneous speech. This gave much better results and yielded the 43 hours of post-edited data mentioned in Section 4.

### 4.3 Normalizing, aligning, segmenting and filtering the transcripts for ASR training

A large part of the transcripts did not have time alignments, and some had OCR spelling errors. We therefore had to process the utterances before using them to train the acoustic model. To do this, we first normalized all sentences using the Regina normalizer developed in (Sigurðardóttir, 2021), before aligning the transcripts to the audio and segmenting them. This step also removes sections with out-of-vocabulary words, which should account for errors stemming from the OCR.

We then filtered those segments, removing any that were deemed unintelligible to an intermediate ASR system. For this, a biased language model is applied to each segment, built from the words that appear in the utterance's transcription; segments where the system could not decode the words appearing in the transcript are then removed. This is an iterative process: an acoustic model is used to filter the training data, that data is used to train a new acoustic model, and the new model can then be used to re-align and re-filter the training data. These segmenting and filtering steps were all done with the Kaldi scripts `segment_long_utterances_nnet3`${}^{3}$ and `clean_and_segment_data_nnet3`.${}^{4}$

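The core of the biased-LM trick can be sketched in a few lines. This is a toy illustration under our own assumptions: the function name `biased_unigram_lm` is ours, and Kaldi's cleanup scripts actually build biased n-gram decoding graphs rather than unigram tables. The point it shows is that most of the probability mass is concentrated on the words of the utterance's own transcript, so a decoder using this model only succeeds where the audio really contains those words.

```python
from collections import Counter

def biased_unigram_lm(transcript_words, vocab, bias=0.9):
    """Toy per-utterance unigram LM: a `bias` fraction of the probability
    mass is distributed over the words of this utterance's transcript
    (proportional to their counts); the remainder is spread uniformly over
    the vocabulary so that other words stay decodable. Illustration only --
    Kaldi's cleanup scripts build biased n-gram decoding graphs instead."""
    counts = Counter(transcript_words)
    total = sum(counts.values())
    uniform = (1.0 - bias) / len(vocab)  # smoothing mass shared by every vocab word
    return {w: uniform + bias * counts.get(w, 0) / total for w in vocab}
```

Decoding a segment against such a model, then checking whether the transcript's words were actually recovered, is what drives the keep/discard decision described above.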
## 5 Models (and out-of-domain data)

We trained a hybrid ASR system in Kaldi; that is, the language model and acoustic model were trained separately, as opposed to an end-to-end system. For the acoustic and language models in the custom ASR system, we expanded the training sets with various out-of-domain data, described in the following sections.

### 5.1 Acoustic Model

An acoustic model learns to map audio to a sequence of phonemes. Our acoustic model is a TDNN (time-delay neural network) chain model trained in Kaldi. It was trained on the in-domain data described above, but also on various out-of-domain data, which included the following datasets:

---

${}^{3}$ https://github.com/kaldi-asr/kaldi/blob/master/egs/wsj/s5/steps/cleanup/segment_long_utterances_nnet3.sh

${}^{4}$ https://github.com/kaldi-asr/kaldi/blob/master/egs/wsj/s5/steps/cleanup/clean_and_segment_data_nnet3.sh

---

1. Althingi's Parliamentary Speeches,${}^{5}$ a corpus of 514.5 hours of recorded speech from the Icelandic parliament (Helgadóttir et al., 2017a)

2. 114.6 hours of speech from the first Samrómur release,${}^{6}$ leaving out children

3. 173.1 hours of unverified Samrómur data,${}^{7}$ containing only speech from 50+ year old men and 60+ year old women

4. 228.2 hours of the RÚV TV unknown speakers dataset${}^{8}$

Data augmentation was also used to triple the entire training set: we added artificial noise and reverberation. For noisy data sets, e.g. call-center data sets, this is reported to give better results than speed perturbation (Ko et al., 2017), and, as described earlier, background noise and disturbances are not uncommon in the data.

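The additive-noise part of this augmentation can be sketched as follows. This is a simplified stand-in under our own assumptions: real pipelines mix in recorded noises and room impulse responses rather than white noise, and `add_noise` is a hypothetical helper, not part of any toolkit.

```python
import random

def add_noise(samples, snr_db):
    """Add white Gaussian noise to a waveform (a list of float samples) at a
    target signal-to-noise ratio in dB. The noise power is derived from the
    signal power: SNR_dB = 10 * log10(P_signal / P_noise). Illustration only;
    real augmentation uses recorded noise and reverberation."""
    sig_power = sum(s * s for s in samples) / len(samples)
    noise_power = sig_power / (10 ** (snr_db / 10))
    sigma = noise_power ** 0.5  # standard deviation of the added noise
    return [s + random.gauss(0.0, sigma) for s in samples]
```

Applying such perturbations to each utterance, with different noise draws, is how a training set can be tripled without new recordings.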
### 5.2 Language Model
|
| 346 |
+
|
| 347 |
+
A language model is necessary for outputting coherent texts, it learns a probability distribution for word sequences from a training corpus. The language model is an n-gram language model; 3- gram for decoding and 4-gram for rescoring. It was trained on in-domain data from the Gamli training set described in 4.2, both already existing ones and those resulting from the proofread ASR output. The out-of-domain data stems from the following sources:
1. The Icelandic Gigaword Corpus (IGC) (Steingrímsson et al., 2018). We use word forms from the 2022 version of the IGC.${}^{9}$

2. Ethnographic data from the National Museum of Iceland in Sarpur.${}^{10}$

3. Audio file descriptions from Ísmús${}^{11}$ for their content.

4. Place name data from the Icelandic Place Name Collection.${}^{12}$
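The counting scheme behind such n-gram models can be illustrated with a minimal pure-Python sketch (a toy trigram with add-one smoothing over hypothetical sentences; the actual models were built with standard toolkit pipelines, not this code). Decoding with the smaller 3-gram and rescoring lattices with the larger 4-gram trades decoding speed against accuracy.

```python
from collections import Counter

def ngrams(tokens, n):
    """Yield all n-grams of a token sequence, with sentence-boundary padding."""
    padded = ["<s>"] * (n - 1) + tokens + ["</s>"]
    for i in range(len(padded) - n + 1):
        yield tuple(padded[i : i + n])

class NgramLM:
    """Count-based n-gram model with add-one smoothing (illustrative only)."""

    def __init__(self, sentences, n=3):
        self.n = n
        self.ngram_counts = Counter()
        self.context_counts = Counter()
        self.vocab = set()
        for sent in sentences:
            tokens = sent.split()
            self.vocab.update(tokens)
            for gram in ngrams(tokens, n):
                self.ngram_counts[gram] += 1
                self.context_counts[gram[:-1]] += 1

    def prob(self, word, context):
        """P(word | context) with add-one smoothing over the vocabulary."""
        context = tuple(context[-(self.n - 1):])
        v = len(self.vocab) + 1  # +1 for the </s> symbol
        num = self.ngram_counts[context + (word,)] + 1
        den = self.context_counts[context] + v
        return num / den

# Toy training sentences (hypothetical, not corpus data).
lm = NgramLM(["hún fór heim", "hann fór heim", "hún fór út"], n=3)
```

Here `lm.prob("heim", ("hann", "fór"))` is larger than `lm.prob("út", ("hann", "fór"))` because only "heim" was observed after that context.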
### 5.3 Vocabulary and Pronunciation Dictionary
The pronunciation dictionary maps words to sequences of phonemes. For the vocabulary we used:
1. All the word forms from the Database of Icelandic Morphology (Bjarnadóttir et al., 2019).

2. OOV words from audio file descriptions in Ísmús.

3. Vocabulary from the training set (only the data that was manually transcribed, not the OCR data); manually checked and added where appropriate.

4. OOV words from Sarpur; manually checked and added where appropriate.
To obtain the phonemic transcription of each word, a G2P model based on the Icelandic Pronunciation Dictionary for Language Technology${}^{13}$ was used.
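The lookup-with-fallback idea can be sketched as follows; the mini-lexicon and single-letter rules below are hypothetical placeholders, not entries from the actual pronunciation dictionary or the trained G2P model (which predicts phoneme sequences statistically rather than letter by letter):

```python
# Hypothetical mini-lexicon; the real system uses the Icelandic Pronunciation
# Dictionary for Language Technology plus a trained G2P model for OOV words.
LEXICON = {
    "heim": ["h", "ei", "m"],
    "fór": ["f", "ou", "r"],
}

# Naive single-letter fallback rules (purely illustrative).
LETTER_TO_PHONE = {"h": "h", "ú": "u", "n": "n"}

def phonemize(word):
    """Look the word up in the lexicon; otherwise fall back to letter rules."""
    if word in LEXICON:
        return LEXICON[word]
    return [LETTER_TO_PHONE.get(ch, ch) for ch in word]
```

For example, `phonemize("heim")` returns the lexicon entry, while an unseen word falls through to the letter rules.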
## 6 Evaluation
To assess the final ASR system's performance on the test set, we use the Samrómur TDNN model as a baseline, a model trained on a well-known dataset of read Icelandic speech. While the Samrómur baseline achieved 53.4% WER on the Gamli test set, the final ASR system performed much better, achieving 22.1% WER on the same set, as shown in Table 2. This comparison covers the two complete systems, each including its own acoustic model, language model, and vocabulary.
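WER as reported here is the word-level Levenshtein distance (substitutions, insertions, and deletions) divided by the number of reference words; a minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

For instance, one substitution and one deletion against a four-word reference give a WER of 0.5.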
To investigate the differences between the two systems, we also compare their performance with respect to demographic information in Figure 3. As stated earlier, the test set contains 10 speakers and a total of 8 hours of audio.
There appears to be a possible slight correlation between age and WER for the baseline system but not for the final system, though it should be noted that the test set has too few data points to draw any significant conclusions. There is one outlier in the test set for both systems, an 85-year-old man
---
${}^{5}$ Available at: http://hdl.handle.net/20.500.12537/277

${}^{6}$ Available at: http://hdl.handle.net/20.500.12537/189

${}^{7}$ Available at: http://hdl.handle.net/20.500.12537/265

${}^{8}$ Available at: http://hdl.handle.net/20.500.12537/191

${}^{9}$ http://hdl.handle.net/20.500.12537/254

${}^{10}$ https://sarpur.is/

${}^{11}$ https://ismus.is/

${}^{12}$ nafnid.is

${}^{13}$ Available at: http://hdl.handle.net/20.500.12537/99
---

Figure 3: WER on the Gamli test set for the 10 unique speakers, based on demographic information
recorded in 1966; upon manual inspection of the audio, the speaker appears to have particularly slurred speech, and there is some noise from the recording equipment.
| | WER | OOV rate (total words) | OOV rate (unique words) |
|---|---|---|---|
| Baseline (Samrómur) | 53.4% | 1.1% | 6.8% |
| Final | 22.1% | 0.5% | 3.1% |
Table 2: ASR performance on the Gamli oral history test set
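The two OOV columns in Table 2 differ in whether every token occurrence is counted or each word type is counted once; a small sketch (the helper function is our own illustration, not part of the evaluation tooling):

```python
def oov_rates(test_words, vocabulary):
    """OOV rate over all word tokens vs. over unique word types."""
    vocab = set(vocabulary)
    # Token-level: every occurrence of an unknown word counts.
    total_oov = sum(w not in vocab for w in test_words) / len(test_words)
    # Type-level: each distinct unknown word counts once.
    types = set(test_words)
    unique_oov = sum(w not in vocab for w in types) / len(types)
    return total_oov, unique_oov

rates = oov_rates(["a", "b", "a", "c"], ["a", "b"])
```

Frequent in-vocabulary words pull the token-level rate down, which is why it is always the smaller of the two in Table 2.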
## 7 Conclusion and Future Work
In this paper we have presented Gamli, a corpus suitable for training speech recognition systems. We have aligned and segmented Icelandic oral histories from manual transcriptions (both OCR of typewritten transcripts and post-edited ASR output) and filtered out unintelligible segments.
We have described the compilation of the corpus, which has been published under an open license, the origins of the data, and the evaluation of an ASR system trained on the corpus. We have shown that using the corpus along with other relevant datasets can substantially lower the WER for historical speech data, from 53.4% with a baseline model to 22.1%. We also conclude that it could be combined with other ASR training sets that lack data from older speakers in order to reduce the word error rate for such speakers.
Our final ASR system will be used to automatically transcribe the entire ethnographic audio collection stored in Ísmús, i.e. 2,300 hours of audio. We expect the outcome of that process to be in line with the results presented in this paper, with verse, nursery rhymes, singing etc. still remaining a challenge for the customised model, while accuracy for spontaneous speech will depend more on audio quality and clarity of speech. Where these two factors are good, we expect the system to perform well.
Even though the WER may differ substantially for some files, the general outcome will nonetheless be a somewhat readable version of the Ísmús ethnographic collection. That output can subsequently be used in a number of ways: making the data in Ísmús more accessible to users, both laymen and researchers; indexing the archives for search queries (useful for longer audio files whose descriptions cannot do the entire content justice); and serving as hypothesis transcripts for post-editing of more transcripts.
The Gamli corpus itself should provide an interesting challenge to ASR researchers interested in spontaneous speech, older speakers, noisy audio, historical recordings, and historical dialects.
## References
Kristín Bjarnadóttir, Kristín Ingibjörg Hlynsdóttir, and Steinþór Steingrímsson. 2019. DIM: The Database of Icelandic Morphology. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, Turku, Finland.
Michael Gref, Nike Matthiesen, Sreenivasa Hikkal Venugopala, Shalaka Satheesh, Aswinkumar Vijayananth, Duc Bach Ha, Sven Behnke, and Joachim Köhler. 2022. A study on the ambiguity in human annotation of German oral history interviews for perceived emotion recognition and sentiment analysis. https://doi.org/10.48550/ARXIV.2201.06868
Michael Gref, Oliver Walter, Christoph Schmidt, Sven Behnke, and Joachim Köhler. 2020. Multi-staged cross-lingual acoustic model adaption for robust speech recognition in real-world applications - a case study on German oral history interviews. arXiv preprint arXiv:2005.12562.
Staffan Hedström, Judy Y. Fong, Ragnheiður Þórhallsdóttir, David Erik Mollberg, Smári Freyr Guðmundsson, Ólafur Helgi Jónsson, Sunneva Þorsteinsdóttir, Eydís Huld Magnúsdóttir, and Jón Guðnason. 2022. Samrómur unverified 22.07. CLARIN-IS. http://hdl.handle.net/20.500.12537/265
Inga Rún Helgadóttir, Róbert Kjaran, Anna Björk Nikulásdóttir, and Jón Guðnason. 2017a. Althingi's parliamentary speeches. CLARIN-IS. http://hdl.handle.net/20.500.12537/277
Inga Rún Helgadóttir, Róbert Kjaran, Anna Björk Nikulásdóttir, and Jón Guðnason. 2017b. Building an ASR corpus using Althingi's parliamentary speeches. In Interspeech.
Photina Jaeyun Jang and Alexander G. Hauptmann. 1999. Improving acoustic models with captioned multimedia speech. In Proceedings IEEE International Conference on Multimedia Computing and Systems, volume 2, pages 767-771. IEEE.
Tom Ko, Vijayaditya Peddinti, Daniel Povey, Michael L Seltzer, and Sanjeev Khudanpur. 2017. A study on data augmentation of reverberant speech for robust speech recognition. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5220-5224. IEEE.
Richard P. Lippmann. 1997. Speech recognition by machines and humans. Speech Communication, 22(1):1-15. https://doi.org/10.1016/S0167-6393(97)00021-6
David Erik Mollberg, Ólafur Helgi Jónsson, Sunneva Þorsteinsdóttir, Steinþór Steingrímsson, Eydís Huld Magnúsdóttir, and Jón Guðnason. 2020. Samrómur: Crowd-sourcing data collection for Icelandic speech recognition. In International Conference on Language Resources and Evaluation.
Anna B. Nikulásdóttir, Inga R. Helgadóttir, Matthías Pétursson, and Jón Guðnason. 2018. Open ASR for Icelandic: Resources and a baseline system. In Proc. LREC, volume 2018.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. https://doi.org/10.1109/ICASSP.2015.7178964
Michael Picheny, Zoltán Tüske, Brian Kingsbury, Kartik Audhkhasi, Xiaodong Cui, and George Saon. 2019. Challenging the boundaries of speech recognition: The MALACH corpus. arXiv preprint arXiv:1908.03455.
Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society. IEEE Catalog No.: CFP11SRW-USB.
Josef Psutka, Pavel Ircing, Josef V. Psutka, Vlasta Radová, William J. Byrne, Jan Hajič, Samuel Gustman, and Bhuvana Ramabhadran. 2002. Automatic transcription of Czech language oral history in the MALACH project: Resources and initial experiments. In Text, Speech and Dialogue: 5th International Conference, TSD 2002, Brno, Czech Republic, September 9-12, 2002, Proceedings 5, pages 253-260. Springer.
Helga Svala Sigurðardóttir. 2021. Text normalization corpus 21.10 (2021-10-25). CLARIN-IS. http://hdl.handle.net/20.500.12537/158
Atli Sigurgeirsson, Þorsteinn Gunnarsson, Gunnar Örnólfsson, Eydís Magnúsdóttir, Ragnheiður Þórhallsdóttir, Stefán Jónsson, and Jón Guðnason. 2021. Talrómur: A large Icelandic TTS corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 440-444, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden. https://aclanthology.org/2021.nodalida-main.50
Steinþór Steingrímsson, Jón Guðnason, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2017. Málrómur: A manually verified corpus of recorded Icelandic speech. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 237-240, Gothenburg, Sweden. Association for Computational Linguistics. https://aclanthology.org/W17-0229
Steinþór Steingrímsson, Sigrún Helgadóttir, Eiríkur Rögnvaldsson, Starkaður Barkarson, and Jón Guðnason. 2018. Risamálheild: A very large Icelandic text corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). https://aclanthology.org/L18-1690
Ravichander Vipperla, Steve Renals, and Joe Frankel. 2008. Longitudinal study of ASR performance on ageing voices.
Rósa Þorsteinsdóttir. 2013. Ísmús (íslenskur músík- og menningararfur): An open-access database. The Retrospective Methods Network Newsletter, 7:97-101.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ GAMLI - ICELANDIC ORAL HISTORY CORPUS: DESIGN, COLLECTION AND EVALUATION
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
This paper presents Gamli, an ASR corpus for Icelandic oral histories, the first of its kind for this language, derived from the Ísmús ethnographic collection. Corpora for oral histories differ in various ways from corpora for general ASR: they contain spontaneous speech, multiple speakers per channel, noisy environments, the effects of historic recording equipment, and typically a large proportion of elderly speakers. Gamli contains 188 hours of aligned speech and transcripts, split into a training set and a test set. We describe our approach for creating the transcripts, through both Optical Character Recognition (OCR) of previous transcripts and post-editing of ASR output. We also describe our approach for aligning, segmenting, and filtering the corpus, and finally for training a Kaldi ASR system, which achieves 22.1% word error rate (WER) on the Gamli test set, a substantial improvement over the 53.4% word error rate of a baseline general ASR system for Icelandic.
§ 1 INTRODUCTION
Icelandic open-licensed speech corpora have grown in volume and number in recent years; there are now Talrómur (Sigurgeirsson et al., 2021), Málrómur (Steingrímsson et al., 2017), Samrómur (Mollberg et al., 2020) and the Althingi's Parliamentary Speeches corpus (Helgadóttir et al., 2017b; Nikulásdóttir et al., 2018), to name a few. However, both historical speech and older speakers are underrepresented in these corpora. For instance, in Samrómur, the largest open-licensed ASR corpus for Icelandic (2233 hours in the latest release (Hedström et al., 2022)), only 4.8% of speakers are over 60 years old.
Gamli, the oral history speech corpus presented in this paper, differs from these in many ways. Firstly, it contains predominantly spontaneous speech in the form of interviews; secondly, it has a very high ratio of older speakers (94.8% of speakers are over 60 years old); thirdly, background noise is common, as are noise artefacts from historical recording equipment; and lastly, historic dialects (word choice and accent) are much more prevalent than in existing corpora.
The corpus contains 188 hours of aligned speech and transcripts, split into a training set and a test set. This data, based on valuable historical 20th century recordings stored at the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies, is therefore an important addition to the existing Icelandic speech corpora.${}^{1}$
The custom ASR system presented in this paper, along with the corpus, will in due course be used to automatically transcribe all of the ethnographic audio recordings stored at the institute. The transcripts will then be made available on the online portal Ísmús${}^{2}$ and paired with the respective recordings.
§ 2 RELATED WORK
For many years, ASR systems have been trained on unaligned transcriptions (Panayotov et al., 2015) and even approximate transcriptions of spontaneous speech (Jang and Hauptmann, 1999). In the case of Icelandic ASR for spontaneous speech, there has been an ongoing project (Helgadóttir et al., 2017b, 2017a) to align and filter Icelandic parliamentary transcripts for ASR in order to reduce the manual work involved in transcribing parliamentary proceedings. Creating the corpora involves text normalization, time-alignment, and filtering utterances.

${}^{1}$ The corpus is available under an open license at https://anonymo.us/gamli

${}^{2}$ www.ismus.is
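The text normalization step mentioned above can be sketched minimally as follows (a toy example: the two-entry number table is a hypothetical placeholder, not the actual normalizer used for the parliamentary corpora):

```python
import re

def normalize(text: str) -> str:
    """Toy transcript normalization: lowercase, expand a few numerals,
    strip punctuation. Illustrative only; real Icelandic normalization
    must also handle inflection of number words, abbreviations, etc."""
    numbers = {"2": "tveir", "3": "þrír"}  # hypothetical toy mapping
    text = text.lower()
    text = re.sub(r"\d+", lambda m: numbers.get(m.group(), m.group()), text)
    text = re.sub(r"[^\w\s]", " ", text)  # drop punctuation, keep letters
    return " ".join(text.split())
```

For example, `normalize("Hann á 2 hesta.")` yields `"hann á tveir hesta"` (ignoring, as a real normalizer could not, that the numeral should inflect).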
While ASR for oral histories is new for Icelandic, it has already been used for other languages. The first large project was the MALACH project (Psutka et al., 2002) in 2002, where ASR transcriptions were used for indexing oral history archives and making them more searchable. However, some authors still consider oral history speech recognition an open problem (Picheny et al., 2019; Gref et al., 2020), and a recent study (Gref et al., 2022) found that the human word error rate was 8.7% on a German oral history corpus (taking into account case sensitivity and annotation of hesitations). In contrast, Lippmann (1997) found a human word error rate of less than 4% on the Switchboard corpus of spontaneous telephony speech and less than 0.4% on the Wall Street Journal corpus of clear read speech. This suggests that the minimum possible word error rate for ASR might be much higher on oral histories than on cleaner speech corpora.
Another factor that makes oral history ASR an interesting challenge is the particularly high ratio of older speakers. Vipperla et al. (2008) noted that for general ASR models, WER correlates strongly with age, even throughout a single speaker's lifetime. This could be caused by multiple changes in ageing voices, such as a slower speaking rate, changes in F0 (a decrease for males and an increase for females), and increases in jitter and shimmer (all from Vipperla et al. (2008)), some of which could be mitigated by increasing the number of older speakers in the training set. Other changes might not be so easily addressed, however, such as a reduction of tongue and jaw strength and an increase in breathiness, which could reduce articulatory precision.
§ 3 ORIGIN OF THE CORPUS
The ethnography collection of the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies contains more than 2,300 hours of audio recordings of oral heritage and traditions, with a little fewer than 2,500 interviewees. The oldest material consists of recordings made on wax cylinders in the early 20th century, and the collection is continually expanding, with new material being added every year.
The bulk of the collection, however, consists of recordings from the 1960s and 1970s, mainly the work of three collectors. Their focus was to gather ethnographic material from the whole country, first and foremost from older generations; the majority of the informants were born before or around the turn of the 20th century.
This resulted in an extensive collection of legends and fairy tales, accounts of beliefs and customs, poems, hymns, nursery rhymes, Icelandic ballads (rímur), occasional verses and more, with the material being variously spoken, sung or chanted. Apart from recited verse and material that is sung or chanted, the speech is spontaneous. Accompanying the recordings is detailed metadata on the speaker, time and location of recording, as well as various other parameters such as genre (for different kinds of verse or prose material, e.g. poems or nursery rhymes, fairy tales or legends etc.), mode of performance (sung, chanted, spoken), keywords, content (short summary, description), and tale types and motifs (in folktales and legends).
§ 3.1 SPEAKER DISTRIBUTION IN THE COLLECTION
In their work the collectors mainly relied on a snowball method of sorts, asking speakers to point them to other possible informants, as well as contacting teachers or clergy to enquire about interesting subjects in their region. Speaker profession is often listed in the metadata, but there is no information about education, and most of the speakers were common people, i.e. workers, farmers, fishermen, housewives etc., with little formal education.
Gender was probably not a decisive factor at the outset, and the overall ratio is 57.6% male speakers to 42.4% female, based on the number of speakers. However, when audio length per gender is considered, the difference increases considerably: 1504 hours (65%) for men vs. 821 hours (35%) for women.
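The duration-weighted percentages quoted above follow directly from the stated hours:

```python
# Audio hours per gender, as reported for the collection.
male_hours, female_hours = 1504, 821
total = male_hours + female_hours

# Duration-weighted shares, rounded to whole percentages.
male_share = round(100 * male_hours / total)
female_share = round(100 * female_hours / total)
```

This yields the 65% vs. 35% split, versus 57.6% vs. 42.4% when counting speakers.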
As mentioned, the data in the collection also stands out in that the age of the speakers is higher than in other existing Icelandic corpora. The oldest speaker in the collection was 105 years old at the time of recording in 1954, and the earliest-born speaker, born in 1827, was recorded in 1904 (not included in the Gamli corpus). In fact, 72.4% of the speakers are older than 63, and 31.4% are 71-80 years old. In Gamli this ratio is substantially higher, as detailed in Section 4.
§ 3.2 REGIONAL FEATURES IN PRONUNCIATION
The speakers in the collection are from all over the country and therefore reflect the various regional differences in pronunciation much better than recently recorded speech corpora such as Samrómur, since these regional features either have already more or less disappeared or are gradually disappearing. Among these features are, for example, the "hard" pronunciation of $/\mathrm{p},\mathrm{t},\mathrm{k}/$ (still a distinct feature) and voiced pronunciation of $/\mathrm{l},\mathrm{m},\mathrm{n}/$ before $/\mathrm{p},\mathrm{t},\mathrm{k}/$ in North Iceland, the rn-, rl-pronunciation in South-East Iceland, monophthongs before /ng, nk/ in the North-West, etc.
While these features are not tagged in any way in the Gamli corpus, the ASR system trained on the corpus seems to perform well on them, with the possible exception of labial or velar stops before [ð], such as [hapðɪ] instead of [havðɪ] for hafði, or [lakðɪ] instead of [laɣðɪ] for lagði. We have, however, not inspected this systematically, so further investigation is needed before stating the precision with any certainty.
§ 3.3 RECORDING PROCEDURE
Most of the recordings were made at the speakers' homes, in many cases in retirement homes, and carried out by the interviewer. It was not uncommon that other people, e.g. children, spouses etc., were present during the recording sessions, but in most cases they were not meant to play a part in the recording. Because of this, and for various other reasons, some background noise and disturbances occur in the recordings, e.g. children playing, traffic sounds, phones ringing etc., but these are generally not prominent.
Most of the recordings were made using high-quality reel-to-reel tape recorders, although some were made by amateurs who were not as well equipped, while a part of the recordings comes from the recording studios of The Icelandic National Broadcasting Service (Þorsteinsdóttir, 2013).

The digitization of these recordings began in the late 1990s and continued into the early 2000s, with the recordings being converted into WAV format as well as compressed MP3s for online use.
§ 4 CORPUS CONTENT
Gamli contains 188 hours of transcribed audio, broken down into:

1. $\sim$145 hours from optical character recognition (OCR) of previous transcriptions in various formats

2. $\sim$43 hours of new transcriptions (post-edited from ASR output)
The 145 hours include $\sim$8 hours defined as a test set, which was manually reviewed, corrected, and annotated with speaker IDs and time alignments in the annotation tool ELAN. The test set contains recordings of 10 speakers, 5 women (239 minutes) and 5 men (219 minutes), plus the interviewers (4 men), and serves for evaluating the system's performance.
A validation set has not been defined for the corpus, as the acoustic model training in Kaldi (Povey et al., 2011) used a random sample of the training corpus for validation.
| Data split | Hours | Male speakers | Female speakers | Total speakers |
|---|---|---|---|---|
| Training | 180 | 115 | 85 | 200 |
| Test | 8 | 5 | 5 | 10 |

Table 1: Data splits in Gamli
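Held-out evaluation on Gamli relies on the test speakers being disjoint from the training speakers; a minimal sketch of such a speaker-level split (the `(speaker_id, utterance_id)` pair format is an assumption for illustration, not the corpus file layout):

```python
def split_by_speaker(utterances, test_speakers):
    """Partition utterances so that no speaker appears in both the training
    and the test set, as in Table 1. `utterances` is a list of hypothetical
    (speaker_id, utterance_id) pairs; `test_speakers` is a set of speaker IDs."""
    train, test = [], []
    for speaker, utt in utterances:
        (test if speaker in test_speakers else train).append((speaker, utt))
    return train, test

utts = [("s1", "u1"), ("s2", "u2"), ("s1", "u3")]
train, test = split_by_speaker(utts, {"s2"})
```

Splitting at the speaker level, rather than at the utterance level, prevents the acoustic model from being evaluated on voices it has already seen.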
§ 4.1 SPEAKER DISTRIBUTION IN THE CORPUS
The corpus contains 210 unique speakers, 90 women and 120 men (plus the interviewers: 13 men and 1 woman). At the outset we aimed to have the gender ratio as equal as possible in the acoustic training data, but with three men surpassing 20 hours of speech each (one topping 29 hours) and accounting for more than one third of the entire data, that picture became quite distorted. As a result, the gender bias in the corpus is even greater than in the collection itself, which is unfortunate but simply reflects the data that was at hand, i.e. 73.5% vs. 26.5%, cf. Section 4.2.

The age ranges from 38 to 99, but most of the speakers are 60+ (94.8%), as shown in Figure 1, and the average age of the speakers is 77 years. This ratio is unprecedented among existing corpora for Icelandic speech (cf. 4.8% in Samrómur, as mentioned in Section 1) and makes Gamli an important addition to that collection.
|
| 238 |
+
|
| 239 |
+
### 4.2 Corpus compilation

As mentioned, the largest part of the corpus, about 145 hours, stems from OCR of transcriptions at the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies.

Figure 1: Age distribution of unique speakers in the training set

Figure 2: Age distribution of unique speakers in the test set

These transcripts, generated over several decades, are not all in the same format (e.g. typewritten, dot printed, printed Word documents) and therefore first needed to be processed, i.e. scanned and OCRed (the results of which varied depending on the format). These transcripts were then catalogued and paired with the respective recordings.

Once this data had been processed, the first ASR output was produced and manually corrected. During that process it became evident that some of the recordings were ill suited at this stage, as they often contained poetry, nursery rhymes and in some cases singing, where the ASR system could not be expected to do well since the focus was on spontaneous speech, where it performed much better (cf. Section 6).

As a result, we made use of the detailed metadata search parameters in the Ísmús portal to filter the best in-domain data for further training. We mainly relied on the so-called form parameter (genre) to try to exclude everything but spontaneous speech. This gave much better results and yielded the 43 hours of post-edited data mentioned in Section 4.
### 4.3 Normalizing, aligning, segmenting and filtering the transcripts for ASR training

A large part of the transcripts did not have time alignments and some had OCR spelling errors. Therefore, we had to process the utterances before using them to train the acoustic model. To do this, we first normalized all sentences using the Regina normalizer developed in (Sigurðardóttir, 2021) before aligning the transcripts to the audio and segmenting them. This step also removes sections with out-of-vocabulary words, which should account for errors stemming from the OCR.

We then filtered those segments, removing any that were deemed unintelligible to an intermediate ASR system. For this, a biased language model is applied to each segment, using words that appear in the utterance's transcription. Segments where the system could not decode the words that appear in the transcript are then removed. This is an iterative process, whereby an acoustic model is used to filter the training data, then that data is used to train a new acoustic model, which can then be used to re-align and re-filter the training data. These segmenting and filtering steps were all done with the Kaldi scripts segment_long_utterances_nnet3 ${}^{3}$ and clean_and_segment_data_nnet3. ${}^{4}$
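The filtering criterion can be sketched as follows; `decoded_words` stands in for the output of the intermediate, transcript-biased recognizer, and the coverage threshold is illustrative rather than the value used by the Kaldi scripts.

```python
def keep_segment(transcript_words, decoded_words, min_coverage=0.9):
    """Keep a segment only if the biased recognizer managed to decode
    most of the words that appear in its transcript."""
    decoded = set(decoded_words)
    hits = sum(1 for w in transcript_words if w in decoded)
    return hits / len(transcript_words) >= min_coverage
```

In the real pipeline this decision feeds back into training: the surviving segments train a new acoustic model, which re-aligns and re-filters the data on the next pass.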
## 5 Models (and out-of-domain data)

We trained a hybrid ASR system in Kaldi. That is, the language model and acoustic model were trained separately, as opposed to an end-to-end system. For the acoustic and language models in the custom ASR system, we expanded the training sets with various out-of-domain data, described in the following sections.
### 5.1 Acoustic model

An acoustic model learns to map audio to a sequence of phonemes. Our acoustic model is a TDNN (time-delay neural network) chain model trained in Kaldi. It was trained on the in-domain data described above, but also on various out-of-domain data, which included the following datasets:

${}^{3}$ https://github.com/kaldi-asr/kaldi/blob/master/egs/wsj/s5/steps/cleanup/segment_long_utterances_nnet3.sh

${}^{4}$ https://github.com/kaldi-asr/kaldi/blob/master/egs/wsj/s5/steps/cleanup/clean_and_segment_data_nnet3.sh
1. Althingi's Parliamentary Speeches. ${}^{5}$ A corpus of 514.5 hours of recorded speech from the Icelandic parliament (Helgadóttir et al., 2017a)
2. 114.6 hours of speech from the first Samrómur release, ${}^{6}$ leaving out children.
3. 173.1 hours of unverified Samrómur data, ${}^{7}$ containing only speech from 50+ year old men and 60+ year old women.
4. 228.2 hours of the RÚV TV unknown speakers dataset. ${}^{8}$

Data augmentation was also used to triple the entire training set by adding artificial noise and reverberation. For noisy data sets, e.g. call-center data sets, this is reported to give better results than speed perturbation (Ko et al., 2017), and, as described earlier, background noise and disturbances are not uncommon in the data.
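The noise half of that augmentation can be sketched with plain NumPy; real recipes mix in recorded noise sources and room impulse responses for reverberation, so this only illustrates the principle of mixing noise at a target signal-to-noise ratio (the SNR value is illustrative).

```python
import numpy as np

def add_noise(wav, snr_db, rng):
    """Mix white noise into a waveform at the requested SNR (dB)."""
    signal_power = np.mean(wav ** 2)
    noise = rng.standard_normal(wav.shape)
    # Scale the noise so that signal_power / noise_power == 10^(snr/10).
    target_noise_power = signal_power / (10 ** (snr_db / 10))
    noise *= np.sqrt(target_noise_power / np.mean(noise ** 2))
    return wav + noise
```

Applying the same utterance at several SNRs (and with reverberation) is what multiplies the effective size of the training set.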
### 5.2 Language model

A language model is necessary for outputting coherent text; it learns a probability distribution over word sequences from a training corpus. Our language model is an n-gram model: a 3-gram for decoding and a 4-gram for rescoring. It was trained on in-domain data from the Gamli training set described in Section 4.2, both already existing transcripts and those resulting from the proofread ASR output. The out-of-domain data stems from the following sources:
1. The Icelandic Gigaword Corpus (IGC) (Steingrímsson et al., 2018). We use word forms from the 2022 version of the IGC. ${}^{9}$
2. Ethnographic data from the National Museum of Iceland in Sarpur. ${}^{10}$
3. Audio file descriptions from Ísmús ${}^{11}$ for their content.
4. Place name data from the Icelandic Place Name Collection. ${}^{12}$
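Estimating such an n-gram model amounts to counting word sequences and normalizing by their contexts; a minimal maximum-likelihood trigram sketch (real toolkits add smoothing and backoff, which this omits):

```python
from collections import Counter

def ngram_probs(tokens, n=3):
    """Maximum-likelihood P(w | context) from n-gram and context counts."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    # Each n-gram occurrence contributes one occurrence of its context.
    contexts = Counter(g[:-1] for g in grams.elements())
    return {g: c / contexts[g[:-1]] for g, c in grams.items()}
```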
### 5.3 Vocabulary and pronunciation dictionary

The pronunciation dictionary maps words to sequences of phonemes. For the vocabulary we used:
1. All the word forms from The Database of Icelandic Morphology (Bjarnadóttir et al., 2019).
2. OOV words from audio file descriptions in Ísmús.
3. Vocabulary from the training set (only the data that was manually transcribed, not the OCR data); manually checked and added where appropriate.
4. OOV words from Sarpur; manually checked and added where appropriate.

To get the phonemic transcription of each word, a G2P model based on the Icelandic Pronunciation Dictionary for Language Technology ${}^{13}$ was used.
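The division of labour between the dictionary and the G2P model can be sketched as a lookup with fallback; the lexicon entries and the `g2p` callable here are toy stand-ins, not the actual Icelandic resources.

```python
def phonemize(word, lexicon, g2p):
    """Return the dictionary pronunciation if the word is known,
    otherwise fall back to grapheme-to-phoneme prediction."""
    if word in lexicon:
        return lexicon[word]
    return g2p(word)
```

For example, with `lexicon = {"hest": ["h", "ɛ", "s", "t"]}` and any `g2p` callable, known words come from the dictionary and everything else is predicted.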
## 6 Evaluation

To assess the final ASR system's performance on the test set, we use the Samrómur TDNN model as a baseline. This is a baseline model from a well-known dataset of read Icelandic speech. While the Samrómur baseline system achieved 53.4% WER on the Gamli test set, the final ASR system performed much better, achieving 22.1% WER on the same set, as shown in Table 2. This compares the two overall systems, each including its own acoustic model, language model, and vocabulary.

To investigate the differences between the two systems, we also compare the performance when taking demographic information into account in Figure 3. As stated earlier, the test set contains 10 speakers and a total of 8 hours of audio.

There appears to be a possible slight correlation between age and WER for the baseline system, but not for the final system, though it should be noted that the test set has too few data points to draw any significant conclusions. There is one outlier in the test set for both systems, an 85 year old man recorded in 1966; upon manual inspection of the audio, it seems the speaker has particularly slurred speech and there is some noise from the recording equipment.

${}^{5}$ Available at: http://hdl.handle.net/20.500.12537/277

${}^{6}$ Available at: http://hdl.handle.net/20.500.12537/189

${}^{7}$ Available at: http://hdl.handle.net/20.500.12537/265

${}^{8}$ Available at: http://hdl.handle.net/20.500.12537/191

${}^{9}$ http://hdl.handle.net/20.500.12537/254

${}^{10}$ https://sarpur.is/

${}^{11}$ https://ismus.is/

${}^{12}$ nafnid.is

${}^{13}$ Available at: http://hdl.handle.net/20.500.12537/99

Figure 3: WER on the Gamli test set for the 10 unique speakers in the test set based on demographic information
| System | WER | OOV rate (total words) | OOV rate (unique words) |
|---|---|---|---|
| Baseline (Samrómur) | 53.4% | 1.1% | 6.8% |
| Final | 22.1% | 0.5% | 3.1% |

Table 2: ASR performance on the Gamli oral history test set
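WER as reported in Table 2 is the word-level edit distance between reference and hypothesis divided by the number of reference words; a minimal sketch:

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # delete every reference word
    for j in range(len(h) + 1):
        d[0][j] = j  # insert every hypothesis word
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match / substitution
    return d[len(r)][len(h)] / len(r)
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is not unusual for noisy historical audio.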
## 7 Conclusion and future work

In this paper we have presented Gamli, a corpus suitable for training speech recognition systems. We have aligned and segmented Icelandic oral histories from manual transcriptions (both OCRed typewritten transcripts and post-edited ASR output), and filtered out unintelligible segments. We have described the compilation of the corpus, which has been published under an open license, the origins of the data, and the evaluation of an ASR system trained on the corpus. We have shown that using the corpus along with other relevant datasets can substantially lower WER for historical speech data, from 53.4% with a baseline model to 22.1%. We also conclude that it could be combined with other ASR training sets which lack data from older speakers, in order to reduce the word error rate for such speakers.

Our final ASR system will be used to automatically transcribe the entire ethnographic audio collection stored in Ísmús, i.e. 2,300 hours of audio. We expect the outcome of that process to be in line with the results presented in this paper, with verse, nursery rhymes, singing, etc. still remaining a challenge for the customised model, while accuracy for spontaneous speech will depend mostly on audio quality and clarity of speech. Where both are high, we expect the system to perform well.

Even though the WER may differ substantially for some files, the general outcome will nonetheless be a somewhat readable version of the Ísmús ethnographic collection. That output can subsequently be used in a number of ways: making the data in Ísmús more accessible for users, both laymen and researchers; indexing the archives for search queries (useful for longer audio files where the description cannot do the entire content justice); and as a hypothesis transcript for post-editing of more transcripts.

The Gamli corpus itself should provide an interesting challenge to ASR researchers interested in spontaneous speech, older speakers, noisy audio, historical recordings and historical dialects.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TFZGxtsyk3/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,333 @@
# Adapting an Icelandic morphological database to Faroese

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract

This paper describes the adaptation of the database system developed for the Database of Icelandic Morphology (DIM) to the Faroese language and the creation of the Faroese Morphological Database using that system from lexicographical data collected for a Faroese spellchecker project.
## 1 Introduction

The Faroese Morphological Database (FMD) ${}^{1}$ is the result of a joint project of the Árni Magnússon Institute for Icelandic Studies and the University of the Faroe Islands. It consists of entries for Faroese words (lexemes) with complete paradigms, including variants. Various kinds of metadata are included. It is based on a previously existing project in Iceland, the Database of Icelandic Morphology (Bjarnadóttir et al., 2019) ${}^{2}$, and makes use of language data collected for a previous Faroese-language project, the spellchecker Rættstavarin. ${}^{3}$ Data from DIM is used in countless language technology projects in Iceland, including smart search engines, spell-checking and hyphenation tools, taggers and parsers, speech recognition tools and online word games, and DIM is also a popular online resource for the general public. It is hoped that the new Faroese sister project will grow to be as successful in spurring the development of language technology in the Faroe Islands and aiding the general public, researchers and language students in the use and study of the Faroese language.
### 1.1 Goals

The aim was to publish the FMD with the available lexical data from Rættstavarin as well as the list of given names published by the Faroese Language Council ${}^{4}$. The basic features of the DIM system were used to generate all inflected forms, displaying searchable inflectional paradigms on the web and providing data for download, including all the inflected forms with POS tags, lemmas and basic metadata.

Secondary goals included adding more metadata such as tags for specific morphological, syntactic and pronunciation features, dialects, etc. Recent additions to the DIM system were also tested, in anticipation of their future use for Faroese. ${}^{5}$

Ultimately, the FMD should include all extant forms of all words in the Faroese language, with as much useful metadata as possible. Of course "all words" is a utopian ideal, as languages are constantly evolving and more vocabulary is both created and discovered, but it is feasible in the relatively near future to have added essentially all vocabulary from available digital texts and to have a pipeline for semi-automatically adding newly discovered vocabulary on a regular basis. In this initial project period we focused on readily available data from lexicographical sources.
## 2 Linguistic similarity

Faroese and Icelandic share many features such as three grammatical genders, masculine, feminine and neuter, and the four-case system of nominative, accusative, dative and genitive. Although the genitive is used much less in Faroese than in Icelandic, it certainly exists and is morphologically similar. Nouns have inherent gender, while adjectives and determiners inflect for gender. Verbs inflect for mood, tense, person and number (Thráinsson et al., 2012). A full list of inflectional categories will be provided on the FMD website, in the same manner as on the DIM website.

---

${}^{1}$ https://bendingar.fo

${}^{2}$ https://bin.arnastofnun.is/DMII/

${}^{3}$ Rættstavarin is available as part of the Divvun language tool package at https://divvun.org/, and the source code is available on GitHub: https://github.com/giellalt/lang-fao; a description of the project (in Faroese) may be found here: https://www.setur.fo/fo/setrid/almennar-taenastur-og-grunnar/raettstavarin/

${}^{4}$ http://malrad.fo/page.php?Id=38&l=fo

${}^{5}$ See the description of the classification system in Bjarnadóttir et al. (2019).

---
Due to these similarities it was evident from the start that all the tools and methods that have been developed for DIM could be applied to Faroese with only minimal changes; even the web interface can be presented in much the same way, with Faroese linguistic terms replacing the Icelandic terms (e.g. singular, nominative, comparative, etc.). At this initial stage of the project, the focus was on the main features of the system, though detailed tagging was employed for some particularly important or interesting morphological and pronunciation features.

The database system for the FMD is run on a copy of the DIM system; more or less the complete software system from DIM has been set up for the FMD. The system includes the database backend, import tools, and website, with both online lookup and export functions for language technology projects.
## 3 Building the database

The premise of the project was to make use of existing data, and by far the largest set of lexicographical data available was the data from Rættstavarin. It, in turn, is largely derived from the electronic version of the Faroese dictionary (Poulsen, 1998; web version 2007, currently available at sprotin.fo). Another piece of low-hanging fruit was the official Faroese Language Council list of given names.
### 3.1 System comparison

The spellchecker data has words categorised by inflectional category according to a classification scheme which was created for the electronic version of the Faroese dictionary and slightly modified and expanded for the spellchecker. The spellchecker software has a template-based system that generates inflected forms from source files containing a lemma, a single template parameter and the name of the appropriate inflection pattern, using a template for each pattern.

The FMD (and DIM), somewhat similarly, uses a template-based system to generate inflected forms, though the conventions for parameters are different (more than one parameter may be used to represent stem variations) and a relational database system is used rather than text files. The inflected forms are stored in a table linked to the main table containing word entries. Additionally, a set of switches enables or disables the generation of specific sections of the inflectional paradigm, such as singular or plural, definite and indefinite forms for nouns, the different moods, voices and participles of a verb, etc. The first step for each inflection pattern, then, was to create a template for it. Then the list of words with that pattern from the spellchecker data could, in theory, be transformed with a simple script to the correct import format, as long as the inflectional patterns were compatible.
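Such template-based generation can be sketched in a few lines; the pattern name, endings and paradigm-cell labels below are invented for illustration and are not the actual FMD or spellchecker templates.

```python
# Hypothetical masculine-noun pattern: one ending per paradigm cell.
PATTERN_M1 = {"nom_sg": "ur", "acc_sg": "", "dat_sg": "i", "gen_sg": "s",
              "nom_pl": "ar", "acc_pl": "ar", "dat_pl": "um", "gen_pl": "a"}

def paradigm(stem, pattern, enabled=None):
    """Generate inflected forms from one stem and an ending template.
    `enabled` mimics the switches that limit the paradigm."""
    cells = enabled if enabled is not None else pattern.keys()
    return {cell: stem + pattern[cell] for cell in cells}
```

Importing a word then reduces to storing its stem(s), a pattern name, and any switches, with the full form table generated on the database side.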
### 3.2 Adapted classification and error correction

Indeed, the FMD has largely followed the spellchecker's inflection classification scheme, but it has been necessary to add new patterns to account for the subtler variations in word inflections in Faroese. For example, a number of words had been assigned a pattern which correctly accounts for their most usual or regular inflected forms, but fails to account for certain variant forms, perhaps remnants of an older inflection, perhaps novel variants, sometimes dialectal forms, archaic forms or forms used in fixed expressions. Unless assigned a different inflection template, these words would therefore be missing some of their inflected forms. In other cases the templates would have produced erroneous inflected forms.

Some accidental errors were inherited from the Faroese dictionary, while some had been introduced by the spellchecker project, and many of them were clearly the result of lack of care either in choosing the correct pattern, e.g. forgetting that a neuter noun whose stem ends in -s needs a pattern that doesn't add an extra -s in the genitive singular form, or in typing the pattern name, e.g. writing kv6 (feminine pattern 6) instead of k6 (masculine pattern 6). These could often be corrected by assigning the words another existing pattern, but for many words new templates were needed. In some cases a word needs a pattern of its own due to its irregularity of inflection. There were also other errors in the spellchecker data, such as typos and spelling errors and incorrectly entered template parameters.

It quickly became apparent that the number of errors in the source material was too great to leave unchecked. It would also be easier to identify and correct them early on while still working with the data in text files, rather than risking overwriting subsequent edits to database entries, particularly comment fields and other metadata, by updating them en masse later on.

The database system also requires that words be designated as base words or compounds, and a binary split point is required for compounds, e.g. the compound noun havnarkona is written havnar_kona in the lemma field to indicate that it is composed of havnar- and kona. Compounding had been indicated to some extent in the spellchecker data, but haphazardly and also with some errors.

These factors led to the conclusion that all words needed to be reviewed manually, though often somewhat cursorily due to time limitations, chiefly focusing on splitting compounds and checking for obvious errors. Along the way, tagging of morphological, usage and pronunciation characteristics was begun, and it was considered desirable that certain of them should always be tagged if possible, in particular: restriction of a word to a region or dialect; archaic, obsolete or rare usage; irregular correspondence of spelling and pronunciation; and unusual word formation patterns. This became a secondary goal of word review and, while it made the review somewhat more time-consuming, it reduces the need to run through the data a second time later on, which would be even more time-consuming, and therefore serves our long-term goals well. The delay caused by manual review meant that there was no time to gather vocabulary from more sources in this round of the project, but the data has been greatly enriched and its quality improved, so it has been well worth it.
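The lemma-field convention for compounds can be interpreted mechanically; a minimal sketch (nothing beyond the underscore convention itself is taken from the paper):

```python
def parse_lemma(field):
    """Interpret a lemma field: an underscore marks the binary split
    point of a compound, e.g. 'havnar_kona' -> havnar- + kona."""
    if "_" in field:
        first, second = field.split("_", 1)
        return {"lemma": first + second, "parts": (first, second)}
    return {"lemma": field, "parts": None}  # base word, no split point
```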
### 3.3 Importation

Data is imported into the FMD via text files, with each line containing a single word entry, which may include many required and optional database fields, including the headword, the name of the inflection template, switches to limit the paradigm, and various metadata fields. These were generated semi-automatically from the spellchecker word lists and other sources using regular-expression scripting and then manually reviewed. Templates have been created manually or sometimes semi-automatically from other templates.
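An import line might be processed along these lines; the tab-separated layout and the field names are assumptions made for illustration, as the paper does not specify the exact file format.

```python
def parse_entry(line):
    """Parse one hypothetical import line:
    headword <TAB> template name <TAB> comma-separated switches (optional)."""
    fields = line.rstrip("\n").split("\t")
    switches = fields[2].split(",") if len(fields) > 2 and fields[2] else []
    return {"headword": fields[0], "template": fields[1], "switches": switches}
```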
#### 3.3.1 Nouns

The inflection of nouns was generally fairly easy to handle, as they don't have as many inflected forms as adjectives or verbs and most of their patterns were already well defined. Even so, many new patterns for nouns needed to be created. For example, weak masculine nouns had only 5 basic patterns in the spellchecker data, with 3 more mixed patterns (combinations of two basic patterns) and one pattern with an irregular variant, a total of 9. In comparison, the FMD currently has 17 different templates for weak masculine nouns. This disparity is largely due to compounds with internal inflection; e.g. lítlibeiggi 'little brother' (accusative lítlabeiggja) has a more complex inflection than pápabeiggi 'father's brother' (accusative pápabeiggja). As the FMD template system has each inflected form generated from one stem and an inflectional ending, these words usually require more "stems" than other words, to account for the changes in the first half of the compound due to its separate inflection. The Faroese dictionary had not classed these words separately from compounds with an immutable first half, and the spellchecker made no provision for them, although the spellchecker project had already identified them as problematic. However, such compounds are known in Icelandic and had been dealt with successfully in DIM. The FMD has followed the DIM practice of creating a separate version of each template for internally inflected compounds where required.
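The extra "stems" needed for internally inflected compounds can be illustrated with the paper's own example pair; the cell labels and the exact stem/ending division are illustrative, not the FMD's internal representation.

```python
# lítlibeiggi 'little brother': the first half inflects too, so each
# paradigm cell needs its own stem; pápabeiggi keeps one invariant stem.
LITLIBEIGGI = {"nom_sg": ("lítli", "beiggi"), "acc_sg": ("lítla", "beiggja")}
PAPABEIGGI = {"nom_sg": ("pápa", "beiggi"), "acc_sg": ("pápa", "beiggja")}

def form(entry, cell):
    """Each inflected form is still stem + ending, just with a
    per-cell stem for internally inflected compounds."""
    stem, ending = entry[cell]
    return stem + ending
```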
#### 3.3.2 Verbs and adjectives
|
| 164 |
+
|
| 165 |
+
306
|
| 166 |
+
|
| 167 |
+
Verbs and adjectives have many more inflected
|
| 168 |
+
|
| 169 |
+
forms than nouns, both in Faroese and Icelandic, 308 and partial information on the inflection of these word classes in the available sources were a problem in both projects.
|
| 170 |
+
|
| 171 |
+
Verb paradigms in the Faroese dictionary are 313 limited, omitting first and second person singular conjugations, as well as the imperative and conjunctive (optative) moods and the present participle and the mediopassive voice. Adjective paradigms also lacked comparative and superlative forms. These were added in the spellchecker project along with expansion of verb conjugation, but the spellchecker data still contains only active voice conjugations for most verbs, and the com-
|
| 172 |
+
|
| 173 |
+
parative and superlative forms of irregular adjec- 323 tives were not obvious.
|
| 174 |
+
|
| 175 |
+
325 In the FMD, the verb templates now support full personal conjugation in active and mediopassive voice and a full declension of the past participle, and full paradigms are also displayed for all adjectives. Variant forms, contained in the Faroese dictionary but not found in the inflection tables or the spellchecker paradigms, have been added to the FMD. Additional variant forms from textual sources such as online media and the card index of word citations (Seðlasavnið) ${}^{6}$ at the University of the Faroe Islands, have also been added.
Some software modifications were required to support Faroese verbs and adjectives; these modifications can be useful for Icelandic as well. The mediopassive imperative singular (without pronominal clitic) had not previously been supported, but proved to be necessary for both languages. The indefinite inflection of the comparative occurs in most Faroese adjectives and was consequently added to the system. This category also exists in Icelandic but is extremely rare.
The greater number of inflected forms of verbs, the need for expanding their paradigms and the greater number of irregular verbs than irregular nouns made the creation of verb templates more time-consuming; on the other hand, there are over nine times as many nouns as verbs, which reduced the time needed for review of individual words, so that, overall, the nouns took more time.
#### 3.3.3 Other parts of speech
Inflection patterns for pronouns, determiners, articles and numerals have been created based on data gathered from the relevant dictionary entries, the spellchecker data, and the Faroese grammar by Thráinsson et al. (2012). These word classes never had inflection tables in the dictionary, only inline mentions of inflected forms and usage examples. Their inflection is relatively simple and did not pose problems on a different scale from the work on Icelandic. Uninflected word classes are also included in the data; these present no problems and most of them have been added to the FMD.
## 4 Present state
Currently, the FMD contains over 72,000 entries. These include close to 67,000 words added from the spellchecker word lists and about 3,000 more taken directly from the dictionary, either via dictionary data collected for the spellchecker project or manual lookup on the web, and 1,688 given names from the Faroese Language Council's name list. Several hundred words have been added from other sources such as web texts and other published texts, Wiktionary${}^{7}$, and Thráinsson et al. (2012).
### 4.1 Future additions
The FMD currently does not cover proper nouns well. More are needed, e.g. place names, company names and surnames. Many of these may be sourced from government lists, phone directories, etc. The Faroese Text Collection${}^{8}$ has been used as a rough gauge of the completeness of the FMD and can serve as a source for further general vocabulary. Although it only has 1.1 million tokens, at this early stage in the development of the Faroese morphological database it yields some interesting material. It can continue to provide a means of evaluating the progress of the database, i.e. what proportion of unique tokens in the corpus are already in the database and whether the most frequent word forms in the corpus are included. After most or all of the vocabulary in the Faroese Text Collection has been added, we will hopefully have access to a much larger Faroese corpus. We expect that there will be a number of erroneous and nonstandard forms in the corpus data; these will be added to a special part of the database dedicated to that purpose.
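The coverage check described here could look something like the sketch below. This is a rough illustration, not the project's actual tooling; the token and form sets are toy data standing in for the Faroese Text Collection and the database's exported forms.

```python
# Sketch: what share of the corpus' unique tokens, and of its most
# frequent tokens, are already among the database's inflected forms?
from collections import Counter

def coverage(corpus_tokens, db_forms, top_n=1000):
    freq = Counter(corpus_tokens)
    unique_share = len(set(freq) & db_forms) / len(freq)
    top = [w for w, _ in freq.most_common(top_n)]
    top_share = sum(w in db_forms for w in top) / len(top)
    return unique_share, top_share

# Toy data; a real run would use the full corpus token list and the
# complete set of inflected forms exported from the database.
tokens = ["hon", "er", "kona", "hon", "er", "kona", "hon", "sær"]
forms = {"hon", "er", "kona"}
unique_share, top_share = coverage(tokens, forms, top_n=3)
# unique_share == 0.75 ("sær" is missing); top_share == 1.0
```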
## References
Kristín Bjarnadóttir, Kristín Ingibjörg Hlynsdóttir, and Steinþór Steingrímsson. 2019. DIM: The Database of Icelandic Morphology. In Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa 2019), pages 146-154. https://www.aclweb.org/anthology/W19-6116.pdf
Jóhan Hendrik W. Poulsen. 1998. Føroysk orðabók. Føroya Fróðskaparfelag, Tórshavn, Faroe Islands.
Höskuldur Thráinsson, Hjalmar P. Petersen, Jógvan í Lon Jacobsen, and Zakaris Svabo Hansen. 2012. Faroese: An Overview and Reference Grammar, second edition. Faroe University Press and Linguistic Institute, University of Iceland, Tórshavn, Faroe Islands and Reykjavík, Iceland.

---

${}^{6}$ https://sedlasavn.setur.fo/

${}^{7}$ https://en.wiktionary.org/wiki/Category:Faroese_language

${}^{8}$ https://spraakbanken.gu.se/en/resources/fts
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TFZGxtsyk3/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,199 @@
§ ADAPTING AN ICELANDIC MORPHOLOGICAL DATABASE TO FAROESE
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
This paper describes the adaptation of the database system developed for the Database of Icelandic Morphology (DIM) to the Faroese language and the creation of the Faroese Morphological Database using that system from lexicographical data collected for a Faroese spellchecker project.
§ 1 INTRODUCTION
The Faroese Morphological Database (FMD)${}^{1}$ is the result of a joint project of the Árni Magnússon Institute for Icelandic Studies and the University of the Faroe Islands. It consists of entries for Faroese words (lexemes) with complete paradigms, including variants. Various kinds of metadata are included. It is based on a previously existing project in Iceland, the Database of Icelandic Morphology (Bjarnadóttir et al., 2019)${}^{2}$, and makes use of language data collected for a previous Faroese-language project, the spellchecker Rættstavarin.${}^{3}$ Data from DIM is used in countless language technology projects in Iceland, including smart search engines, spellchecking and hyphenation tools, taggers and parsers, speech recognition tools and online word games, and DIM is also a popular online resource for the general public. It is hoped that the new Faroese sister project will grow to be as successful in spurring the development of language technology in the Faroe Islands and in aiding the general public, researchers and language students in the use and study of the Faroese language.
§ 1.1 GOALS
The aim was to publish the FMD with the available lexical data from Rættstavarin as well as the list of given names published by the Faroese Language Council${}^{4}$. The basic features of the DIM system were used to generate all inflected forms, displaying searchable inflectional paradigms on the web and providing data for download, including all the inflected forms with POS tags, lemmas and basic metadata.
Secondary goals included adding more metadata such as tags for specific morphological, syntactic and pronunciation features, dialects, etc. Recent additions to the DIM system were also tested, in anticipation of their future use for Faroese.${}^{5}$
Ultimately, the FMD should include all extant forms of all words in the Faroese language, and they should include as much useful metadata as possible. Of course "all words" is a utopian ideal, as languages are constantly evolving and more vocabulary is both created and discovered, but it is feasible in the relatively near future to have added essentially all vocabulary from available digital texts and to have a pipeline for semi-automatically adding newly discovered vocabulary on a regular basis. In this initial project period we focused on readily available data from lexicographical sources.
§ 2 LINGUISTIC SIMILARITY
Faroese and Icelandic share many features, such as the three grammatical genders, masculine, feminine and neuter, and the four-case system of nominative, accusative, dative and genitive. Although the genitive is used much less in Faroese than in Icelandic, it certainly exists and is morphologically similar. Nouns have inherent gender, while adjectives and determiners inflect for gender. Verbs inflect for mood, tense, person and number (Thráinsson et al., 2012). A full list of inflectional categories will be provided on the FMD website, in the same manner as on the DIM website.
${}^{1}$ https://bendingar.fo
${}^{2}$ https://bin.arnastofnun.is/DMII/
${}^{3}$ Rættstavarin is available as part of the Divvun language tool package at https://divvun.org/, and the source code is available on GitHub: https://github.com/giellalt/lang-fao; a description of the project (in Faroese) may be found at https://www.setur.fo/fo/setrid/almennar-taenastur-og-grunnar/raettstavarin/
${}^{4}$ http://malrad.fo/page.php?Id=38&l=fo
${}^{5}$ See the description of the classification system in Bjarnadóttir et al. (2019).
Due to these similarities it was evident from the start that all the tools and methods that have been developed for DIM could be applied to Faroese with only minimal changes; even the web interface can be presented in much the same way, with Faroese linguistic terms replacing the Icelandic terms (e.g. singular, nominative, comparative, etc.). At this initial stage of the project, the focus was on the main features of the system, though detailed tagging was employed for some particularly important or interesting morphological and pronunciation features.
The database system for the FMD is run on a copy of the DIM system. More or less the complete software system from DIM has been set up for the FMD. The system includes the database backend, import tools, and website, with both online lookup and export functions for language technology projects.
§ 3 BUILDING THE DATABASE
The premise of the project was to make use of existing data, and by far the largest set of lexicographical data available was the data from Rættstavarin. It, in turn, is largely derived from data from the electronic version of the Faroese dictionary (Poulsen, 1998; web version 2007, currently available at sprotin.fo). Another piece of low-hanging fruit was the official Faroese Language Council list of given names.
§ 3.1 SYSTEM COMPARISON
The spellchecker data has words categorised by inflectional category according to a classification scheme which was created for the electronic version of the Faroese dictionary and slightly modified and expanded for the spellchecker. The spellchecker software has a template-based system that generates inflected forms from source files containing a lemma, a single template parameter and the name of the appropriate inflection pattern using a template for each pattern.
The FMD (and DIM), somewhat similarly, uses a template-based system to generate inflected forms, though the conventions for parameters are different (more than one parameter may be used to represent stem variations) and a relational database system is used rather than text files. The inflected forms are then stored in a table linked to the main table containing word entries. Additionally, a set of switches enables or disables the generation of specific sections of the inflectional paradigm, such as singular or plural, definite and indefinite forms for nouns, the different moods, voices and participles of a verb, etc. The first step for each inflection pattern, then, was to create a template for it. Then the list of words with that pattern from the spellchecker data could, in theory, be transformed with a simple script to the correct import format, as long as the inflectional patterns were compatible.
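The switch mechanism can be illustrated with a small sketch. The section names, template layout and example word below are invented for illustration; the real FMD schema is a relational database, not Python data.

```python
# Sketch: each template row carries the paradigm section it belongs to,
# and a word's switches can turn whole sections off without a new template.

def expand(stem, template, switches=None):
    switches = switches or {}
    return [(tag, stem + ending)
            for section, tag, ending in template
            if switches.get(section, True)]   # sections default to "on"

NOUN_TEMPLATE = [
    ("singular", "nom.sg", "ur"),
    ("singular", "acc.sg", ""),
    ("plural",   "nom.pl", "ar"),
    ("plural",   "acc.pl", "ar"),
]

full = expand("dag", NOUN_TEMPLATE)
# e.g. a word used only in the plural: the "singular" switch suppresses
# that half of the paradigm instead of requiring a separate template.
plural_only = expand("dag", NOUN_TEMPLATE, {"singular": False})
```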
§ 3.2 ADAPTED CLASSIFICATION AND ERROR CORRECTION
Indeed, the FMD has largely followed the spellchecker's inflection classification scheme, but it has been necessary to add new patterns to account for the subtler variations in word inflections in Faroese. For example, a number of words had been assigned a pattern which correctly accounts for their most usual or regular inflected forms but fails to account for certain variant forms, perhaps remnants of an older inflection, perhaps novel variants, sometimes dialectal forms, archaic forms or forms used in fixed expressions. Unless assigned a different inflection template, these words would therefore be missing some of their inflected forms. In other cases the templates would have produced erroneous inflected forms.
Some accidental errors were inherited from the Faroese dictionary, while some had been introduced by the spellchecker project, and many of them were clearly the result of a lack of care either in choosing the correct pattern, e.g. forgetting that a neuter noun whose stem ends in -s needs a pattern that doesn't add an extra -s in the genitive singular form, or in typing the pattern name, e.g. writing kv6 (feminine pattern 6) instead of k6 (masculine pattern 6). These could often be corrected by assigning the words another existing pattern, but for many words new templates were needed. In some cases a word needs a pattern of its own due to the irregularity of its inflection. There were also other errors in the spellchecker data, such as typos and spelling errors and incorrectly entered template parameters.
It quickly became apparent that the number of errors in the source material was too great to leave unchecked. It would also be easier to identify and correct them early on, while still working with the data in text files, rather than risking overwriting subsequent edits to database entries, particularly comment fields and other metadata, by updating them en masse later on.
The database system also requires that words be designated as base words or compounds, and a binary split point is required for compounds; e.g. the compound noun havnarkona is written havnar_kona in the lemma field to indicate that it is composed of havnar- and kona. Compounding had been indicated to some extent in the spellchecker data, but haphazardly and also with some errors.
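The lemma-field convention just described can be sketched in a few lines. The dictionary keys ("lemma", "split") are invented names, not the real database fields:

```python
# Sketch: an underscore marks the binary split point of a compound, so
# 'havnar_kona' is havnar- + kona, while a plain lemma is a base word.

def parse_lemma(field):
    if "_" in field:
        first, second = field.split("_", 1)  # binary split: first "_" only
        return {"lemma": first + second, "split": (first, second)}
    return {"lemma": field, "split": None}

entry = parse_lemma("havnar_kona")
# entry["lemma"] == "havnarkona"; entry["split"] == ("havnar", "kona")
base = parse_lemma("kona")
# base["split"] is None, i.e. a base word
```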
These factors led to the conclusion that all words needed to be reviewed manually, though often somewhat cursorily due to time limitations, chiefly focusing on splitting compounds and checking for obvious errors. Along the way, tagging of morphological, usage and pronunciation characteristics was begun, and it was considered desirable that certain of them should always be tagged if possible, in particular: restriction of a word to a region or dialect; archaic, obsolete or rare usage; irregular correspondence of spelling and pronunciation; and unusual word formation patterns. This became a secondary goal of word review and, while it made the review somewhat more time-consuming, it reduces the need to run through the data a second time later on, which would be even more time-consuming, and therefore serves our long-term goals well. The delay caused by manual review meant that there was no time to gather vocabulary from more sources in this round of the project, but the data has been greatly enriched and its quality improved, so it has been well worth it.
§ 3.3 IMPORTATION
Data is imported into the FMD via text files with each line containing a single word entry, which may include many required and optional database fields, including the headword, the name of the inflection template, switches to limit the paradigm, and various metadata fields. These were generated semi-automatically from the spellchecker word lists and other sources using regular-expression scripting and then manually reviewed. Templates have been created manually or sometimes semi-automatically from other templates.
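A transformation step of this kind might look like the following sketch. The field layout and the pattern-name mapping are invented, since the real import format is only described in outline here; unknown patterns are left for manual review rather than guessed at.

```python
# Sketch: turn a spellchecker-style 'lemma;pattern' line into a
# tab-separated import line 'headword<TAB>template'.
import re

PATTERN_MAP = {"k6": "masc6", "kv6": "fem6"}  # hypothetical template names

def to_import_line(spellchecker_line):
    """Return an import line, or None to queue the entry for review."""
    m = re.match(r"^(?P<lemma>\S+);(?P<pattern>\w+)$", spellchecker_line)
    if not m or m.group("pattern") not in PATTERN_MAP:
        return None
    return f"{m.group('lemma')}\t{PATTERN_MAP[m.group('pattern')]}"

line = to_import_line("dagur;k6")
# line == "dagur\tmasc6"; an unknown pattern such as "zz9" yields None
```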
§ 3.3.1 NOUNS
The inflection of nouns was generally fairly easy to handle, as they don't have as many inflected forms as adjectives or verbs and most of their patterns were already well defined. Even so, many new patterns for nouns needed to be created. For example, weak masculine nouns had only 5 basic patterns in the spellchecker data, with 3 more mixed patterns (combinations of two basic patterns) and one pattern with an irregular variant, a total of 9. In comparison, the FMD currently has 17 different templates for weak masculine nouns. This disparity is largely due to compounds with internal inflection; e.g. lítlibeiggi 'little brother' (accusative lítlabeiggja) has a more complex inflection than pápabeiggi 'father's brother' (accusative pápabeiggja). As the FMD template system has each inflected form generated from one stem and an inflectional ending, these words usually require more "stems" than other words, to account for the changes in the first half of the compound due to its separate inflection. The Faroese dictionary had not classed these words separately from compounds with an immutable first half, and the spellchecker made no provision for them, although the spellchecker project had already identified them as problematic. However, such compounds are known in Icelandic and had been dealt with successfully in DIM. The FMD has followed the DIM practice of creating a separate version of each template for internally inflected compounds where required.
§ 3.3.2 VERBS AND ADJECTIVES
Verbs and adjectives have many more inflected forms than nouns, both in Faroese and Icelandic, and partial information on the inflection of these word classes in the available sources was a problem in both projects.
Verb paradigms in the Faroese dictionary are limited, omitting first and second person singular conjugations, as well as the imperative and conjunctive (optative) moods, the present participle and the mediopassive voice. Adjective paradigms also lacked comparative and superlative forms. These were added in the spellchecker project along with expansion of verb conjugation, but the spellchecker data still contains only active voice conjugations for most verbs, and the comparative and superlative forms of irregular adjectives were not obvious.
In the FMD, the verb templates now support full personal conjugation in active and mediopassive voice and a full declension of the past participle, and full paradigms are also displayed for all adjectives. Variant forms contained in the Faroese dictionary but not found in the inflection tables or the spellchecker paradigms have been added to the FMD. Additional variant forms from textual sources, such as online media and the card index of word citations (Seðlasavnið)${}^{6}$ at the University of the Faroe Islands, have also been added.
Some software modifications were required to support Faroese verbs and adjectives; these modifications can be useful for Icelandic as well. The mediopassive imperative singular (without pronominal clitic) had not previously been supported, but proved to be necessary for both languages. The indefinite inflection of the comparative occurs in most Faroese adjectives and was consequently added to the system. This category also exists in Icelandic but is extremely rare.
The greater number of inflected forms of verbs, the need for expanding their paradigms and the greater number of irregular verbs than irregular nouns made the creation of verb templates more time-consuming; on the other hand, there are over nine times as many nouns as verbs, which reduced the time needed for review of individual words, so that, overall, the nouns took more time.
§ 3.3.3 OTHER PARTS OF SPEECH
Inflection patterns for pronouns, determiners, articles and numerals have been created based on data gathered from the relevant dictionary entries, the spellchecker data, and the Faroese grammar by Thráinsson et al. (2012). These word classes never had inflection tables in the dictionary, only inline mentions of inflected forms and usage examples. Their inflection is relatively simple and did not pose problems on a different scale from the work on Icelandic. Uninflected word classes are also included in the data; these present no problems and most of them have been added to the FMD.
§ 4 PRESENT STATE
Currently, the FMD contains over 72,000 entries. These include close to 67,000 words added from the spellchecker word lists and about 3,000 more taken directly from the dictionary, either via dictionary data collected for the spellchecker project or manual lookup on the web, and 1,688 given names from the Faroese Language Council's name list. Several hundred words have been added from other sources such as web texts and other published texts, Wiktionary${}^{7}$, and Thráinsson et al. (2012).
§ 4.1 FUTURE ADDITIONS
The FMD currently does not cover proper nouns well. More are needed, e.g. place names, company names and surnames. Many of these may be sourced from government lists, phone directories, etc. The Faroese Text Collection${}^{8}$ has been used as a rough gauge of the completeness of the FMD and can serve as a source for further general vocabulary. Although it only has 1.1 million tokens, at this early stage in the development of the Faroese morphological database it yields some interesting material. It can continue to provide a means of evaluating the progress of the database, i.e. what proportion of unique tokens in the corpus are already in the database and whether the most frequent word forms in the corpus are included. After most or all of the vocabulary in the Faroese Text Collection has been added, we will hopefully have access to a much larger Faroese corpus. We expect that there will be a number of erroneous and nonstandard forms in the corpus data; these will be added to a special part of the database dedicated to that purpose.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,767 @@
# Integrating rules and neural nets for morphological tagging of Norwegian: Results and challenges
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract
In this paper, we report on efforts to improve the Oslo-Bergen Tagger for Norwegian morphological tagging by using a hybrid system that combines the output of the rule-based Constraint Grammar tagger with a neural sequence-to-sequence model trained for tagging. The results are very promising for cases where the two systems intersect in tokenisation and morphological analysis, but problems remain in integrating the two systems in many cases.
## 1 Introduction
The Oslo-Bergen Tagger (OBT, Hagen and Johannessen 2003; Johannessen et al. 2012) is a widely used tool for morphological tagging of Norwegian text. It has existed in various incarnations for around 25 years, first as a purely rule-based system and later coupled with a statistical module for disambiguation. In this paper, we report on our recent efforts to bring the system into the age of neural networks and show that, even today, the rules boost accuracy considerably over a purely neural system, although there are challenges in combining rules and neural nets due to divergent tokenisations.
The structure of the paper is as follows: Section 2 gives some historical background on OBT, and Section 3 describes the current status of its rule-based component. Section 4 describes the training and evaluation data that we have used in developing the new system. Section 5 then provides the details of how our neural system was trained, while Section 6 describes how it was combined with the rule system. Section 7 evaluates the performance of the neural system alone as well as the combined system. Section 8 concludes.
## 2 History of the Oslo-Bergen Tagger
The Oslo-Bergen Tagger was originally developed between 1996 and 1998 by the Tagger Project at the University of Oslo. Rules for morphological and syntactic disambiguation were written in the first version of the Constraint Grammar framework (Karlsson et al., 1995), retrospectively called CG1. The rules were parsed by the only existing CG rule interpreter at the time, developed by Lingsoft AB. The input to CG disambiguation rules is multitagged text, i.e., text where each token has been annotated with all possible lexical analyses. Hence, the project also developed a lexicon with lemmas and inflected forms (later known as Norsk ordbank) and a combined tokenizer/multitagger.

The tagger was developed for both Bokmål and Nynorsk, the two written varieties of Norwegian. In this article, we will only focus on the Bokmål version of the tagger, and only on the tokenizer and the morphological disambiguation.

The first version of the tagger was tested on an unseen evaluation corpus with a wide variety of text genres and achieved an F1-score of 97.2 (Hagen and Johannessen, 2003, 90). The numbers behind the F1-score, a precision of 95.4 and a recall of 99.0, reveal that the tagger leaves some ambiguity but makes relatively few errors. At the time, this was considered acceptable, as the tagger was mostly used to annotate written corpora for linguistic research, where high recall was considered more important than high precision.

In 2000 the rule interpreter was replaced by a reimplementation in Allegro Common Lisp made by Paul Meurer in cooperation with the Text Laboratory at the University of Oslo. At the time, Meurer was employed at Aksis in Bergen, and hence the tagger was named the Oslo-Bergen Tagger (OBT).
Some years later, the need for a new upgrade became urgent. Firstly, OBT was quite slow. This was not a big problem in 2000, but our corpora were soon getting bigger, and speed became important. The project Norwegian Newspaper Corpus (2007-2009) gave the Text Laboratory the opportunity to translate the CG1 rules to the new, more efficient and expressive CG3 format and to use a faster rule interpreter made by the VISL project at the University of Southern Denmark. Secondly, the ambiguities that were left in the output from OBT made the tagger unsuitable for many language technology purposes and applications that require the text to be completely disambiguated. We therefore extended OBT with a statistical module, implemented as a Hidden Markov Model, that disambiguated the remaining morphological ambiguities and also provided the system with a new feature: disambiguation of lemmas. The new OBT+Stat system achieved an accuracy of around 96 percent (Johannessen et al., 2012).
In the version of the tagger presented here, we have replaced the original HMM module with one based on neural networks. We did this for two reasons: First, the new module employs technology that has proven to yield superior results in a variety of NLP tasks. Second, the original module did not take into consideration the ambiguity left by the CG rules, meaning that the HMM might select a tag that had previously been removed by the disambiguation rules, or that was not even present in the tagger lexicon. The new machine learning module ranks possible readings by probability, allowing us to find the most probable reading (if any) in the intersection between its output and the remaining CG readings. This way, the work already done by the CG disambiguation rules is not discarded when the intersection is non-empty, but it leaves a question as to what to do when the intersection is empty.
## 3 The rule-based tokenizer and tagger
In this section, we first present some of the main tasks for the tokenizer and multitagger before we give a short description of the constraint grammar module. The tokenizer uses a lexicon with all possible lexical readings, where a reading is a combination of a lemma and a morphosyntactic tag chosen from a set of 149 possible analyses.${}^{1}$ The lexicon was originally based on Norsk ordbank 2005,${}^{2}$ but has since been updated with words more recently introduced into the language (such as tvitre 'tweet'). The newest version of the tokenizer is written in Python and in most cases mirrors the original tokenizer written in Perl. There is one major exception: the original system from the late '90s worked according to the strategy "Disambiguate as soon as possible" (Karlsson et al., 1995). This resulted in fixed expressions like blant annet ('among other things', adverb) and etter hvert ('little by little', preposition) being allowed, and disambiguated, in the lexicon. In the recent version of the tokenizer, such expressions are removed from the lexicon and the possible ambiguity is dealt with in the CG module. The main principle for the tokenizer is therefore to split tokens on white space or a sentence delimiter like a full stop or a question mark. For each token identified, the original word form is rendered inside a <word>-tag and looked up in the lexicon. Non-sentence-initial capitalized words are identified as proper nouns. Words that exist in the lexicon are assigned all readings found there. If a word is not found in the lexicon and not identified as a proper noun, it is sent to a compound analyzer. Most unknown words will get an analysis here, as many of them are productively created compounds. Some words will still get the tag ukjent ('unknown') from the tokenizer; these are often dialect words not standardized in the lexicon, or foreign words. Figure A in the Appendix shows how the tokenizer and multitagger deal with the sentence TV-programmet "Ut i naturen" begynner kl. 21.15. ('The TV program "Ut i naturen" starts at 21.15.'), which contains quotation marks, abbreviations, and a time expression.
The tokenizer also identifies sentences using sentence delimiters. A list of known abbreviations and linguistic rules, like the rule "the word including the full stop character is an abbreviation if the word is in the abbreviation list or if the following word is not capitalized", identifies abbreviations like kl. (the abbreviation for "o'clock" used to specify time in Norwegian) in Figure A. Headlines are also identified by rules and get their own tag.
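The quoted abbreviation rule can be expressed directly in code. A minimal sketch, with a toy abbreviation list; the real tokenizer has a much larger list and additional rules:

```python
# Sketch of the abbreviation rule quoted above (toy abbreviation list).
ABBREVIATIONS = {"kl.", "f.eks.", "bl.a."}

def is_abbreviation(word, next_word):
    """A word ending in '.' counts as an abbreviation if it is in the
    abbreviation list or if the following word is not capitalized."""
    if not word.endswith("."):
        return False
    if word.lower() in ABBREVIATIONS:
        return True
    return next_word is not None and not next_word[0].isupper()

print(is_abbreviation("kl.", "21.15"))    # in the list
print(is_abbreviation("slutt.", "Neste")) # sentence-final full stop
```

Note that the second condition makes the rule context-sensitive: an unknown word with a trailing full stop is still treated as an abbreviation when the next word is lower-cased, since a genuine sentence boundary would normally be followed by a capital letter.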
The constraint grammar module takes tokenized and multitagged text as input, and its main task is to reduce the number of readings to ideally one per word. The number of readings left by the multitagger varies a lot. In the test corpus used in this article (which will be further described in Section 4) there are on average 2.04 readings per word. After the CG rules are applied, there are on average 1.09 readings left per word.
---

${}^{1}$ The complete list is available at http://tekstlab.uio.no/obt-ny/morfosyn.html

${}^{2}$ https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-5/

---
Figure B in the Appendix shows the output from the CG module in debug mode for the sentence Rosa cupcakes hører kanskje med når man skal ha bloggtreff? ('Pink cupcakes might be part of a blog meeting?'). Readings that have been removed start with ";", and the ID numbers of the rules applied are appended to each reading. Note that the English loan word cupcakes is not identified in the lexicon or in the compound analyzer and has therefore got the tag ukjent 'unknown'. The compound bloggtreff 'blog meeting' was not in the lexicon but has got two readings from the compound analyzer. As the examples show, there are both REMOVE rules (remove a reading) and SELECT rules (select a reading). A rule can be very simple, like rule 2430 in Figure 1, which says "select the verb infinitive reading if the verb to the left is a modal auxiliary and not in the set of dangerous infinitives (= not likely infinitives)".
---

    #:2430
    SELECT:2430 (verb inf) IF
        (NOT 0 farlige-inf)
        (-1 m-hj-verb)
    ;

---

Figure 1: Simple SELECT rule
Figure 2 shows an example of a more complex rule with linked context conditions somewhere to the right in the sentence. The rule says: "choose the subjunction reading - if somewhere to the right there is a safe noun or pronoun (stop looking if a word on the way has a reading that is not an adverb, adjective or determinative) - and - if there is a word in the present or past tense after the noun/pronoun (adverbs between are fine)."
---

    #:2579
    SELECT:2579 (sbu) IF
        (...)
        (**1C subst/pron BARRIER ikke-adv-adj-det)
        (**1C subst/pron LINK *1 ikke-adv LINK 0 pres/pret)
    ;

---

Figure 2: More complex SELECT rule
The CG grammar for Bokmål has more than 2300 rules, 1995 of which are SELECT rules. Some rules apply to all possible words, while some are rules for specific word forms. When the original CG grammar was developed, a training corpus of 100,000 words from novels, newspapers and magazines was used. For each new rule added to the grammar, we checked how the rule worked by looking at recall and precision. Most rules remove or choose readings without making too many errors. But in the last period of the project, we made around 250 heuristic rules to speed up the disambiguation. These rules were riskier, but in our small training corpus they worked well. Later in this article, we will see whether the combination of the CG rules and the neural net is affected if the heuristic rules are removed from the grammar.
## 4 Training and evaluation data
The training and evaluation corpus that was used in earlier stages of development of the OBT system is no longer suitable, because the tagset and the tokenisation principles have evolved. Instead of bringing this corpus up to date, we chose to use the Norwegian Dependency Treebank (NDT, Solberg et al. 2014) in the development of the new version of OBT. The Bokmål part of NDT comprises around 300,000 tokens and consists of blog text, news text, parliament proceedings and government white papers.
The NDT CoNLL data were converted to the format of the OBT. We also extracted the raw text and ran OBT on it without statistical disambiguation, to compare the outputs. If the NDT analysis was not among the analyses produced by OBT, we either corrected the NDT annotation, if that was the source of the error, or changed the rules of the OBT system, if that could easily be done. This process was iterated a few times. Note that during this period, the whole data set was used for development, as is common with rule-based systems. The goal was to improve both the accuracy of the rule-based disambiguation and the quality of the training data for the neural component.
The performance of the rule-based system at the end of this phase is shown in Table 1. When the heuristic rules are used, we see that in 7.5% of cases OBT produces an ambiguous analysis containing the correct tag, whereas 1.8% of tokens are only given (one or more) wrong analyses. Disabling the heuristic rules reduces the number of wrong tags by 0.2%, but at the cost of an increase of 3.3% in tokens that get an ambiguous analysis containing the correct tag.
The role of the statistical system is to pick the correct analysis in the ambiguous cases. On its own, the neural net might be able to predict the right analysis even in cases where the rules are wrong. However, such an analysis will be discarded when we intersect its output with the rules.
<table><tr><td colspan="3">with heuristic rules</td></tr><tr><td>unambiguous correct</td><td>280650</td><td>(90.7%)</td></tr><tr><td>ambiguous incl. correct</td><td>23219</td><td>(7.5%)</td></tr><tr><td>wrong</td><td>5413</td><td>(1.8%)</td></tr><tr><td colspan="3">without heuristic rules</td></tr><tr><td>unambiguous correct</td><td>270830</td><td>(87.6%)</td></tr><tr><td>ambiguous incl. correct</td><td>33597</td><td>(10.8%)</td></tr><tr><td>wrong</td><td>4855</td><td>(1.6%)</td></tr></table>

Table 1: Performance of the rule-based system
For the training of the neural system, we then split the corpus into train, dev and test sets. While doing this, we made sure that the output tags in the training set covered all output tags in the dev and test sets, to ensure that the model was trained with samples from all tags. We do this by first initializing the Python random seed to 0, then splitting the data and checking whether the training set covers all tags. If it does not, we increase the random seed by one and repeat until we find a training set that covers all the tags in the other sets. In this way, we randomly split the dataset into 80-10-10 percent partitions to obtain the train, dev and test sets respectively.
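The seed-incrementing split procedure can be sketched as follows. This is our own minimal reconstruction, assuming sentences and a tag-extraction function as inputs; the actual data representation in the tagger pipeline differs.

```python
# Sketch of the splitting procedure described above: retry with seeds
# 0, 1, 2, ... until the training partition covers every tag that
# occurs in the dev and test partitions.
import random

def split_covering_all_tags(sentences, tags_of):
    """Randomly split 80-10-10; tags_of(sentence) yields its tags."""
    seed = 0
    while True:
        random.seed(seed)
        shuffled = sentences[:]
        random.shuffle(shuffled)
        n = len(shuffled)
        train = shuffled[: int(0.8 * n)]
        dev = shuffled[int(0.8 * n): int(0.9 * n)]
        test = shuffled[int(0.9 * n):]
        train_tags = {t for s in train for t in tags_of(s)}
        rest_tags = {t for s in dev + test for t in tags_of(s)}
        if rest_tags <= train_tags:   # training set covers all other tags
            return train, dev, test
        seed += 1

# Toy data: each "sentence" is just a list with one of five tags.
sentences = [[f"tag{i % 5}"] for i in range(100)]
train, dev, test = split_covering_all_tags(sentences, lambda s: s)
print(len(train), len(dev), len(test))  # 80 10 10
```

Since the split is re-drawn from scratch with a fresh seed on every failure, the procedure stays a plain random 80-10-10 split; the coverage check only rejects unlucky draws.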
Finally, the data was reformatted for the neural network. Figure 3 shows an example of input and output for a sentence. The input is the tokenized form of the sentence. The output is the sequence of serialized tags for each token in the input. The token <next_token> is an indicator that all tags of the corresponding input token have finished and tags of the next input token start afterward.
---

    INPUT: Men det er bare noe jeg tror .
    OUTPUT:
    :konj: clb <next_token>
    :pron: 3 ent nøyt pers <next_token>
    :verb: pres <next_token>
    :adv: <next_token>
    :pron: 3 ent nøyt pers <next_token>
    :pron: 1 ent hum nom pers <next_token>
    :verb: pres <next_token>
    :$punc$: clb <punkt>

---

Figure 3: An example input and output for a sentence.
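The serialization into the target sequence can be sketched as below. This is a simplified illustration, not the project's preprocessing code: the helper name is ours, and the handling of the sentence-final separator is reduced to a plain join.

```python
# Sketch of the tag serialization in Figure 3: one (POS, features)
# reading per token, flattened into a single target string with
# <next_token> marking token boundaries.
def serialize(readings):
    parts = []
    for pos, feats in readings:
        # Each token becomes ":POS:" followed by its feature values.
        parts.append(" ".join([f":{pos}:"] + feats))
    return " <next_token> ".join(parts)

readings = [("konj", ["clb"]),
            ("pron", ["3", "ent", "nøyt", "pers"]),
            ("verb", ["pres"])]
print(serialize(readings))
# :konj: clb <next_token> :pron: 3 ent nøyt pers <next_token> :verb: pres
```

The separator token is what lets the decoder emit a variable number of tags per input token while keeping the alignment between input tokens and tag groups recoverable.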
## 5 The neural system
Recently, a BERT (Devlin et al., 2018) pre-trained encoder (nb-bert-base) was published by the Norwegian National Digital Library (Kummervold et al., 2021). This pre-trained encoder for Norwegian provides a rich feature set that was previously lacking for the language. Furthermore, since the tagged corpus is very small in comparison to the corpus the pre-trained model was trained on, it is important to use the pre-trained model in order to be able to generalize to unseen data. We therefore follow an approach similar to that of Omelianchuk et al. (2020) and use a sequence-to-sequence (seq2seq) setting to tag the sentences using the pre-trained model.
Sequence-to-sequence models have two main components: an encoder and a decoder. The encoder side is set to nb-bert-base (NbAiLab, 2021). For the decoder, we randomly initialize 6 layers of size 768 with 12 attention heads. The decoder also has cross-attention layers, as these have been shown to be effective in seq2seq training (Gheini et al., 2021). We freeze the encoder weights throughout the training, since using the encoder as a feature extraction mechanism in this way has been shown to be beneficial (Zoph et al., 2016) and is common practice (Gheini et al., 2021). We use the EncoderDecoderModel provided by the HuggingFace transformers library (Wolf et al., 2020) to configure and train the model.
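A configuration along these lines can be sketched with the HuggingFace transformers API. This is our reconstruction from the description above, not the authors' released code, and running it requires downloading the nb-bert-base checkpoint:

```python
# Configuration sketch (assumed details, not the authors' exact code):
# a frozen nb-bert-base encoder with a freshly initialized 6-layer
# decoder that cross-attends to the encoder states.
from transformers import (BertConfig, BertLMHeadModel, BertModel,
                          EncoderDecoderModel)

decoder_config = BertConfig(
    vocab_size=87,             # 82 tags + 5 special tokens (see below)
    num_hidden_layers=6,       # 6 decoder layers ...
    hidden_size=768,           # ... of size 768 ...
    num_attention_heads=12,    # ... with 12 attention heads
    is_decoder=True,
    add_cross_attention=True,  # cross-attention to the encoder output
)

encoder = BertModel.from_pretrained("NbAiLab/nb-bert-base")
decoder = BertLMHeadModel(decoder_config)      # randomly initialized
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)

# Freeze the encoder: it serves purely as a feature extractor.
for param in model.encoder.parameters():
    param.requires_grad = False
```

With the encoder frozen, only the decoder (and its cross-attention) receives gradient updates, which keeps the number of trainable parameters small relative to the full model.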
The encoder-decoder model receives its input as the identifiers of the tokens (token numbers) in the input vocabulary and outputs token numbers in the output vocabulary. Thus, the input and output are tokenized using these vocabularies. Since the encoder model (nb-bert-base) had already been trained using the widely utilized sub-word tokenizer WordPiece (Wu et al., 2016), we use that tokenizer as provided by the HuggingFace Tokenizers library. For the decoder side, since our vocabulary is very small and known in advance (82 tags and 5 extra special tokens such as [CLS] and [SEP]), we do not need to train a special tokenizer. We define the vocabulary manually with these output tokens for use by the WordPiece tokenizer.
The training configuration is as follows: We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.0001. We set the batch size to 16 sentences, as this is the amount our graphics cards could handle. We use the negative log-likelihood loss (Yao et al., 2020) to compute the loss in each batch between the model output and the expected output. For any parameter not mentioned in this section, we use the default value defined by version 4.17.0 of the Transformers library in the objects of the following types: BertConfig, EncoderDecoderModel, EncoderDecoderConfig, and BertModel.
We evaluate the model on the dev set during training, using the BLEU score (Papineni et al., 2002), which is widely used to evaluate seq2seq models. We compute the BLEU score between the expected output and the model output for each sentence and average these scores over the whole dev set. We run the training for 300 epochs and keep the model that achieves the maximum average BLEU score on the dev set.
## 6 Combining neural nets and rules
As mentioned in Section 2, the current system prefers tags that are found in the intersection between the output of the CG rules and that of the neural network. Ideally, we would be able to find such intersections for each individual token separately. However, since the probability of a reading for a particular token depends on the selected readings for all other tokens in the sentence, the only viable option is to consider readings for entire sentences. Thus, for each input sentence, we find the list of possible readings produced by the network and calculate the probability of each. Then, for each reading in this list, ordered by decreasing probability, we go through each token and check whether the tag assigned by the network is also found among those left by the CG disambiguation rules. If it is not found, we skip to the next reading in the list. If it is found, we go on to check the next token, and so on until we reach the end of the sentence, at which point the reading is picked as the selected one for the sentence. For the present test set, we find intersecting tags for all tokens in 1412 of the 2003 sentences (70.5%). The cases with missing intersections may be due to differences in either tokenisation (205 cases) or tag assignments (386 cases) between the two systems. When the tokenisations are different, it is not clear what to do. But if the tokens are the same and only the tag assignments differ, we can default to the most probable reading in the neural net output. We explore this option in Section 7.2.
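The sentence-level intersection search can be sketched as below. The data structures are assumptions for illustration (the actual system works on richer reading objects), but the control flow matches the description: scan network readings in order of decreasing probability and accept the first one that is compatible with the CG output at every token.

```python
# Sketch of the intersection search described above (toy data
# structures, not the system's internal representation).
def pick_reading(nn_readings, cg_readings):
    """nn_readings: whole-sentence readings from the network, ordered
    by decreasing probability; each is a list of one tag per token.
    cg_readings: for each token, the set of tags left by the CG rules.
    Returns the most probable network reading that intersects the CG
    output at every token, or None if no such reading exists."""
    for reading in nn_readings:
        if all(tag in cg_readings[i] for i, tag in enumerate(reading)):
            return reading
    return None

cg = [{"det fl", "pron 3 fl"}, {"subst appell"}]
nn = [["det ent", "subst appell"],   # most probable, but token 0 clashes
      ["det fl", "subst appell"]]    # next one intersects everywhere
print(pick_reading(nn, cg))
```

Because the candidates are already sorted by probability, the first compatible reading is also the most probable compatible one, so no further comparison is needed once a match is found.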
Figure 4 shows a case where the tokenisation of the neural system does not match the gold data in the test set. The neural system has split the initial, unknown proper name at a hyphen, whereas the CG tagger keeps it as one token. Since tokenisation is part of a preprocessing step, and misalignments in tokenisation are a problem to be solved separately from tag assignment, in this paper we focus primarily on cases where the two systems produce matching tokenisations; improving the tokenisation match will be part of future work.
    Neural net: Garosu - gil , som betyr [...]
    CG: Garosu-gil , som betyr [...]

Figure 4: Mismatching tokenisation
---

<img src="https://cdn.noedgeai.com/01964130-10f1-7393-8885-4908e736cdb0_4.jpg?x=837&y=829&w=633&h=835&r=0"/>

Figure 5: Non-intersecting tags

---
Figure 5 shows the problem of mismatching tags. For the first word, the CG tagger has left five possible analyses, and the neural net has correctly disambiguated to the plural adjective reading. However, OBT did not recognize the second word, cupcakes, and has therefore left an ukjent ('unknown') tag, while the neural system has no analysis with that tag. Instead, the most probable analysis of the sentence according to the neural net correctly has cupcakes as an indefinite plural noun. However, since tag probabilities are conditional on all other tags in the sentence, these two analyses are incomparable: it is not safe to disambiguate the CG analysis of rosa based on this analysis from the neural net, especially not when the mismatching tag is on the neighbouring word cupcakes.
<table><tr><td>system</td><td>accuracy</td></tr><tr><td>pure ML</td><td>96.9%</td></tr><tr><td>OBT + ML</td><td>99.0%</td></tr><tr><td>OBT w/o heur. + ML</td><td>99.0%</td></tr></table>

Table 2: Accuracy of different systems, sentences with intersecting tags
In this particular case, the neural net is correct in its analysis of cupcakes. In general, it might be safe to assume that the neural system is correct in cases where the CG tagger assigns ukjent, and this is an option we will pursue in future research. However, as we will see in Section 7, the neural system is often incorrect in cases where the tags do not intersect. Solving this problem may require more training data or fine-tuning the parameters of the tag generation process in the decoder of the seq2seq model.
## 7 Evaluation and error analysis
### 7.1 Sentences with intersecting tags
We first focus on the restricted cases where the ML system and the CG grammar not only have matching tokenisations but also intersecting tags. We evaluate three different setups:

1. the trained neural net used as a stand-alone morphological tagger;
2. the rule-based system intersected with the neural net as described in Section 6;
3. as the previous, but without the heuristic rules.
The performance of the three systems is shown in Table 2. Because we evaluate on intersecting tags only, the numbers do not show the actual performance of the system on running text. They do, however, clearly show that in the 70.5% of cases where the tags intersect, the rules strongly improve the performance of the system: two-thirds of the tokens that are mistagged by the neural net now get a correct analysis. We also see that it makes no difference whether we run the system with or without the heuristic rules: the reduction of wrong tags that we saw in Table 1 is balanced out by the increase in ambiguity. On the sentences where this setup works, the performance is extremely good, with an accuracy of 99.0%. By contrast, the widely used spaCy tagger reports an accuracy of 95.0% for morphological tagging of Norwegian UD.${}^{3}$
Since removing the heuristic rules gave no increase in performance, we focus on the setup with the full rule set in the following. This system mistags 184 tokens (out of 18,612 in total in the matching sentences of the test set), whereas the pure ML system mistags 565 tokens. However, the error profiles of the two systems are quite different, suggesting possibilities for further improvement.
Tables 3 and 4 show the twelve most common error types of the two systems. We see that a relatively common error in the OBT + ML system involves perfect participles, which often coexist with homonymous adjectives in Norwegian (as in other Germanic languages, cf. English 'bored') with often very slight or no semantic difference. OBT+ML overapplies the adjective analysis (in three different varieties) compared to the gold data, for a total of 14 + 10 + 6 = 30 errors. By contrast, the ML system on its own makes only 8 + 8 = 16 errors of this kind, suggesting that the rules disambiguate wrongly. Performance might therefore increase if we leave this decision to the neural net, though it is worth mentioning that this system makes 6 errors in the opposite direction (which happens only twice when the rules are used and therefore does not show up in the table). Apart from the errors with participles, all other frequent errors involve gender assignment, or number assignment on indefinite neuter nouns. The latter distinction is hard to make because these indefinite neuters make no morphological distinction between singular and plural, and the context is not always clear. As for the gender errors, at least some of these are errors in the gold tags that were not caught in our manual correction. The feminine/masculine distinction has disappeared in the Oslo dialect of Norwegian (Lødrup, 2013), and it may have been hard for the annotators to choose the correct tag. Another debatable case is gender assignment on proper nouns, which is often missing from the ML system output, but which is also not systematic in the gold data. Here it may be better to simply standardise on not assigning gender to proper nouns.
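An error table like Tables 3 and 4 boils down to counting (gold, predicted) confusion pairs over mistagged tokens. A sketch of that bookkeeping, with an assumed list-of-feature-lists representation of the tags:

```python
# Sketch of how confusion tables like Tables 3 and 4 can be derived:
# count (gold tag, predicted tag) pairs over mistagged tokens.
from collections import Counter

def error_profile(gold_tags, predicted_tags, top_n=12):
    """Return the top_n most frequent (gold, predicted) confusion pairs.
    Tags are lists of feature strings; lists are made hashable as tuples."""
    pairs = Counter(
        (tuple(g), tuple(p))
        for g, p in zip(gold_tags, predicted_tags)
        if g != p                    # only count actual errors
    )
    return pairs.most_common(top_n)

gold = [[":verb:", "perf-part"], [":adv:"], [":verb:", "perf-part"]]
pred = [[":adj:", "ent"], [":adv:"], [":adj:", "ent"]]
print(error_profile(gold, pred))
```

Grouping errors by full tag pairs, rather than by token, is what makes systematic confusions such as the participle/adjective overlap visible in the tables.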
---

${}^{3}$ See https://spacy.io/models/nb. As the Norwegian UD corpus (Øvrelid and Hohle, 2016) is an automatic conversion of the NDT corpus, the complexity of the tasks should be comparable, although the test split is not identical.

---
<table><tr><td>Gold tag</td><td>Predicted tag</td><td>Freq</td></tr><tr><td>[':verb:', 'perf-part']</td><td>[':adj:', '<perf-part>', 'ent', 'm/f', 'ub']</td><td>14</td></tr><tr><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>[':subst:', 'appell', 'ent', 'fem', 'ub']</td><td>13</td></tr><tr><td>[':verb:', 'perf-part']</td><td>[':adj:', '<perf-part>', 'ent', 'nøyt', 'ub']</td><td>10</td></tr><tr><td>[':adj:', 'ent', 'nøyt', 'pos', 'ub']</td><td>[':adj:', 'ent', 'm/f', 'pos', 'ub']</td><td>10</td></tr><tr><td>[':subst:', 'appell', 'fl', 'mask', 'ub']</td><td>[':subst:', 'appell', 'fem', 'fl', 'ub']</td><td>9</td></tr><tr><td>[':subst:', 'appell', 'ent', 'nøyt', 'ub']</td><td>[':subst:', 'appell', 'fl', 'nøyt', 'ub']</td><td>8</td></tr><tr><td>[':verb:', 'perf-part']</td><td>[':adj:', 'ent', 'm/f', 'pos', 'ub']</td><td>6</td></tr><tr><td>[':subst:', 'appell', 'be', 'fl', 'mask']</td><td>[':subst:', 'appell', 'be', 'fem', 'fl']</td><td>5</td></tr><tr><td>[':subst:', 'appell', 'be', 'ent', 'mask']</td><td>[':subst:', 'prop']</td><td>5</td></tr><tr><td>[':pron:', '3', 'fl', 'pers']</td><td>[':det:', 'fl', 'kvant']</td><td>4</td></tr><tr><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>[':subst:', 'appell', 'ent', 'nøyt', 'ub']</td><td>4</td></tr><tr><td>[':subst:', 'appell', 'fl', 'nøyt', 'ub']</td><td>[':subst:', 'appell', 'ent', 'nøyt', 'ub']</td><td>4</td></tr></table>
Table 3: Most frequent errors, OBT + ML
<table><tr><td>Gold tag</td><td>Predicted tag</td><td>Freq</td></tr><tr><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>[ ':subst:', 'appell', 'ent', 'fem', 'ub']</td><td>13</td></tr><tr><td>[':adj:', 'ent', 'nøyt', 'pos', 'ub']</td><td>[':adj:', 'ent', 'm/f', 'pos', 'ub']</td><td>12</td></tr><tr><td>[':subst:', 'appell', 'fl', 'mask', 'ub']</td><td>[':subst:', 'appell', 'fem', 'fl', 'ub']</td><td>10</td></tr><tr><td>[':verb:', 'perf-part']</td><td>[':adj:', '<perf-part>', 'ent', 'm/f', 'ub']</td><td>8</td></tr><tr><td>[':verb:', 'perf-part']</td><td>[':adj:', '<perf-part>', 'ent', 'nøyt', 'ub']</td><td>8</td></tr><tr><td>[':subst:', 'mask', 'prop']</td><td>[':subst:', 'prop']</td><td>8</td></tr><tr><td>[':subst:', 'appell', 'ent', 'nøyt', 'ub']</td><td>[':subst:', 'appell', 'fl', 'nøyt', 'ub']</td><td>8</td></tr><tr><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>[':subst:', 'appell', 'ent', 'nøyt', 'ub']</td><td>8</td></tr><tr><td>[':subst:', 'appell', 'ent', 'fem', 'ub']</td><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>7</td></tr><tr><td>[':subst:', 'appell', 'be', 'fl', 'mask']</td><td>[':subst:', 'appell', 'be', 'fem', 'fl']</td><td>6</td></tr><tr><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>[':prep:']</td><td>6</td></tr><tr><td>[':adj:', '<perf-part>', 'ent', 'm/f', 'ub']</td><td>[':verb:', 'perf-part']</td><td>6</td></tr></table>
Table 4: Most frequent errors, ML system (intersecting tags only)
<table><tr><td>Gold tag</td><td>Predicted tag</td><td>Freq</td></tr><tr><td>[':adj:', 'ent', 'nøyt', 'pos', 'ub']</td><td>[':adj:', 'ent', 'm/f', 'pos', 'ub']</td><td>24</td></tr><tr><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>[':prep:']</td><td>24</td></tr><tr><td>[':prep:']</td><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>19</td></tr><tr><td>[':verb:', 'pres']</td><td>[':prep:']</td><td>18</td></tr><tr><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>[':subst:', 'appell', 'ent', 'fem', 'ub']</td><td>18</td></tr><tr><td>[':prep:']</td><td>[':subst:', 'prop']</td><td>17</td></tr><tr><td>[':subst:', 'appell', 'ent', 'fem', 'ub']</td><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>16</td></tr><tr><td>[':prep:']</td><td>['$punc$', ':<komma>:']</td><td>15</td></tr><tr><td>[':prep:']</td><td>[':verb:', 'pres']</td><td>14</td></tr><tr><td>[':subst:', 'appell', 'fl', 'mask', 'ub']</td><td>[':subst:', 'appell', 'fem', 'fl', 'ub']</td><td>14</td></tr><tr><td>[':subst:', 'mask', 'prop']</td><td>[':subst:', 'prop']</td><td>14</td></tr><tr><td>[':subst:', 'appell', 'ent', 'mask', 'ub']</td><td>[':subst:', 'appell', 'ent', 'nøyt', 'ub']</td><td>14</td></tr></table>
Table 5: Most frequent errors, ML system (all matching tokenisations)
<table><tr><td>system</td><td>accuracy</td></tr><tr><td>pure ML</td><td>92.8%</td></tr><tr><td>OBT w/o heur. + ML</td><td>94.1%</td></tr></table>
Table 6: Accuracy of different systems, all sentences with matching tokenisation
### 7.2 All sentences with matching tokenisations
To test whether the neural system can be trusted in cases where there is no overlap in tag assignment, we also evaluate the system on all sentences where the tokenisation matches. We test two setups: one where we use the (non-heuristic) rules plus the neural system as described above, but default to the output of the neural tagger in cases where there is no overlap, and one where we only use the best ML tag. The performance of the two setups is given in Table 6.
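The decision rule behind the two setups can be sketched as follows. This is only an illustration; the function name, the flat tag strings, and the interface are stand-ins, not the system's actual API.

```python
def combine(cg_readings, ml_ranking, use_rules=True):
    """Pick a tag for one token.

    cg_readings: set of readings left by the (non-heuristic) CG rules.
    ml_ranking:  tags from the neural tagger, most probable first.
    """
    if use_rules:
        # First setup: prefer the most probable ML tag that the
        # rules also allow (the intersection of the two outputs).
        for tag in ml_ranking:
            if tag in cg_readings:
                return tag
    # Empty intersection, or the pure-ML setup: default to the
    # neural tagger's top-ranked tag.
    return ml_ranking[0]

# Overlap: the rules veto the ML system's first choice.
print(combine({"subst_fem", "subst_mask"}, ["verb_pres", "subst_mask"]))  # subst_mask
# No overlap: fall back to the neural tagger's best tag.
print(combine({"adj_pos"}, ["verb_pres", "subst_mask"]))  # verb_pres
```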
As we can see, the results drop considerably. Overall performance is now below that of the spaCy tagger. Put another way: when we evaluate all sentences with matching tokenisations, the test set grows by 8036 tokens, from 18612 to 26648, while the number of errors grows by 1375, from 565 to 1940, which amounts to an error rate of 17.1% on the tokens where the intersection with the output of the CG tagger is empty. Table 5 shows the frequency of errors, which looks very different from Table 4. Most strikingly, there are now many errors involving the part-of-speech tag :prep: (preposition), which is both over- and underpredicted by the system. Prepositions are a closed class in Norwegian, as in many other languages, and so it is surprising that the system goes wrong in so many cases here.
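The 17.1% figure follows directly from the counts above:

```python
# Tokens and errors on the intersecting subset vs. all matching tokenisations.
tokens_intersect, errors_intersect = 18612, 565
tokens_all, errors_all = 26648, 1940

extra_tokens = tokens_all - tokens_intersect   # 8036 additional tokens
extra_errors = errors_all - errors_intersect   # 1375 additional errors
error_rate = extra_errors / extra_tokens
print(f"{error_rate:.1%}")                     # 17.1%
```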
We used an encoder-decoder model to generate the tags for a given sentence. This differs from the majority of work on tagging with deep learning, where the task is formalized as sequence classification. We chose this architecture because the gold data contains 82 tags, which would require training many sequence classifiers, or a single classifier with a very large number of classes (tag combinations)${}^{4}$. Since there are many layers between the input and output of our model (12 BERT layers and 6 decoder layers), the model sometimes misses the syntactic alignment between the input and the output. This is, we believe, the main reason for the mismatches.

For future work, we will focus on solving the issues with mismatching and incorrect tagging. We plan to use accuracy as the evaluation metric for selecting the best-performing model on the dev set. In addition, we plan to use various constraining configurations of beam search when generating tags. In our experiments, we observed that beam search considerably slowed down the evaluation on the dev set, resulting in an overall performance drop in the training process. We therefore plan to experiment with beam-search-based evaluation applied at selected epoch intervals rather than at every interval. Finally, we plan to pick the best tag set from the beam-search output by introducing manual rules to avoid mismatches.
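One way to realise this kind of constrained selection is to keep the highest-scoring beam hypothesis whose tag sequence is aligned with the input length. The sketch below only illustrates the idea under that assumption; it is not the actual implementation, and the hypothesis format is hypothetical.

```python
def pick_tag_sequence(beam_hypotheses, n_tokens):
    """beam_hypotheses: (score, [tag, ...]) pairs from beam search,
    sorted best first. Reject hypotheses whose tag count does not
    match the number of input tokens (a 'mismatch')."""
    for score, tags in beam_hypotheses:
        if len(tags) == n_tokens:
            return tags
    # No aligned hypothesis at all: fall back to the best one anyway.
    return beam_hypotheses[0][1]

hyps = [(-0.1, ["det", "subst"]),           # mismatched length, skipped
        (-0.4, ["det", "adj", "subst"])]    # aligned with 3 input tokens
print(pick_tag_sequence(hyps, 3))           # ['det', 'adj', 'subst']
```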
## 8 Conclusion

We have presented a hybrid system for tagging Norwegian texts, based on intersecting the output of a rule-based Constraint Grammar system and a neural sequence-to-sequence model built on a large, pre-trained language model. Our results so far indicate that there are both great opportunities and considerable challenges in making such a system work.

On the plus side, we observe that when the tokenisations of the two systems match and the intersection of the possible analyses is non-empty, performance is extremely good at 99.0%. On the downside, it is challenging to make the two systems work together: in about 10% of cases the tokenisation does not match, and in around 20% of cases the intersection of analyses is empty. We have seen that in some cases it is tempting to let the neural system overrule the rules, but overall its performance in these cases is not good. Hence our overall priority in future work will be to improve the neural system.
## References
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. http://arxiv.org/abs/1810.04805.
Mozhdeh Gheini, Xiang Ren, and Jonathan May. 2021. Cross-attention is all you need: Adapting pretrained Transformers for machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1754-1765, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
---
${}^{4}$ See the tag combinations: http://tekstlab.uio.no/obt-ny/english/morphosyn.html
---
Kristin Hagen and Janne Bondi Johannessen. 2003. Parsing Nordic languages (PaNoLa) - Norwegian version. In Henrik Holmboe, editor, Nordisk Sprogteknologi 2002, pages 89-96. Museum Tusculanum, Copenhagen.
Janne Bondi Johannessen, Kristin Hagen, André Lynum, and Anders Nøklestad. 2012. OBT+Stat. In Gisle Andersen, editor, Exploring Newspaper Language: Using the web to create and investigate a large corpus of modern Norwegian, pages 51-66. John Benjamins, Amsterdam.
Fred Karlsson, Atro Voutilainen, Juha Heikkilä, and Arto Anttila, editors. 1995. Constraint Grammar: A Language-Independent Framework for Parsing Unrestricted Text. Mouton de Gruyter, Berlin.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
Per E Kummervold, Javier De la Rosa, Freddy Wetjen, and Svein Arne Brygfjeld. 2021. Operationalizing a national digital library: The case for a Norwegian transformer model. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 20-29, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
Helge Lødrup. 2013. Hvor mange genus er det i Oslo-dialekten? Maal og Minne, 103(2).
NbAiLab. 2021. Norwegian Transformer Model. https://github.com/NbAiLab/notram/tree/0c90d6b28008df514c4ac847e4c9d68f4709a181, Accessed: 12.12.2022.
Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - grammatical error correction: Tag, not rewrite. In Proceedings of the 15th Workshop on Innovative Use of NLP for Building Educational Applications, pages 163-170. Association for Computational Linguistics.
Lilja Øvrelid and Petter Hohle. 2016. Universal Dependencies for Norwegian. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1579-1585.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Per Erik Solberg, Arne Skjærholt, Lilja Øvrelid, Kristin Hagen, and Janne Bondi Johannessen. 2014. The Norwegian dependency treebank. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 789-795, Reykjavik, Iceland. European Language Resources Association (ELRA).
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Hengshuai Yao, Dong-lai Zhu, Bei Jiang, and Peng Yu. 2020. Negative log likelihood ratio loss for deep neural network classification. In Proceedings of the Future Technologies Conference (FTC) 2019, pages 276-282, Cham. Springer International Publishing.
Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.
```
<word>Tv-programmet</word>
"<tv-programmet>"
    "tv-program" subst appell nøyt be ent samset-leks <*program> <+programmet>
<word>«</word>
"<«>"
    "$«" <anf>
<word>Ut</word>
"<ut>"
    "ut" prep
    "ut" adv
<word>i</word>
"<i>"
    "i" prep
    "i" subst appell mask ub ent
<word>naturen</word>
"<naturen>"
    "natur" subst appell mask be ent
<word>»</word>
"<»>"
    "$»" <anf>
<word>begynner</word>
"<begynner>"
    "begynne" verb pres
    "begynner" subst appell mask ub ent
<word>kl.</word>
"<kl.>"
    "kl." subst appell fork
<word>21.15</word>
"<21.15>"
    "21.15" subst <klokke>
    "21.15" det kvant
<word>.</word>
"<.>"
    "$." clb <<< <punkt> <<<
```

Figure A: Tokenized and multitagged sentence
## Appendix: sample multitagger and CG output
```
<word>Rosa</word>
"<rosa>"
    "rosa" adj fl pos
    "rosa" adj nøyt ub ent pos
    "rosa" adj ub m/f ent pos
    "rosa" subst appell ubøy
    "rose" subst appell fem be ent
;   "rosa" adj be ent pos REMOVE:2311
<word>cupcakes</word>
"<cupcakes>"
    "cupcakes" ukjent
<word>hører</word>
"<hører>"
    "høre" verb pres
<word>kanskje</word>
"<kanskje>"
    "kanskje" adv
<word>med</word>
"<med>"
    "med" prep
<word>når</word>
"<når>"
    "når" sbu SELECT:2579
;   "nå" verb pres SELECT:2579
;   "når" adv REMOVE:3383
<word>man</word>
"<man>"
    "man" pron ent pers hum nom SELECT:3451
;   "man" subst appell fem ub ent SELECT:3451
;   "man" subst appell mask ub ent SELECT:3451
;   "mane" verb imp SELECT:3451
<word>skal</word>
"<skal>"
    "skulle" verb pres <aux1/perf_part> <aux1/infinitiv>
<word>ha</word>
"<ha>"
    "ha" verb inf <aux1/perf_part> SELECT:2430
;   "ha" interj SELECT:2430
;   "ha" subst symb REMOVE:3574
;   "ha" verb imp <aux1/perf_part> SELECT:2430
<word>bloggtreff</word>
"<bloggtreff>"
    "bloggtreff" subst appell nøyt ub ent samset-analyse <+treff>
    "bloggtreff" subst appell nøyt ub fl samset-analyse <+treff>
<word>?</word>
"<?>"
    "$?" clb <<< <spm> <<<
```

Figure B: Tokenized, multitagged and disambiguated sentence
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,656 @@
§ INTEGRATING RULES AND NEURAL NETS FOR MORPHOLOGICAL TAGGING OF NORWEGIAN: RESULTS AND CHALLENGES
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
In this paper, we report on efforts to improve the Oslo-Bergen Tagger for Norwegian morphological tagging by using a hybrid system that combines the output of the rule-based Constraint Grammar tagger with a neural sequence-to-sequence model trained for tagging. The results are very promising for cases where the two systems intersect in tokenisation and morphological analysis, but problems remain in integrating the two systems in many cases.
§ 1 INTRODUCTION
The Oslo-Bergen Tagger (OBT, Hagen and Johannessen 2003; Johannessen et al. 2012) is a widely used tool for morphological tagging of Norwegian text. It has existed in various incarnations for around 25 years, first as a purely rule-based system and later coupled with a statistical module for disambiguation. In this paper, we report on our recent efforts to bring the system into the age of neural networks and show that, even today, the rules boost accuracy considerably over a purely neural system, although there are challenges in combining rules and neural nets due to divergent tokenisations.
The structure of the paper is as follows: Section 2 gives some historical background on OBT, and Section 3 describes the current status of its rule-based component. Section 4 describes the training and evaluation data that we have used in developing the new system. Section 5 then provides the details of how our neural system was trained, while Section 6 describes how it was combined with the rule system. Section 7 evaluates the performance of the neural system alone as well as the combined system. Section 8 concludes.
§ 2 HISTORY OF THE OSLO-BERGEN TAGGER
The Oslo-Bergen Tagger was originally developed between 1996 and 1998 by the Tagger Project at the University of Oslo. Rules for morphological and syntactic disambiguation were written in the first version of the Constraint Grammar framework (Karlsson et al., 1995), retrospectively called CG1. The rules were parsed by the only existing CG rule interpreter at the time, developed by Lingsoft AB. The input to CG disambiguation rules is multitagged text, i.e., text where each token has been annotated with all possible lexical analyses. Hence, the project also developed a lexicon with lemmas and inflected forms (later known as Norsk ordbank) and a combined tokenizer/multitagger.

The tagger was developed for both Bokmål and Nynorsk, the two written varieties of Norwegian. In this article, we focus only on the Bokmål version of the tagger, and only on the tokenizer and the morphological disambiguation.

The first version of the tagger was tested on an unseen evaluation corpus with a wide variety of text genres and achieved an F1-score of 97.2 (Hagen and Johannessen, 2003, 90). The numbers behind the F1-score - a precision of 95.4 and a recall of 99.0 - reveal that the tagger leaves some ambiguity but makes relatively few errors. At the time, this was considered acceptable, as the tagger was mostly used to annotate written corpora for linguistic research, where high recall was considered more important than high precision.

In 2000 the rule interpreter was replaced by a reimplementation in Allegro Common Lisp made by Paul Meurer in cooperation with the Text Laboratory at the University of Oslo. At the time, Meurer was employed at Aksis in Bergen, and hence the tagger was named the Oslo-Bergen Tagger (OBT).

Some years later the need for a new upgrade became urgent. Firstly, OBT was quite slow. This was not a big problem in 2000, but our corpora were soon getting bigger, and speed became important. The project Norwegian Newspaper Corpus (2007-2009) gave the Text Laboratory the opportunity to translate the CG1 rules to the new, more efficient and expressive CG3 format and to use a faster rule interpreter made by the VISL project at the University of Southern Denmark. Secondly, the ambiguities that were left in the output from OBT made the tagger unsuitable for many language technology purposes and applications that require the text to be completely disambiguated. We therefore extended OBT with a statistical module, implemented as a Hidden Markov Model, that disambiguated the remaining morphological ambiguities and also provided the system with a new feature: disambiguation of lemmas. The new OBT+Stat system achieved an accuracy of around 96 percent (Johannessen et al., 2012).
In the version of the tagger presented here, we have replaced the original HMM module with one based on neural networks. We do this for two reasons. Firstly, the new module employs technology that has proven to yield superior results in a variety of NLP tasks. Secondly, the original module did not take into consideration the ambiguity left by the CG rules, meaning that the HMM might select a tag that was previously removed by the disambiguation rules, or not even present in the tagger lexicon. The new machine learning module ranks possible readings by probability, allowing us to find the most probable reading (if any) in the intersection between its output and the remaining CG readings. This way, the work already done by the CG disambiguation rules is not discarded when the intersection is non-empty, but it leaves a question as to what to do when the intersection is empty.
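The intersection step described above amounts to the following sketch; `cg_readings` and `ml_probs` are hypothetical names for the CG output and the neural module's probability ranking, not the system's actual interface.

```python
def most_probable_in_intersection(cg_readings, ml_probs):
    """cg_readings: readings left by the CG disambiguation rules.
    ml_probs: {reading: probability} from the neural module.
    Returns the most probable reading that the rules still allow,
    or None when the intersection is empty (the open question)."""
    candidates = set(cg_readings) & set(ml_probs)
    if not candidates:
        return None  # empty intersection: rules or neural net?
    return max(candidates, key=ml_probs.get)

# The rules keep two readings; the neural module prefers one of them.
print(most_probable_in_intersection(
    {"subst_mask", "subst_fem"},
    {"subst_fem": 0.6, "verb_pres": 0.3, "subst_mask": 0.1}))  # subst_fem
```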
§ 3 THE RULE-BASED TOKENIZER AND TAGGER
In this section, we first present some of the main tasks of the tokenizer and multitagger before giving a short description of the constraint grammar module. The tokenizer uses a lexicon with all possible lexical readings, where a reading is a combination of a lemma and a morphosyntactic tag chosen from a set of 149 possible analyses.${}^{1}$ The lexicon was originally based on Norsk ordbank 2005,${}^{2}$ but has since been updated with words more recently introduced into the language (such as tvitre 'tweet'). The newest version of the tokenizer is written in Python and in most cases mirrors the original tokenizer written in Perl. There is one major exception: the original system from the late '90s worked according to the strategy "Disambiguate as soon as possible" (Karlsson et al., 1995). This resulted in fixed expressions like blant annet ('among other things' - adverb) and etter hvert ('little by little' - preposition) being allowed - and disambiguated - in the lexicon. In the recent version of the tokenizer, such expressions are removed from the lexicon, and the possible ambiguity is dealt with in the CG module. The main principle for the tokenizer is therefore to split tokens on blank space or a sentence delimiter like a full stop or a question mark. For each token identified, the original word form is rendered inside a <word>-tag and looked up in the lexicon. Non-sentence-initial capitalized words are identified as proper nouns. Words that exist in the lexicon are assigned all readings found there. If the word is not found in the lexicon and not identified as a proper noun, it is sent to a compound analyzer. Most unknown words will get an analysis here, as many of them are productively created compounds. Some words will still get the tag ukjent ('unknown') from the tokenizer. These words are often dialect words not standardized in the lexicon, or foreign words. Figure A in the Appendix shows how the tokenizer and multitagger deal with the sentence TV-programmet "Ut i naturen" begynner kl. 21.15. ('The TV program "Ut i naturen" starts at 21.15.'), which has quotation marks, abbreviations, and a time expression.
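The lookup cascade for a single token can be summarised as a sketch; the lexicon dictionary, the compound-analyzer callable, and the flat reading strings are stand-ins for the real resources, not the actual implementation.

```python
def analyse_token(word, lexicon, analyse_compound, sentence_initial=False):
    """Return the set of readings for one token, mirroring the cascade:
    lexicon lookup -> proper-noun guess -> compound analysis -> 'ukjent'."""
    readings = set()
    if word.lower() in lexicon:
        readings |= lexicon[word.lower()]      # all readings found in the lexicon
    if word[0].isupper() and not sentence_initial:
        readings.add("subst prop")             # capitalized mid-sentence: proper noun
    if not readings:
        readings |= analyse_compound(word)     # productively created compounds
    if not readings:
        readings = {"ukjent"}                  # unknown: dialect or foreign word
    return readings

lex = {"naturen": {"natur subst appell mask be ent"}}
no_compound = lambda w: set()
print(analyse_token("naturen", lex, no_compound))  # the lexicon reading
print(analyse_token("xyzzy", {}, no_compound))     # {'ukjent'}
```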
The tokenizer also identifies sentences using sentence delimiters. A list of known abbreviations and linguistic rules, like the rule "the word including the full stop character is an abbreviation if the word is in the abbreviation list or if the following word is not capitalized", identify abbreviations like kl. (the abbreviation for "o'clock" used to specify time in Norwegian) in Figure A. Headlines are also identified by rules and get their own tag.
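The quoted abbreviation rule can be sketched as follows; the abbreviation list and function name are illustrative, not the actual OBT implementation:

```python
# Hypothetical sketch of the abbreviation rule quoted above.
KNOWN_ABBREVIATIONS = {"kl.", "bl.a.", "f.eks."}

def is_abbreviation(word, next_word):
    """A dot-final word is an abbreviation if it is in the known list, or
    if the following word is not capitalized (so the dot is probably not
    a sentence delimiter)."""
    if not word.endswith("."):
        return False
    if word in KNOWN_ABBREVIATIONS:
        return True
    return next_word is not None and not next_word[:1].isupper()
```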
The constraint grammar module takes tokenized and multitagged text as input, and its main task is to reduce the number of readings to ideally one per word. The number of readings left by the multitagger varies a lot. In the test corpus used in this article (which will be further described in Section 4), there are on average 2.04 readings per word. After the CG rules are applied, there are on average 1.09 readings left per word.
¹ The complete list is available at http://tekstlab.uio.no/obt-ny/morfosyn.html
² https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-5/
Figure B in the Appendix shows the output from the CG module in debug mode for the sentence Rosa cupcakes hører kanskje med når man skal ha bloggtreff? ('Pink cupcakes might be part of a blog meeting?'). Removed readings start with ";", and the ID numbers of the rules applied are appended to each reading. Note that the English loan word cupcakes is not identified in the lexicon or in the compound analyzer, and has therefore got the tag ukjent ('unknown'). The compound bloggtreff 'blog meeting' was not in the lexicon, but has got two readings from the compound analyzer. As the examples show, there are both REMOVE rules (remove a reading) and SELECT rules (select a reading). A rule can be very simple, like rule 2430 in Figure 1, which says: "select the verb infinitive reading if the verb to the left is a modal auxiliary and not in the set of dangerous infinitives (= not likely infinitives)".
```
#:2430
SELECT:2430 (verb inf) IF
    (NOT 0 farlige-inf)
    (-1 m-hj-verb)
;
```

Figure 1: Simple SELECT rule
Figure 2 shows an example of a more complex rule with linked context conditions somewhere to the right in the sentence. The rule says: "choose the subjunction reading if somewhere to the right there is a safe noun or pronoun (stop looking if a word on the way has a reading that is not an adverb, adjective or determinative), and if there is a word in the present or past tense after the noun/pronoun (intervening adverbs are fine)."
```
#:2579
SELECT:2579 (sbu) IF
    (...)
    (**1C subst/pron BARRIER ikke-adv-adj-det)
    (**1C subst/pron LINK *1 ikke-adv LINK 0 pres/pret)
;
```

Figure 2: More complex SELECT rule
The CG grammar for Bokmål has more than 2300 rules; 1995 of them are SELECT rules. Some rules apply to all possible words, while others target specific word forms. When the original CG grammar was developed, a training corpus of 100,000 words from novels, newspapers and magazines was used. For each new rule added to the grammar, we checked how the rule worked by looking at recall and precision. Most rules remove or choose readings without making too many errors. But in the last period of the project, we made around 250 heuristic rules to speed up the disambiguation. These rules were riskier, but in our small training corpus they worked well. Later in this article, we will see whether the combination of the CG rules and the neural net is affected if the heuristic rules are removed from the grammar.
§ 4 TRAINING AND EVALUATION DATA
The training and evaluation corpus that was used in earlier stages of development of the OBT system is no longer suitable, because the tagset and the tokenisation principles have evolved. Instead of bringing this corpus up to date, we chose to use the Norwegian Dependency Treebank (NDT, Solberg et al. 2014) in the development of the new version of OBT. The Bokmål part of NDT is around 300,000 tokens and consists of blog text, news text, parliament proceedings and government white papers.
The NDT CoNLL data were converted to the format of the OBT. We also extracted the pure text and ran OBT on it without statistical disambiguation, to compare the outputs. If the NDT analysis was not among the analyses produced by OBT, we either corrected the NDT annotation if that was the source of the error, or changed the rules of the OBT system if that could easily be done. This process was iterated a few times. Notice that during this period, the whole data set was used for development, as is common with rule-based systems. The goal was to improve both the accuracy of the rule-based disambiguation and the quality of the training data for the neural component.
The performance of the rule-based system by the end of this phase is shown in Table 1. When heuristic rules are used, we see that in 7.5% of cases, OBT produces an ambiguous analysis containing the correct tag as one possibility, whereas 1.8% of tokens are only given (one or more) wrong analyses. Disabling the heuristic rules reduces the number of wrong tags by 0.2%, but at the cost of a 3.3% increase in tokens that get an ambiguous analysis containing the correct tag.
The role of the statistical system is to pick the correct analysis in the ambiguous cases. On its own, the neural net might be able to predict the right analysis even in cases where the rules are wrong. However, such an analysis will be discarded when we intersect its output with the rules.
| outcome | with heuristic rules | without heuristic rules |
|---|---|---|
| unambiguous correct | 280650 (90.7%) | 270830 (87.6%) |
| ambiguous incl. correct | 23219 (7.5%) | 33597 (10.8%) |
| wrong | 5413 (1.8%) | 4855 (1.6%) |

Table 1: Performance of the rule-based system
For the training of the neural system, we then split the corpus into train-dev-test sets. While doing this, we made sure the output tags in the training set covered all output tags in the dev and test sets, to ensure that the model was trained with samples from all tags. We do this by first initializing the Python random seed to 0, then splitting the data and checking whether the training set covers all tags. If it does not, we increase the random seed by one and repeat until we find a training set that covers all the tags in the other sets. In this way, we randomly split the dataset into 80-10-10 percent partitions to obtain train, dev and test sets respectively.
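The seed-increment procedure can be sketched as follows; this is a minimal stand-in, as the actual script and data format are not shown here:

```python
import random

# Sketch of the splitting procedure: re-shuffle with an incremented seed
# until the training partition covers every output tag that occurs in the
# dev and test partitions.
def split_with_tag_coverage(samples):
    """samples: list of (sentence, tag_set) pairs."""
    seed = 0
    while True:
        rng = random.Random(seed)
        shuffled = samples[:]
        rng.shuffle(shuffled)
        n = len(shuffled)
        train = shuffled[:int(n * 0.8)]
        dev = shuffled[int(n * 0.8):int(n * 0.9)]
        test = shuffled[int(n * 0.9):]
        train_tags = set().union(*(tags for _, tags in train))
        needed = set().union(*(tags for _, tags in dev + test))
        if needed <= train_tags:
            return train, dev, test, seed
        seed += 1
```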
Finally, the data was reformatted for the neural network. Figure 3 shows an example of input and output for a sentence. The input is the tokenized form of the sentence. The output is the sequence of serialized tags for each token in the input. The token `<next_token>` indicates that all tags of the corresponding input token have finished and that the tags of the next input token start afterward.
```
INPUT:  Men det er bare noe jeg tror .
OUTPUT: :konj: clb <next_token>
        :pron: 3 ent noyt pers <next_token>
        :verb: pres <next_token>
        :adv: <next_token>
        :pron: 3 ent noyt pers <next_token>
        :pron: 1 ent hum nom pers <next_token>
        :verb: pres <next_token>
        $punc$ :<punkt>: clb
```

Figure 3: An example input and output for a sentence.
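The serialization scheme can be sketched as follows (the function name is illustrative):

```python
# Sketch of the output serialization: the tags of each token are joined
# with spaces, and tokens are separated by the <next_token> marker.
def serialize_tags(tag_lists):
    """tag_lists: one list of tag strings per input token."""
    return " <next_token> ".join(" ".join(tags) for tags in tag_lists)
```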
§ 5 THE NEURAL SYSTEM
Recently, a BERT (Devlin et al., 2018) pre-trained encoder (nb-bert-base) was published by the Norwegian National Digital Library (Kummervold et al., 2021). This pre-trained encoder for Norwegian provides a rich feature set that was previously lacking for the language. Furthermore, since the tagged corpus is very small in comparison to the corpus the pre-trained model was trained on, it is important to use the pre-trained model in order to be able to generalize to unseen data. Therefore, we follow an approach similar to that of Omelianchuk et al. (2020) and use a sequence-to-sequence (seq2seq) setting to tag the sentences using the pre-trained model.
Sequence-to-sequence models have two main components: an encoder and a decoder. The encoder side is set to nb-bert-base (NbAiLab, 2021). For the decoder, we randomly initialize 6 layers of size 768 with 12 attention heads. The decoder also has cross-attention layers, as this was shown to be effective in seq2seq training (Gheini et al., 2021). We freeze the encoder weights throughout the training, since using the encoder as a feature extraction mechanism in this way has been shown to be beneficial (Zoph et al., 2016) and is common practice (Gheini et al., 2021). We use the EncoderDecoderModel provided by the HuggingFace transformers library (Wolf et al., 2020) to configure and train a model.
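A configuration along these lines could look as follows. This is a sketch, not the authors' actual training script: the checkpoint name `NbAiLab/nb-bert-base` and the exact keyword arguments are assumptions based on the text.

```python
# Configuration sketch (assumptions noted above): frozen nb-bert-base
# encoder plus a randomly initialized 6-layer decoder with cross-attention.
from transformers import BertModel, BertLMHeadModel, BertConfig, EncoderDecoderModel

encoder = BertModel.from_pretrained("NbAiLab/nb-bert-base")

decoder_config = BertConfig(
    vocab_size=87,             # assumed: 82 tags + 5 special tokens
    num_hidden_layers=6,       # randomly initialized decoder layers
    hidden_size=768,
    num_attention_heads=12,
    is_decoder=True,
    add_cross_attention=True,  # attend to the encoder states
)
decoder = BertLMHeadModel(decoder_config)

model = EncoderDecoderModel(encoder=encoder, decoder=decoder)

# Freeze the encoder: use it purely as a feature extractor.
for param in model.encoder.parameters():
    param.requires_grad = False
```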
The encoder-decoder model gets its input as the identifiers of the tokens (token numbers) in the input vocabulary and outputs token numbers in the output vocabulary. Thus, the input and output are tokenized using these vocabularies. Since the encoder model had already been trained (nb-bert-base) using the widely utilized sub-word tokenizer WordPiece (Wu et al., 2016), we use that tokenizer as provided by the HuggingFace Tokenizers library. For the decoder side, since our vocabulary is very small and fixed (82 tags and 5 extra special tokens such as [CLS] and [SEP]), we do not need to train a special tokenizer. We define the vocabulary manually with these output tokens for use by the WordPiece tokenizer.
The training configuration is as follows: we use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.0001. We set the batch size to 16 sentences, as this is the amount the graphics cards could handle. We use the negative log-likelihood loss (Yao et al., 2020) to compute the loss in each batch between the model output and the expected output. For any other parameter not mentioned in this section, we use the default value defined by version 4.17.0 of the Transformers library in the objects of the following types: BertConfig, EncoderDecoderModel, EncoderDecoderConfig, and BertModel.
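The per-batch criterion corresponds to the negative log-likelihood of the expected output under the model's per-token distributions. A toy stdlib illustration (not the actual framework computation):

```python
import math

# Toy sketch: NLL of a target sequence given per-position probability
# distributions over output token ids (numbers here are illustrative).
def sequence_nll(target_ids, probs):
    """probs[i] maps token id -> probability at position i."""
    return -sum(math.log(probs[i][t]) for i, t in enumerate(target_ids))
```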
We evaluate the model on the dev set during training, using the BLEU score (Papineni et al., 2002), which is widely utilized to evaluate seq2seq models. We compute the BLEU score between the expected output and the model output for each sentence and take the average of these scores over the whole dev set. We run the training for 300 epochs and keep the model that yields the maximum average BLEU score on the dev set.
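The selection criterion can be illustrated with a from-scratch sentence-level BLEU. This is an unsmoothed sketch; the implementation actually used is not specified here:

```python
import math
from collections import Counter

# Unsmoothed sentence-level BLEU (Papineni et al., 2002) sketch: geometric
# mean of modified n-gram precisions times a brevity penalty.
def sentence_bleu(reference, hypothesis, max_n=4):
    if not hypothesis:
        return 0.0
    log_precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        hyp_ngrams = Counter(tuple(hypothesis[i:i + n])
                             for i in range(len(hypothesis) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        if overlap == 0:
            return 0.0   # unsmoothed: any zero n-gram precision gives 0
        log_precisions.append(math.log(overlap / sum(hyp_ngrams.values())))
    brevity = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return brevity * math.exp(sum(log_precisions) / max_n)

def average_bleu(pairs):
    """pairs: (reference tokens, hypothesis tokens) per dev sentence."""
    return sum(sentence_bleu(r, h) for r, h in pairs) / len(pairs)
```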
§ 6 COMBINING NEURAL NETS AND RULES
As mentioned in Section 2, the current system prefers tags that are found in the intersection between the output of the CG rules and that of the neural network. Ideally, we would be able to find such intersections for each individual token separately. However, since the probability of a reading for a particular token depends on the selected readings for all other tokens in the sentence, the only viable option is to consider readings for entire sentences. Thus, for each input sentence, we find the list of possible readings produced by the network and calculate its probability. Then, for each reading in this list, ordered by decreasing probability, we go through each token and check whether the tag assigned by the network is also found among those left by the CG disambiguation rules. If it is not found, we skip to the next reading in the list. If it is found, we go on to check the next token, and so on until we reach the end of the sentence, at which point the reading is picked as the selected one for the sentence. For the present test set, we find intersecting tags for all tokens for 1412 of the 2003 sentences (70.5%). The cases with missing intersections may be due to differences in either tokenisation (205 cases) or tag assignments (386 cases) between the two systems. When the tokenisations are different, it is not clear what to do. But if the tokens are the same and only the tag assignments differ, we can default to the most probable reading in the neural net output. We explore this option in Section 7.2.
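The search over the network's ranked readings can be sketched as follows (the names are illustrative):

```python
# Sketch of the intersection search: walk the network's readings in order
# of decreasing probability and pick the first one whose tag for every
# token is still among the tags left by the CG rules.
def pick_intersecting_reading(ranked_readings, cg_tags):
    """ranked_readings: list of tag sequences, most probable first.
    cg_tags: per-token sets of tags left by the CG disambiguation."""
    for reading in ranked_readings:
        if len(reading) == len(cg_tags) and all(
                tag in allowed for tag, allowed in zip(reading, cg_tags)):
            return reading
    return None  # empty intersection: handled separately
```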
Figure 4 shows a case where the tokenisation of the neural system does not match the gold data in the test set. The neural system has split the initial, unknown proper name at a hyphen, whereas the CG tagger keeps it as one token. Since tokenisation is part of a preprocessing step, and misalignments in tokenisation are a problem to be solved separately from tag assignment, in this paper we focus primarily on cases where the two systems do produce matching tokenisation; improving the tokenisation match will be part of future work.
```
Neural net: Garosu - gil, som betyr [...]
CG:         Garosu-gil, som betyr [...]
```
Figure 4: Mismatching tokenisation
<img src="https://cdn.noedgeai.com/01964130-10f1-7393-8885-4908e736cdb0_4.jpg?x=837&y=829&w=633&h=835&r=0"/>
Figure 5: Non-intersecting tags
Figure 5 shows the problem of mismatching tags. For the first word, the CG tagger has left five possible analyses, and the neural net has correctly disambiguated to the plural adjective reading. However, OBT did not recognize the second word, cupcakes, and has therefore left an ukjent ('unknown') tag, while the neural system has no analysis with that tag. Instead, the most probable analysis of the sentence according to the neural net has cupcakes correctly as an indefinite plural noun. However, since tag probabilities are conditional on all other tags in the sentence, these two analyses are incomparable: it is not safe to disambiguate the CG analysis of rosa based on this analysis from the neural net, especially not when the mismatching tag is on the neighbouring word cupcakes.
| system | accuracy |
|---|---|
| pure ML | 96.9% |
| OBT + ML | 99.0% |
| OBT w/o heur. + ML | 99.0% |

Table 2: Accuracy of different systems, sentences with intersecting tags
In this particular case, the neural net is correct in its analysis of cupcakes. In general, it might be safe to assume that the neural system is correct in cases where the CG tagger assigns ukjent, and this is an option we will pursue in future research. However, as we will see in Section 7, the neural system is often incorrect in cases where the tags do not intersect. Solving this problem may require more training data or fine-tuning the parameters of the tag generation process of the decoder of the seq2seq model.
§ 7 EVALUATION AND ERROR ANALYSIS
§ 7.1 SENTENCES WITH INTERSECTING TAGS
We first focus on the restricted cases where the ML system and the CG grammars not only have matching tokenisations but also intersecting tags. We evaluate three different setups: (1) the trained neural net used as a stand-alone morphological tagger; (2) the rule-based system intersected with the neural net as described in Section 6; (3) as the previous, but without the heuristic rules.
The performance of the three systems is shown in Table 2. Because we evaluate on intersecting tags only, the numbers do not show the actual performance of the system on running text. They do, however, clearly show that in the 70.5% of cases where the tags intersect, the rules strongly improve the performance of the systems: two-thirds of the tokens that are mistagged by the neural net now get a correct analysis. We also see that it makes no difference whether we run the system with or without the heuristic rules: the reduction of wrong tags that we saw in Table 1 is balanced out by the increase in ambiguity. On the sentences where this setup works, the performance is extremely good, at an accuracy of 99.0%. By contrast, the widely used Spacy tagger reports an accuracy of 95.0% for morphological tagging of Norwegian UD.³

Since removing the heuristic rules gave no increase in performance, we focus on the setup with the full rule set in the following. This system mistags 184 tokens (out of 18612 in total in the matching sentences of the test set), whereas the pure ML system mistags 565 tokens. However, the error profile of the two systems is quite different, suggesting possibilities for further improvement.

Tables 3 and 4 show the twelve most common error types of the systems. We see that a relatively common error in the OBT + ML system involves perfect participles, which often coexist with homonymous adjectives in Norwegian (as in other Germanic languages, cf. English 'bored'), often with very slight or no semantic difference. OBT + ML overapplies the adjective analysis (in three different varieties) compared to the gold data, for a total of 14 + 10 + 6 = 30 errors. By contrast, the ML system on its own makes only 8 + 8 = 16 errors of this kind, suggesting that the rules disambiguate wrongly. Performance might therefore increase if we leave this decision to the neural net, though it is worth mentioning that this system makes 6 errors in the opposite direction (which only happens twice when the rules are used and therefore does not show up in the table). Apart from errors with participles, all other frequent errors involve gender assignment or number assignment on indefinite neuter nouns. The latter distinction is hard to make because these indefinite neuters make no morphological distinction between singular and plural, and the context is not always clear. As for the gender errors, at least some of these are errors in the gold tags that were not caught in our manual correction. The feminine/masculine distinction has disappeared in the Oslo dialect of Norwegian (Lødrup, 2013), and it may have been hard for the annotators to choose the correct tag. Another debatable case is gender assignment on proper nouns, which is often missing from the ML system output, but is also not systematic in the gold data. Here it may be better to just standardise on not assigning gender to proper nouns.
³ See https://spacy.io/models/nb. As the Norwegian UD corpus (Øvrelid and Hohle, 2016) is an automatic conversion of the NDT corpus, the complexity of the tasks should be comparable, although the test split is not identical.
| Gold tag | Predicted tag | Freq |
|---|---|---|
| `[':verb:', 'perf-part']` | `[':adj:', '<perf-part>', 'ent', 'm/f', 'ub']` | 14 |
| `[':subst:', 'appell', 'ent', 'mask', 'ub']` | `[':subst:', 'appell', 'ent', 'fem', 'ub']` | 13 |
| `[':verb:', 'perf-part']` | `[':adj:', '<perf-part>', 'ent', 'nøyt', 'ub']` | 10 |
| `[':adj:', 'ent', 'nøyt', 'pos', 'ub']` | `[':adj:', 'ent', 'm/f', 'pos', 'ub']` | 10 |
| `[':subst:', 'appell', 'fl', 'mask', 'ub']` | `[':subst:', 'appell', 'fem', 'fl', 'ub']` | 9 |
| `[':subst:', 'appell', 'ent', 'nøyt', 'ub']` | `[':subst:', 'appell', 'fl', 'nøyt', 'ub']` | 8 |
| `[':verb:', 'perf-part']` | `[':adj:', 'ent', 'm/f', 'pos', 'ub']` | 6 |
| `[':subst:', 'appell', 'be', 'fl', 'mask']` | `[':subst:', 'appell', 'be', 'fem', 'fl']` | 5 |
| `[':subst:', 'appell', 'be', 'ent', 'mask']` | `[':subst:', 'prop']` | 5 |
| `[':pron:', '3', 'fl', 'pers']` | `[':det:', 'fl', 'kvant']` | 4 |
| `[':subst:', 'appell', 'ent', 'mask', 'ub']` | `[':subst:', 'appell', 'ent', 'nøyt', 'ub']` | 4 |
| `[':subst:', 'appell', 'fl', 'nøyt', 'ub']` | `[':subst:', 'appell', 'ent', 'nøyt', 'ub']` | 4 |

Table 3: Most frequent errors, OBT + ML
| Gold tag | Predicted tag | Freq |
|---|---|---|
| `[':subst:', 'appell', 'ent', 'mask', 'ub']` | `[':subst:', 'appell', 'ent', 'fem', 'ub']` | 13 |
| `[':adj:', 'ent', 'nøyt', 'pos', 'ub']` | `[':adj:', 'ent', 'm/f', 'pos', 'ub']` | 12 |
| `[':subst:', 'appell', 'fl', 'mask', 'ub']` | `[':subst:', 'appell', 'fem', 'fl', 'ub']` | 10 |
| `[':verb:', 'perf-part']` | `[':adj:', '<perf-part>', 'ent', 'm/f', 'ub']` | 8 |
| `[':verb:', 'perf-part']` | `[':adj:', '<perf-part>', 'ent', 'nøyt', 'ub']` | 8 |
| `[':subst:', 'mask', 'prop']` | `[':subst:', 'prop']` | 8 |
| `[':subst:', 'appell', 'ent', 'nøyt', 'ub']` | `[':subst:', 'appell', 'fl', 'nøyt', 'ub']` | 8 |
| `[':subst:', 'appell', 'ent', 'mask', 'ub']` | `[':subst:', 'appell', 'ent', 'nøyt', 'ub']` | 8 |
| `[':subst:', 'appell', 'ent', 'fem', 'ub']` | `[':subst:', 'appell', 'ent', 'mask', 'ub']` | 7 |
| `[':subst:', 'appell', 'be', 'fl', 'mask']` | `[':subst:', 'appell', 'be', 'fem', 'fl']` | 6 |
| `[':subst:', 'appell', 'ent', 'mask', 'ub']` | `[':prep:']` | 6 |
| `[':adj:', '<perf-part>', 'ent', 'm/f', 'ub']` | `[':verb:', 'perf-part']` | 6 |

Table 4: Most frequent errors, ML system (intersecting tags only)
| Gold tag | Predicted tag | Freq |
|---|---|---|
| `[':adj:', 'ent', 'nøyt', 'pos', 'ub']` | `[':adj:', 'ent', 'm/f', 'pos', 'ub']` | 24 |
| `[':subst:', 'appell', 'ent', 'mask', 'ub']` | `[':prep:']` | 24 |
| `[':prep:']` | `[':subst:', 'appell', 'ent', 'mask', 'ub']` | 19 |
| `[':verb:', 'pres']` | `[':prep:']` | 18 |
| `[':subst:', 'appell', 'ent', 'mask', 'ub']` | `[':subst:', 'appell', 'ent', 'fem', 'ub']` | 18 |
| `[':prep:']` | `[':subst:', 'prop']` | 17 |
| `[':subst:', 'appell', 'ent', 'fem', 'ub']` | `[':subst:', 'appell', 'ent', 'mask', 'ub']` | 16 |
| `[':prep:']` | `['$punc$', ':<komma>:']` | 15 |
| `[':prep:']` | `[':verb:', 'pres']` | 14 |
| `[':subst:', 'appell', 'fl', 'mask', 'ub']` | `[':subst:', 'appell', 'fem', 'fl', 'ub']` | 14 |
| `[':subst:', 'mask', 'prop']` | `[':subst:', 'prop']` | 14 |
| `[':subst:', 'appell', 'ent', 'mask', 'ub']` | `[':subst:', 'appell', 'ent', 'nøyt', 'ub']` | 14 |

Table 5: Most frequent errors, ML system (all matching tokenisations)
| system | accuracy |
|---|---|
| pure ML | 92.8% |
| OBT w/o heur. + ML | 94.1% |

Table 6: Accuracy of different systems, all sentences with matching tokenisation
§ 7.2 ALL SENTENCES WITH MATCHING TOKENISATIONS
To test whether the neural system can be trusted in cases where there is no overlap in tag assignment, we also evaluate the system on all sentences where the tokenisation matches. We test two setups: one where we use the (non-heuristic) rules plus the neural system as described above, but default to the output of the neural tagger in cases where there is no overlap, and one where we only use the best ML tag. The performance of the two setups is given in Table 6.
As we can see, the results drop considerably. Overall performance is now below that of the Spacy tagger. Put another way: when we evaluate all sentences with matching tokenisations, the test set grows by 8036 tokens, from 18612 to 26648, but the number of errors increases from 565 to 1940, i.e. by 1375, indicating an error rate of 17.1% on the tokens where the intersection with the output of the CG tagger is empty. Table 5 shows the frequency of errors, which looks very different from Table 4. Most strikingly, there are now many errors involving the part-of-speech tag :prep: (preposition), which is both over- and underpredicted by the system. Prepositions are a closed class in Norwegian, as in many other languages, so it is surprising that the system goes wrong in so many cases here.
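The arithmetic behind these figures can be checked directly:

```python
# Checking the arithmetic behind the figures above.
tokens_all, tokens_intersect = 26648, 18612
errors_all, errors_intersect = 1940, 565

extra_tokens = tokens_all - tokens_intersect    # tokens added: 8036
extra_errors = errors_all - errors_intersect    # errors added: 1375
error_rate = 100 * extra_errors / extra_tokens  # ~17.1% on the new tokens
```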
We used an encoder-decoder model to generate the tags given a sentence. This is a different approach from the majority of the work on tagging using deep learning, where the task is formalized as a sequence classification task. We chose this architecture because we have 82 tags in the gold data, which would require training many sequence classifiers, or a single classifier with many classes (tag combinations)⁴ to be trained on. Since there are many layers between the input and output of our model (12 BERT and 6 decoder layers), the model sometimes misses the syntactic alignment between the input and the output. This is, we believe, the main reason for the mismatches.
For future work, we will focus on solving the issues with mismatching and incorrect tagging. We plan to use accuracy as the evaluation metric to select the best-performing model on the dev set. In addition, we plan to use various constraining configurations of beam search when generating tags. In our experiments, we observed that beam search considerably slowed down the evaluation on the dev set, resulting in an overall performance drop in the training process. Thus, we plan to experiment with the performance of beam search-based evaluation by applying it at selected epoch intervals rather than at every interval. Finally, we plan to pick the best tag set from the output of beam search by introducing manual rules to avoid mismatching.
§ 8 CONCLUSION
We have presented a hybrid system for tagging Norwegian texts, based on intersecting the output of a rule-based Constraint Grammar system and a neural sequence-to-sequence model built on a large, pre-trained language model. Our results so far indicate that there are both great opportunities and considerable challenges in making such a system work.

On the plus side, we observe that when the tokenisations of the two systems match and the intersection of the possible analyses is non-empty, performance is extremely good, at 99.0% accuracy. On the downside, it is challenging to make the two systems work together: in about 10% of cases, the tokenisation does not match, and in around 20% of cases, the intersection of analyses is empty. We have seen that in some cases it is tempting to let the neural system overrule the rules, but overall its performance in these cases is not good. Hence our overall priority in future work will be to improve the neural system.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/UcWZrerHDCe/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,527 @@
# NoCoLA: The Norwegian Corpus of Linguistic Acceptability

First Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Second Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract

While there has been a surge of large language models for Norwegian in recent years, we lack any tool to evaluate their understanding of grammaticality. We present two new Norwegian datasets for this task. ${\mathbf{{NoCoLA}}}_{\text{class}}$ is a supervised binary classification task where the goal is to discriminate between acceptable and non-acceptable sentences. On the other hand, ${\mathbf{{NoCoLA}}}_{\text{zero}}$ is a purely diagnostic task for evaluating the grammatical judgement of a language model in a completely zero-shot manner, i.e. without any further training. In this paper, we describe both datasets in detail, show how to use them for different flavors of language models, and conduct a comparative study of the existing Norwegian language models.
## 1 Introduction

Large pre-trained language models have recently led to a revolution in natural language processing (NLP) as they substantially increased the performance of most NLP tools (Peters et al., 2018; Devlin et al., 2019). Large language models were originally developed for English, but a surge of Norwegian-based models has recently followed (Kutuzov et al., 2021; Kummervold et al., 2021; Hofmann et al., 2022). The remaining issue is that the Norwegian linguistic resources do not contain a large range of tasks to evaluate and compare these models on, as opposed to the English benchmark suites like GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019) or GLGE (Liu et al., 2021), to name a few.

#Incorrect (inflection): Samfunnet ville bli mer fornøyet.
#Correct: Samfunnet ville bli mer fornøyd.
#Incorrect (word choice): Jeg er ikke nordmann, med jeg trives i Norge.
#Correct: Jeg er ikke nordmann, men jeg trives i Norge.

Listing 1: Two illustrative examples of incorrect / correct sentence pairs from ${\mathbf{{NoCoLA}}}_{\text{zero}}$. The English translations: "Society would be happier" and "I'm not Norwegian, but I enjoy living in Norway."

We present two new datasets for evaluating the understanding language models have of Norwegian grammar, jointly called the Norwegian corpus of linguistic acceptability (NoCoLA). Our work is limited to the most widely used of the written standards for Norwegian, namely Bokmål. This paper proposes two different views on the same set of sentences, each with a slightly different purpose:

- ${\mathbf{{NoCoLA}}}_{\text{class}}$ is a collection of sentences split into two classes: grammatically acceptable and non-acceptable. Thus, it is a binary classification task, where a language model is expected to first be fine-tuned on the training data split. This task is more practically oriented and evaluates the fine-tuning abilities of a language model. The downside is that we cannot tell if the performance comes from its innate abilities or if it was obtained from the supervised fine-tuning.

- ${\mathbf{{NoCoLA}}}_{\text{zero}}$ is a collection of pairs of sentences, where only one of them is grammatically acceptable. Here, we do not fine-tune on this task at all; the language model assigns a probability to each of the two sentences, and we measure how often the correct one gets a higher probability. While not as practical as the first task, the zero-shot evaluation provides a better estimate of the innate grammatical understanding.

We provide a comprehensive evaluation of the existing Norwegian language models and release the data and code for an easy evaluation of new Norwegian models.${}^{1}$

---

${}^{1}$ anonymized.for/review

---
## 2 Related work

The closest equivalent of our ${\mathbf{{NoCoLA}}}_{\text{class}}$ dataset is the English Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2019), while ${\mathbf{{NoCoLA}}}_{\text{zero}}$ roughly follows the Benchmark of Linguistic Minimal Pairs for English (BLiMP; Warstadt et al., 2020).

CoLA. This dataset consists of 10 600 acceptable and non-acceptable sentences collected manually from the linguistics literature, with the goal of covering specific linguistic phenomena - and the morphological, syntactic and semantic violations of the rules connected to those phenomena. By collecting the data in this manner, one ensures that the dataset represents language phenomena that are central to human linguistic competence according to linguists. CoLA has become a standard task for evaluating English language models after it was included in the GLUE benchmark for natural language understanding (Wang et al., 2018).

BLiMP. The BLiMP dataset consists of 67 000 minimal pairs, all of them generated artificially. Some examples of phenomena covered in the dataset are determiner-noun agreement, verb argument structure and irregular verb forms. Each pair differs only in one single parameter, namely the element that leads to the non-acceptability.

Comparison with NoCoLA. Our datasets fill the same purpose for the evaluation of language models in Norwegian as CoLA and BLiMP do for English. However, the source of the sentences is different. Our data consists of naturally produced sentences, instead of controlled and artificially generated ones. Where CoLA collects sentences that are handpicked by linguists to represent specific linguistic phenomena, our sentences contain errors that mirror the natural distribution of errors in texts by second language learners. Thus, NoCoLA gives an indication of how well a given language model distinguishes between acceptable and non-acceptable Norwegian text, but not of how well it understands the full range of possible grammatical phenomena of the language. NoCoLA is also substantially larger than CoLA, with almost 15 times more examples. The NoCoLA error types are not comparable to those of BLiMP, where the error types describe the underlying grammatical problem. Instead, the NoCoLA error types describe the changes that need to be made to correct the errors.
## 3 Datasets description

### 3.1 ASK corpus

Both ${\mathbf{{NoCoLA}}}_{\text{class}}$ and ${\mathbf{{NoCoLA}}}_{\text{zero}}$ require a source of both acceptable and non-acceptable sentences. The latter are hard to come by in most naturalistic text by adult native speakers. Our source for both NoCoLA datasets is the ASK Corpus - A Language Learner Corpus of Norwegian as a Second Language (Tenfjord et al., 2006). It consists of submissions by second language learners of Norwegian Bokmål around the year 2000, each consisting of one or more essays. The essays are written as solutions to two separate Norwegian language exams, which are estimated in Berggren (2019) to be at approximately CEFR levels B1 and B2. The texts are limited to one of the written standards for Norwegian, namely Bokmål.

There are 1935 submissions, with 46 000 original sentences in total. Each essay has been manually corrected by native speakers, hereafter called correctors. The errors in the corpus are annotated with a set of error codes, which indicate the change that needs to be made to correct the original passage. For instance, "F" indicates a wrong morpho-syntactic category, while "PUNCM" means that punctuation is missing and needs to be added. We have merged some of the error codes so that we have a medium-grained way of understanding the performance of the models on the different types of errors found in ${\mathbf{{NoCoLA}}}_{\text{zero}}$. A short explanation of these error codes can be found in the appendix.
### 3.2 Conversion from ASK to NoCoLA

Sentence merging. For the NoCoLA datasets we want sentences as the unit of evaluation. Therefore we need to split the continuous text of ASK into sentences. However, some of the corrections suggested by the correctors affect the way the text is split into sentences, and we need alignment between the acceptable and non-acceptable sentences in the pairs for ${\mathbf{{NoCoLA}}}_{\text{zero}}$. We therefore decided to always keep the longest available version in cases where the two versions disagree. This principle applies to both datasets. Thus, the unit referred to as "sentence" in this paper can consist of multiple sentences.

Error extraction. For each of these sentences, we first extract a corrected (acceptable) version. In order to test only minimal errors and to label each non-acceptable sentence with an error type, we generate one non-acceptable sentence for each error found in the originals. Thereby we extract almost 100 000 non-acceptable sentences, as many of the original sentences have multiple errors.
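The extraction step can be sketched in a few lines; this is a minimal illustration, and the `(start, end, original_tokens, error_type)` representation of a correction is a hypothetical stand-in for ASK's actual annotation format:

```python
# Minimal sketch of the error-extraction step: for a corrected sentence and a
# list of annotated corrections, emit one non-acceptable sentence per error by
# re-inserting a single original (erroneous) passage into the corrected text.
# The (start, end, original_tokens, error_type) tuples are a hypothetical
# stand-in for ASK's XML annotations.

def extract_pairs(corrected_tokens, corrections):
    """Yield (non_acceptable, acceptable, error_type) sentence pairs."""
    for (start, end, original_tokens, error_type) in corrections:
        bad = corrected_tokens[:start] + original_tokens + corrected_tokens[end:]
        yield " ".join(bad), " ".join(corrected_tokens), error_type

corrected = ["Jeg", "er", "ikke", "nordmann", ",", "men", "jeg", "trives", "i", "Norge"]
# One annotated error: "men" was originally written as "med" (word choice).
corrections = [(5, 6, ["med"], "word choice")]
pairs = list(extract_pairs(corrected, corrections))
```

A sentence with three annotated errors would thus yield three separate non-acceptable sentences, each differing from the corrected version in exactly one place.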
<table><tr><td>Dataset</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>${\mathbf{{NoCoLA}}}_{class}$</td><td>116 195</td><td>14 289</td><td>14 383</td></tr><tr><td>${\mathbf{{NoCoLA}}}_{zero}$</td><td>-</td><td>-</td><td>99 115</td></tr></table>

Table 1: Number of sentences and sentence pairs, respectively, for both NoCoLA datasets.
Post-processing. We made a few additional adjustments to the dataset. All sentences are heuristically detokenized and removed if they contain an uneven count of quotation marks. If no error type is given for a correction, we also remove that sentence. In the original ASK dataset, sensitive words have been replaced by placeholders like "@sted" (place) and "@navn" (name) for anonymization purposes. We replace each placeholder with a substitute representative of that category, i.e. "Oslo" instead of "@sted", to normalize all sentences. On rare occasions, these replacements might cause some sentences to become erroneous, since the possible genitive and plural inflections in the original texts are not annotated with separate placeholder tokens.

Conversion results. The final dataset contains 144 867 sentences, 31.5% of which are acceptable. ${\mathbf{{NoCoLA}}}_{\text{class}}$ has been shuffled and then randomly split by the authors to ensure unbiased development and test sentences. The split has been done in an approximate 80:10:10 ratio, resulting in the sentence-level statistics in Table 1.
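The two post-processing filters can be sketched as follows; the substitute word for "@navn" is a hypothetical choice, since the paper only names "Oslo" as the substitute for "@sted":

```python
# Sketch of the post-processing step: drop sentences with an uneven number of
# quotation marks and replace anonymization placeholders with concrete words.
# The substitute for "@navn" ("Kari") is illustrative; the paper names only "Oslo".

SUBSTITUTES = {"@sted": "Oslo", "@navn": "Kari"}

def post_process(sentences):
    kept = []
    for s in sentences:
        if s.count('"') % 2 != 0:   # uneven quotation marks: remove sentence
            continue
        for placeholder, word in SUBSTITUTES.items():
            s = s.replace(placeholder, word)
        kept.append(s)
    return kept

cleaned = post_process(['Jeg bor i @sted .', 'Han sa "hei .'])
```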
## 4 Baseline models

### 4.1 Evaluation of ${\mathrm{{NoCoLA}}}_{\text{class}}$

In order to evaluate language models on ${\mathbf{{NoCoLA}}}_{\text{class}}$, we use the standard fine-tuning approach from Devlin et al. (2019). Accordingly, every sentence is tokenized, prepended by a special [CLS] token, appended by a [SEP] token and input to a pre-trained language model. Subsequently, the contextualized representation of the special [CLS] token is fed into a binary MLP classifier. The pre-trained weights of the language model are further trained together with the classifier weights.
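The input framing can be illustrated with a toy whitespace tokenizer; the actual experiments use each model's own subword tokenizer, and the encoder and MLP head are not shown:

```python
# Toy illustration of the NoCoLA_class input format: every sentence is wrapped
# in [CLS] ... [SEP] before being fed to the pre-trained encoder; the [CLS]
# representation then goes to a binary MLP classifier (not shown here).
# Whitespace splitting stands in for a real subword tokenizer.

def frame_for_classification(sentence):
    tokens = sentence.split()
    return ["[CLS]"] + tokens + ["[SEP]"]

framed = frame_for_classification("Samfunnet ville bli mer fornøyd .")
```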
### 4.2 Evaluation of ${\mathrm{{NoCoLA}}}_{\text{zero}}$

One disadvantage of ${\mathbf{{NoCoLA}}}_{\text{class}}$ is that the results are skewed by the second-stage supervised training, and it can be problematic to disentangle the properties of the LM from those of the classifier (Belinkov, 2022). In contrast, the purely LM-based evaluation of ${\mathbf{{NoCoLA}}}_{zero}$ attempts to measure the linguistic knowledge of a language model in a zero-shot manner - without any additional training. The dataset consists of 99 115 sentence pairs; each pair differs minimally on the surface level, but only one of the sentences is acceptable. We can use the intrinsic ability of language models to assign a probability to every sentence and test how often a language model assigns a higher probability to the correct sentence, as in Warstadt et al. (2020).
CLM evaluation. Causal language models are trained to estimate $p\left( {{\mathbf{s}}_{t} \mid {\mathbf{s}}_{ < t}}\right)$ for a sentence $\mathbf{s}$ and token ${\mathbf{s}}_{t}$, where ${\mathbf{s}}_{ < t} = \left( {{\mathbf{s}}_{i} \mid i < t}\right)$; the sentence log-probability is then simply given by $\log p\left( \mathbf{s}\right) = \mathop{\sum }\limits_{{t = 1}}^{N}\log p\left( {{\mathbf{s}}_{t} \mid {\mathbf{s}}_{ < t}}\right)$.
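The summation above can be made concrete with a toy causal model; the bigram probability table here is invented for illustration, whereas a real CLM conditions on the full prefix ${\mathbf{s}}_{<t}$ with a neural network:

```python
import math

# Toy causal LM: a hypothetical bigram table p(token | previous token) stands
# in for the neural model. log p(s) is the sum of per-token conditional
# log-probabilities, with a <s> start symbol.
BIGRAM = {
    ("<s>", "jeg"): 0.5, ("jeg", "trives"): 0.2,
    ("trives", "i"): 0.6, ("i", "Norge"): 0.3,
}

def log_prob(tokens, p=BIGRAM):
    """log p(s) = sum_t log p(s_t | s_{t-1})."""
    total, prev = 0.0, "<s>"
    for tok in tokens:
        total += math.log(p[(prev, tok)])
        prev = tok
    return total

score = log_prob(["jeg", "trives", "i", "Norge"])
```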
MLM evaluation. The issue with masked language models is that they are not designed to calculate the joint probability; they are trained to estimate $p\left( {{\mathbf{s}}_{t} \mid {\mathbf{s}}_{\smallsetminus t}}\right)$ - the likelihood of a token ${\mathbf{s}}_{t}$ given its bidirectional context ${\mathbf{s}}_{\smallsetminus t} = \left( {{\mathbf{s}}_{i} \mid i \neq t}\right)$. We can however still use MLMs to infer a score for each sentence, where a higher score corresponds to a more likely sentence. Wang and Cho (2019) defined the pseudo-log-likelihood score of a sentence $\mathbf{s}$ with model $\theta$ as

$$
\operatorname{PLL}\left( \mathbf{s}\right) = \frac{1}{N}\mathop{\sum }\limits_{{t = 1}}^{N}\log p\left( {{\mathbf{s}}_{t} \mid {\mathbf{s}}_{\smallsetminus t};\theta }\right) .
$$

Salazar et al. (2020) tested PLL and found that it produces accurate predictions on BLiMP. We adopt their approach and evaluate our models with PLL.
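A sketch of the PLL-based zero-shot comparison; the `mlm_log_prob` function is a toy stand-in for a real masked LM (which would mask position $t$ and return the log-probability of the original token), here simply penalising the misspelling "fammilie":

```python
import math

# Pseudo-log-likelihood scoring and zero-shot pair comparison for NoCoLA_zero.
# mlm_log_prob(tokens, t) stands in for log p(s_t | s_\t ; theta) from a real
# MLM; the toy lookup below assigns a low probability to "fammilie".
TOY_LOGP = {"fammilie": math.log(0.01)}

def mlm_log_prob(tokens, t):
    return TOY_LOGP.get(tokens[t], math.log(0.5))

def pll(tokens):
    """PLL(s) = (1/N) * sum_t log p(s_t | s_\t ; theta)."""
    return sum(mlm_log_prob(tokens, t) for t in range(len(tokens))) / len(tokens)

def zero_shot_accuracy(pairs):
    """Fraction of pairs where the acceptable sentence scores higher."""
    hits = sum(pll(good) > pll(bad) for good, bad in pairs)
    return hits / len(pairs)

pairs = [(["De", "er", "en", "rik", "familie"],
          ["De", "er", "en", "rik", "fammilie"])]
acc = zero_shot_accuracy(pairs)
```

The same `zero_shot_accuracy` loop works for CLM scoring by swapping `pll` for the sentence log-probability.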
## 5 Results

### 5.1 Results on ${\mathbf{{NoCoLA}}}_{\text{class}}$

The results from benchmarking the publicly available Norwegian language models on the classification task can be seen in Table 3. The classification accuracy is around ${80}\%$ for these models. One exception is the slightly older NorBERT 1, which performs substantially worse, even though it was trained on clean Norwegian data: Wikipedia and newspaper articles (Kutuzov et al., 2021). We use the English BERT${}_{\text{base}}$ as a naive baseline, which gives
<table><tr><td>Model</td><td colspan="10">Accuracy per error type (see Appendix A)</td><td>Overall</td></tr><tr><td>BERT${}_{\text{base}}$ (Devlin et al., 2019)</td><td>50.70</td><td>53.55</td><td>63.43</td><td>60.44</td><td>51.69</td><td>79.33</td><td>51.85</td><td>82.54</td><td>54.31</td><td>54.11</td><td>59.48</td></tr><tr><td>mBERT${}_{\text{base}}$ (Devlin et al., 2019)</td><td>79.92</td><td>69.05</td><td>90.74</td><td>76.91</td><td>78.84</td><td>83.97</td><td>74.88</td><td>87.88</td><td>78.72</td><td>80.44</td><td>79.53</td></tr><tr><td>XLM-R${}_{\text{base}}$ (Conneau et al., 2020)</td><td>91.43</td><td>85.28</td><td>92.60</td><td>87.43</td><td>87.56</td><td>83.93</td><td>84.33</td><td>90.60</td><td>89.63</td><td>91.96</td><td>88.02</td></tr><tr><td>ScandiBERT (Hofmann et al., 2022)</td><td>93.43</td><td>89.79</td><td>90.84</td><td>90.14</td><td>90.05</td><td>87.10</td><td>90.08</td><td>90.55</td><td>85.82</td><td>90.68</td><td>90.27</td></tr><tr><td>NB-BERT${}_{\text{base}}$ (Kummervold et al., 2021)</td><td>93.76</td><td>89.19</td><td>97.14</td><td>86.54</td><td>92.48</td><td>73.98</td><td>90.94</td><td>92.73</td><td>91.15</td><td>94.70</td><td>89.04</td></tr><tr><td>NorBERT 1 (Kutuzov et al., 2021)</td><td>93.46</td><td>88.46</td><td>94.54</td><td>88.66</td><td>89.41</td><td>88.46</td><td>92.01</td><td>94.26</td><td>90.83</td><td>93.05</td><td>90.83</td></tr><tr><td>NorBERT 2 (Kutuzov et al., 2021)</td><td>91.66</td><td>88.20</td><td>96.88</td><td>89.22</td><td>90.91</td><td>75.82</td><td>92.67</td><td>93.13</td><td>74.18</td><td>92.69</td><td>88.51</td></tr><tr><td>XLM-R${}_{\text{large}}$ (Conneau et al., 2020)</td><td>92.54</td><td>88.17</td><td>90.06</td><td>88.57</td><td>89.28</td><td>80.84</td><td>84.52</td><td>91.35</td><td>89.70</td><td>93.24</td><td>88.27</td></tr><tr><td>NB-BERT${}_{\text{large}}$ (Kummervold et al., 2021)</td><td>95.20</td><td>92.41</td><td>95.16</td><td>91.47</td><td>91.92</td><td>85.33</td><td>93.36</td><td>17.01</td><td>89.56</td><td>92.87</td><td>90.51</td></tr></table>

Table 2: The accuracy values of zero-shot evaluation on ${\mathbf{{NoCoLA}}}_{\text{zero}}$. Fine-grained results over different error types are reported (Appendix A), as well as the overall average over all sentence pairs in the dataset.
a lower bound on the performance of any decent Norwegian language model. It has the worst performance of all our models. The two largest models give a small increase in performance compared to the moderately sized versions of the same models.

<table><tr><td>Model</td><td>Lang.</td><td>Size</td><td>Accuracy</td><td>MCC</td></tr><tr><td>BERT${}_{\text{base}}$</td><td>en</td><td>110M</td><td>${69.56}^{\pm {0.37}}$</td><td>${23.99}^{\pm {0.41}}$</td></tr><tr><td>mBERT${}_{\text{base}}$</td><td>multi</td><td>178M</td><td>${75.28}^{\pm {0.66}}$</td><td>${46.39}^{\pm {0.67}}$</td></tr><tr><td>XLM-R${}_{\text{base}}$</td><td>multi</td><td>278M</td><td>${79.29}^{\pm {0.20}}$</td><td>${55.14}^{\pm {0.36}}$</td></tr><tr><td>ScandiBERT</td><td>multi</td><td>124M</td><td>${80.25}^{\pm {0.33}}$</td><td>${57.12}^{\pm {0.37}}$</td></tr><tr><td>NB-BERT${}_{\text{base}}$</td><td>no</td><td>178M</td><td>${80.69}^{\pm {0.44}}$</td><td>${58.10}^{\pm {0.48}}$</td></tr><tr><td>NorBERT 1</td><td>no</td><td>111M</td><td>${71.53}^{\pm {0.80}}$</td><td>${35.85}^{\pm {1.70}}$</td></tr><tr><td>NorBERT 2</td><td>no</td><td>125M</td><td>${79.99}^{\pm {0.27}}$</td><td>${56.09}^{\pm {0.30}}$</td></tr><tr><td>XLM-R${}_{\text{large}}$</td><td>multi</td><td>560M</td><td>${81.03}^{\pm {0.27}}$</td><td>${58.56}^{\pm {0.30}}$</td></tr><tr><td>NB-BERT${}_{\text{large}}$</td><td>no</td><td>355M</td><td>${\mathbf{{81.43}}}^{\pm {0.32}}$</td><td>${\mathbf{{59.68}}}^{\pm {0.14}}$</td></tr></table>

Table 3: Accuracy and the Matthews correlation coefficient (Matthews, 1975), the main metric of ${\mathbf{{NoCoLA}}}_{\text{class}}$. We report the mean and standard deviation across five runs on the test split.
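The Matthews correlation coefficient reported in Table 3 can be computed directly from the binary confusion counts; a minimal sketch (equivalent implementations exist in standard libraries such as scikit-learn):

```python
import math

# Matthews correlation coefficient from binary predictions.
# Labels: 1 = acceptable, 0 = non-acceptable.

def mcc(gold, pred):
    tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
    tn = sum(g == 0 and p == 0 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally 0 when any marginal is empty (the score is undefined).
    return (tp * tn - fp * fn) / denom if denom else 0.0

perfect = mcc([1, 0, 1, 0], [1, 0, 1, 0])
```

Unlike plain accuracy, MCC stays near zero for a majority-class classifier on the imbalanced NoCoLA${}_{\text{class}}$ label distribution (31.5% acceptable), which is why it is used as the main metric.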
### 5.2 Results on ${\mathbf{{NoCoLA}}}_{\text{zero}}$

On the raw zero-shot diagnostic task (Table 2), all models trained on Norwegian or Scandinavian languages perform well, with results around ${90}\%$ accuracy. The best performance comes, perhaps surprisingly, from NorBERT 1 - possibly because it was pre-trained on a relatively small clean corpus. Remarkably, an increased number of parameters does not seem to improve performance on this task.

We have also included accuracy scores for the individual error types; these fine-grained scores can be used as a helpful cue for NLP researchers who develop new language models. Comparably low scores can signal a problem with their training corpus or with their tokenizer. For example, the two NB-BERT models are relatively weak on punctuation-related errors. The large version is trained on uncased data, which explains this model's inability to handle case-related errors. ScandiBERT performs comparably to the Norwegian models on most parameters except for spelling.
## 6 Conclusion

In this paper we have proposed NoCoLA, the first dataset for linguistic acceptability in Norwegian Bokmål. We showed how to use it for measuring the linguistic knowledge of language models on both a classification task and a zero-shot probability comparison task. We have described how the datasets were created and what their motivation is, compared them to related work in English NLP, and showed how to use them for fine-grained error analysis of language models.

Lastly, we evaluated all existing Norwegian language models on both proposed tasks. These results suggest that models trained specifically for Norwegian or Scandinavian languages perform better at discriminating between acceptable and non-acceptable sentences. The classification results also show that linguistic acceptability is a relatively hard task, as none of the models achieved more than ${60}\%$ on the main MCC metric. The results on our diagnostic dataset highlight some shortcomings of the existing models. We will release all evaluation resources in the camera-ready version.
## References
|
| 270 |
+
|
| 271 |
+
433 Yonatan Belinkov. 2022. Probing Classifiers: Promises, Shortcomings, and Advances. Computational Lin- 435 guistics, 48(1):207-219.
|
| 272 |
+
|
| 273 |
+
Stig Johan Berggren. 2019. Automated assessment of 438 norwegian 12 essays using multi-task learning. master thesis, university of oslo.
|
| 274 |
+
|
| 275 |
+
440 Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2020. Unsupervised 443 cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- 445 ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Linguistics.
|
| 276 |
+
|
| 277 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 278 |
+
|
| 279 |
+
Valentin Hofmann, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. 2022. Geographic adaptation of pretrained language models.
|
| 280 |
+
|
| 281 |
+
Per E Kummervold, Javier De la Rosa, Freddy Wet-jen, and Svein Arne Brygfjeld. 2021. Operationaliz-ing a national digital library: The case for a Norwegian transformer model. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 20-29, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
|
| 282 |
+
|
| 283 |
+
Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja $\varnothing$ vrelid, and Stephan Oepen. 2021. Large-scale con-textualised language modelling for Norwegian. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 30-40, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
|
| 284 |
+
|
| 285 |
+
Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, and Nan Duan. 2021. GLGE: A new general language generation evaluation benchmark. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 408-420, Online. Association for Computational Linguistics.
|
| 286 |
+
|
| 287 |
+
B.W. Matthews. 1975. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta (BBA) - Protein Struc- 485 ture, 405(2):442-451.
|
| 288 |
+
|
| 289 |
+
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt 486
|
| 290 |
+
|
| 291 |
+
Gardner, Christopher Clark, Kenton Lee, and Luke 487
|
| 292 |
+
|
| 293 |
+
Zettlemoyer. 2018. Deep contextualized word repre- 488
|
| 294 |
+
|
| 295 |
+
sentations. In Proceedings of the 2018 Conference of 489
|
| 296 |
+
|
| 297 |
+
the North American Chapter of the Association for 490 Computational Linguistics: Human Language Tech-
|
| 298 |
+
|
| 299 |
+
nologies, Volume 1 (Long Papers), pages 2227-2237, 491
|
| 300 |
+
|
| 301 |
+
New Orleans, Louisiana. Association for Computa- 492 tional Linguistics.
|
| 302 |
+
|
| 303 |
+
Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699-2712, Online. Association for Computational Linguistics.

Kari Tenfjord, Paul Meurer, and Knut Hofland. 2006. The ASK Corpus - A Language Learner Corpus of Norwegian as a Second Language. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), Genova 2006. [link].

Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30-36, Minneapolis, Minnesota. Association for Computational Linguistics.

Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.

Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377-392.

Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
## A ${\mathbf{{NoCoLA}}}_{\text{zero }}$ error types
- Inflection: wrong form of word. Merged from ASK-codes "F": wrong morpho-syntactic form and "INFL": suffix from correct category, but wrong form for this particular word. "Jeg vet ikke hvorfor jeg har valgt dette oppgaven." "I do not know why I have chosen this task."

- Word choice: wrong choice of word. Merged from ASK-codes "W": wrong word and "FL": word from another language. "Jeg er et eksempel for det." "I am an example of that."

- Spelling: wrong spelling of word, corresponding to ASK-code "ORT". "De er en rik fammilie." "They are a rich family."

- Missing: word should be added. Corresponding to ASK-code "M". "Norge kan bidra veldig mye på Europeiske planet." "Norway can contribute a lot at the European level."

- Superfluous: word should be removed. Corresponding to ASK-code "R". "Da mistet jeg den beste vennen min i hele livet mitt." "Then I lost the best friend in my whole life."

- Punctuation: add or remove punctuation. Corresponding to ASK-codes "PUNC", "PUNCM" and "PUNCR". "Hva skal jeg gjøre etterpå." "What should I do afterwards?"

- Word order: wrong order of words or phrases. Corresponding to ASK-code "O". "Hvis du har tillatelse, du kan fiske også." "If you have a licence, you can fish as well."

- Capitalization: add or remove capitalization. Corresponding to ASK-code "CAP". "nå liker jeg meg godt i Oslo." "Now I enjoy myself in Oslo."

- Compounding: deviation regarding compounding. Corresponding to ASK-codes "PART" and "SPL". "Etter på skal jeg studere for å bli sykepleier." "Afterwards I want to study to become a nurse."

- Derivation: deviation regarding derivation. Corresponding to ASK-code "DER". "Derfor er jeg helt enig med forbudelse mot krenkende uttalelser." "Therefore I completely agree with the ban on offensive statements."

- Other: any other error
Figure 1: Distribution of error types in the NoCoLA datasets.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/UcWZrerHDCe/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,338 @@
§ NOCOLA: THE NORWEGIAN CORPUS OF LINGUISTIC ACCEPTABILITY

First Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Second Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
While there has been a surge of large language models for Norwegian in recent years, we lack any tool to evaluate their understanding of grammaticality. We present two new Norwegian datasets for this task. ${\mathbf{NoCoLA}}_{\text{class}}$ is a supervised binary classification task where the goal is to discriminate between acceptable and non-acceptable sentences. On the other hand, ${\mathbf{NoCoLA}}_{\text{zero}}$ is a purely diagnostic task for evaluating the grammatical judgement of a language model in a completely zero-shot manner, i.e. without any further training. In this paper, we describe both datasets in detail, show how to use them for different flavors of language models, and conduct a comparative study of the existing Norwegian language models.
§ 1 INTRODUCTION
Large pre-trained language models have recently led to a revolution in natural language processing (NLP) as they substantially increased the performance of most NLP tools (Peters et al., 2018; Devlin et al., 2019). Large language models were originally developed for English, but a surge of Norwegian-based models has recently followed (Kutuzov et al., 2021; Kummervold et al., 2021; Hofmann et al., 2022). The remaining issue is that the Norwegian linguistic resources do not contain a large range of tasks to evaluate and compare these models on, as opposed to the English benchmark suites like GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019) or GLGE (Liu et al., 2021), to name a few.

We present two new datasets for evaluating the understanding language models have of Norwegian grammar, jointly called the Norwegian corpus of linguistic acceptability (NoCoLA).

#Incorrect (inflection): Samfunnet ville bli mer fornøyet.
#Correct: Samfunnet ville bli mer fornøyd.
#Incorrect (word choice): Jeg er ikke nordmann, med jeg trives i Norge.
#Correct: Jeg er ikke nordmann, men jeg trives i Norge.

Listing 1: Two illustrative examples of incorrect / correct sentence pairs from ${\mathbf{NoCoLA}}_{\text{zero}}$. The English translations: "Society would be happier" and "I'm not Norwegian, but I enjoy living in Norway."

Our work is limited to the most widely used of the written standards for Norwegian, namely Bokmål. This paper
proposes two different views on the same set of sentences, each with a slightly different purpose:

* ${\mathbf{NoCoLA}}_{\text{class}}$ is a collection of sentences split into two classes: grammatically acceptable and non-acceptable. Thus, it is a binary classification task, where a language model is expected to first be fine-tuned on the training data split. This task is more practically oriented and evaluates the fine-tuning abilities of a language model. The downside is that we cannot tell if the performance comes from its innate abilities or if it was obtained from the supervised fine-tuning.

* ${\mathbf{NoCoLA}}_{\text{zero}}$ is a collection of pairs of sentences, where only one of them is grammatically acceptable. Here, we do not fine-tune on this task at all; the language model gives a probability to each of the two sentences, and we measure how often the correct one gets a higher probability. While not as practical as the first task, the zero-shot evaluation provides a better estimate of the innate grammatical understanding.

We provide a comprehensive evaluation of the existing Norwegian language models and release the data and code for an easy evaluation of new Norwegian models.${}^{1}$

${}^{1}$ anonymized.for/review
§ 2 RELATED WORK
The closest equivalent of our ${\mathbf{NoCoLA}}_{\text{class}}$ dataset is the English Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2019), while ${\mathbf{NoCoLA}}_{\text{zero}}$ roughly follows the Benchmark of Linguistic Minimal Pairs for English (BLiMP; Warstadt et al., 2020).
CoLA. This dataset consists of 10600 acceptable and non-acceptable sentences collected manually from the linguistics literature, with the goal of covering specific linguistic phenomena - and the morphological, syntactic and semantic violation of rules connected to those phenomena. By collecting the data in this manner, one ensures that the dataset represents language phenomena that are central to human linguistic competence according to linguists. CoLA has become a standard task for evaluating English language models after it was included in the GLUE benchmark for natural language understanding (Wang et al., 2018).
BLiMP. The BLiMP dataset consists of 67000 minimal pairs, all of them generated artificially. Some examples of phenomena covered in the dataset are determiner-noun agreement, verb argument structure and irregular verb-forms. Each pair differs only on one single parameter, namely the element that leads to the non-acceptability.
Comparison with NoCoLA. Our datasets fill the same purpose for evaluation of language models in Norwegian as CoLA and BLiMP do for English. However, the source of the sentences is different. Our data consists of naturally produced sentences, instead of controlled and artificially generated ones. Where CoLA collects sentences that are handpicked by linguists to represent specific linguistic phenomena, our sentences contain errors that mirror the natural distribution of errors in texts by second language learners. Thus, NoCoLA gives an indication of how well a given language model distinguishes between acceptable and non-acceptable Norwegian text, but not of how well it understands the full range of possible grammatical phenomena of the language. NoCoLA is also substantially larger than CoLA, with almost 15 times more examples. The NoCoLA error types are not comparable to BLiMP, where the error-types describe the underlying grammatical problem. Instead, the NoCoLA error-types describe the
changes that need to be made to correct the errors.
§ 3 DATASETS DESCRIPTION
§ 3.1 ASK CORPUS
Both ${\mathbf{NoCoLA}}_{\text{class}}$ and ${\mathbf{NoCoLA}}_{\text{zero}}$ require a source for both acceptable and non-acceptable sentences. The latter is hard to come by in most naturalistic text by adult native speakers. Our source for both NoCoLA datasets is the ASK Corpus - A Language Learner Corpus of Norwegian as a Second Language (Tenfjord et al., 2006). It consists of submissions by second language learners of Norwegian Bokmål around the year 2000, each consisting of one or more essays. The essays are written as solutions to two separate Norwegian language exams, which are estimated in Berggren (2019) to be approximately CEFR-levels B1 and B2. The texts are limited to one of the written standards for Norwegian, namely Bokmål.

There are 1935 submissions, with 46000 original sentences in total. Each essay has been manually corrected by native speakers, hereby called correctors. The errors in the corpus are annotated with a set of error-codes, which indicate the change that needs to be done to correct the original passage. For instance, "F" indicates wrong morpho-syntactic category, while "PUNCM" means that punctuation is missing and needs to be added. We have merged some of the error-codes so that we have a medium-grained way of understanding the performance of the models on the different types of errors found in ${\mathbf{NoCoLA}}_{\text{zero}}$. A short explanation of these error-codes can be found in the appendix.
§ 3.2 CONVERSION FROM ASK TO NOCOLA
Sentence merging. For the NoCoLA datasets we want sentences as the unit for evaluation. Therefore we need to split the continuous text of ASK into sentences. However, since some of the corrections suggested by the correctors affect the way the text is split into sentences, and we need alignment between the acceptable and non-acceptable sentences in the pairs for ${\mathbf{NoCoLA}}_{\text{zero}}$, we decided to always keep the longest available version in cases where there is disagreement between the two versions. The principle applies to both datasets. Thus, the unit referred to as "sentence" in this paper can consist of multiple sentences.

Error extraction. For each of these sentences, we first extract a corrected (acceptable) version. In order to test only minimal errors and to label each non-acceptable sentence with an error-type, we generate one non-acceptable sentence for each error found in the originals. Therefore we extract almost 100000 non-acceptable sentences, as many of the original sentences have multiple errors.
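The one-error-per-sentence extraction step can be sketched as follows. This is an illustrative simplification, not the paper's actual pipeline: here each error is assumed to be a single wrong token at a known position in the corrected token sequence, whereas the real ASK annotations cover richer edits (insertions, deletions, reorderings).

```python
def generate_unacceptable(corrected_tokens, errors):
    """Yield one non-acceptable sentence per error, reintroducing a
    single error at a time into the corrected token sequence.

    `errors` is a list of (position, wrong_token) pairs -- a hypothetical
    simplification of the ASK error annotations."""
    for position, wrong_token in errors:
        tokens = list(corrected_tokens)
        tokens[position] = wrong_token
        yield " ".join(tokens)
```

A sentence with two annotated errors thus yields two minimally different non-acceptable sentences, each paired with the same corrected version.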
| Dataset | Train | Dev | Test |
|---|---|---|---|
| ${\mathbf{NoCoLA}}_{class}$ | 116 195 | 14 289 | 14 383 |
| ${\mathbf{NoCoLA}}_{zero}$ | - | - | 99 115 |

Table 1: Number of sentences and sentence pairs, respectively, for both NoCoLA datasets.
Post-processing. We did a few additional adjustments to the dataset. All sentences are heuristically detokenized and removed if they contain an uneven count of quotation marks. If no error type is mentioned for a given correction, we also remove that sentence. In the original ASK dataset, sensitive words have been replaced by placeholders like "@sted" (place) and "@navn" (name) for anonymization purposes. We replace each placeholder with a substitute representation of that category, i.e. "Oslo" instead of "@sted", to normalize all sentences. On rare occasions, these replacements might cause some sentences to become erroneous, since the possible genitive and plural conjugations in the original texts are not annotated with separate placeholder-tokens.
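The placeholder-normalization step amounts to a simple substitution table. A minimal sketch, where the exact mapping (and the name "Kari") is an illustrative assumption rather than the table used to build NoCoLA:

```python
# Hypothetical substitution table: one concrete word per anonymization
# placeholder category from the ASK corpus.
SUBSTITUTES = {
    "@sted": "Oslo",   # place
    "@navn": "Kari",   # name
}

def normalize_placeholders(sentence: str) -> str:
    """Replace each anonymization placeholder with a concrete word."""
    for placeholder, substitute in SUBSTITUTES.items():
        sentence = sentence.replace(placeholder, substitute)
    return sentence
```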
Conversion results. The final dataset contains 144867 sentences, 31.5% of which are acceptable. ${\mathbf{{NoCoLA}}}_{\text{ class }}$ has been shuffled and then randomly split by the authors to ensure unbiased development and test sentences. The split has been done in an approximate 80:10:10 ratio, resulting in the sentence-level statistics from Table 1.
§ 4 BASELINE MODELS
§ 4.1 EVALUATION OF ${\MATHRM{{NOCOLA}}}_{\TEXT{ CLASS }}$
In order to evaluate language models on ${\mathbf{NoCoLA}}_{\text{class}}$, we use the standard fine-tuning approach from Devlin et al. (2019). Accordingly, every sentence is tokenized, prepended by a special [CLS] token, appended by a [SEP] token and input to a pre-trained language model. Subsequently, the contextualized representation of the special [CLS] token is fed into a binary MLP classifier. The pre-trained weights of the language model are further trained together with the classifier weights.
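The classification head itself is a small MLP on the [CLS] vector. A toy sketch of the forward pass, with the encoder omitted (`cls_vector` stands in for its output) and tiny fixed weights that are purely illustrative:

```python
import math

# Hypothetical 2 -> 2 -> 1 MLP head; in practice the input dimension is
# the encoder's hidden size and the weights are learned during fine-tuning.
W1 = [[0.1, -0.2], [0.3, 0.05]]   # input dim 2 -> hidden dim 2
B1 = [0.0, 0.0]
W2 = [0.4, -0.1]
B2 = 0.0

def acceptability_score(cls_vector):
    """Return P(acceptable) from a [CLS] vector via a one-hidden-layer MLP."""
    hidden = [math.tanh(sum(x * w for x, w in zip(cls_vector, col)) + b)
              for col, b in zip(zip(*W1), B1)]
    logit = sum(h * w for h, w in zip(hidden, W2)) + B2
    return 1.0 / (1.0 + math.exp(-logit))
```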
§ 4.2 EVALUATION OF ${\MATHRM{{NOCOLA}}}_{\TEXT{ ZERO }}$
One disadvantage of ${\mathbf{NoCoLA}}_{\text{class}}$ is that the results are skewed by the second-stage supervised training and it can be problematic to disentangle the properties of the LM from the classifier (Belinkov, 2022). In contrast, pure LM-based evaluation on ${\mathbf{NoCoLA}}_{zero}$ attempts to measure the linguistic knowledge of a language model in a zero-shot manner, without any additional training. The dataset consists of 99115 sentence pairs; each pair differs minimally on the surface level, but only one of the sentences is acceptable. We can use the intrinsic ability of language models to assign a probability to every sentence and test how often a language model assigns a higher probability to the correct sentence, as in Warstadt et al. (2020).
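The pairwise protocol just described reduces to a simple comparison. A sketch, where `score` stands in for any sentence-level scoring function (causal log-probability or pseudo-log-likelihood, both defined below):

```python
def pairwise_accuracy(pairs, score):
    """Fraction of (acceptable, unacceptable) pairs where the model
    assigns a strictly higher score to the acceptable sentence."""
    wins = sum(1 for good, bad in pairs if score(good) > score(bad))
    return wins / len(pairs)
```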
CLM evaluation. The causal language models are trained to estimate $p\left( {{\mathbf{s}}_{t} \mid {\mathbf{s}}_{ < t}}\right)$ for sentence $\mathbf{s}$ and token ${\mathbf{s}}_{t}$ where ${\mathbf{s}}_{ < t} = \left( {{\mathbf{s}}_{i} \mid i < t}\right)$ ; then the sentence log-probability is simply given by $\log p\left( \mathbf{s}\right) =$ $\mathop{\sum }\limits_{{t = 1}}^{N}\log p\left( {{\mathbf{s}}_{t} \mid {\mathbf{s}}_{ < t}}\right)$ .
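The causal scoring rule can be written out directly. In this sketch, `cond_prob` is a hypothetical stand-in for a real model's $p(\mathbf{s}_t \mid \mathbf{s}_{<t})$:

```python
import math

def sentence_log_prob(tokens, cond_prob):
    """log p(s) = sum over t of log p(s_t | s_{<t}).

    `cond_prob(token, prefix)` returns the model's conditional probability
    of `token` given the preceding tokens."""
    return sum(math.log(cond_prob(tokens[t], tokens[:t]))
               for t in range(len(tokens)))
```

For instance, a uniform model over a four-word vocabulary assigns every three-token sentence the log-probability 3 log(1/4).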
MLM evaluation. The issue with masked language models is that they are not designed to calculate the joint probability; they are trained to estimate $p\left( {{\mathbf{s}}_{t} \mid {\mathbf{s}}_{\smallsetminus t}}\right)$ - the likelihood of a token ${s}_{t}$ given its bidirectional context ${\mathbf{s}}_{\smallsetminus t} = \left( {{\mathbf{s}}_{i} \mid i \neq t}\right)$ . We can however still use MLMs to infer a score for each sentence where a higher score corresponds to a more likely sentence. Wang and Cho (2019) defined pseudo-log-likelihood score of a sentence $s$
with model $\theta$ as
$$
\operatorname{PLL}\left( \mathbf{s}\right) = \frac{1}{N}\mathop{\sum }\limits_{{t = 1}}^{N}\log p\left( {{\mathbf{s}}_{t} \mid {\mathbf{s}}_{\smallsetminus t};\theta }\right) .
$$
Salazar et al. (2020) tested PLL and found that it produces accurate predictions on BLiMP. We adopt their approach and evaluate our models with PLL.
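The PLL score above masks each position in turn and averages the resulting token log-probabilities. A sketch, where `masked_prob` is a hypothetical stand-in for a real MLM's $p(\mathbf{s}_t \mid \mathbf{s}_{\smallsetminus t})$:

```python
import math

def pseudo_log_likelihood(tokens, masked_prob):
    """PLL(s) = (1/N) * sum over t of log p(s_t | s with position t masked).

    `masked_prob(token, context)` returns the model's probability of
    `token` at the single "[MASK]" position in `context`."""
    n = len(tokens)
    total = 0.0
    for t in range(n):
        context = tokens[:t] + ["[MASK]"] + tokens[t + 1:]
        total += math.log(masked_prob(tokens[t], context))
    return total / n
```

Note that, following the equation above, the score is length-normalized, so sentences of different lengths remain comparable within a minimal pair.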
§ 5 RESULTS
§ 5.1 RESULTS ON ${\MATHBF{{NOCOLA}}}_{\TEXT{ CLASS }}$
The results from benchmarking the publicly available Norwegian language models on the classification task can be seen in Table 3. The classification accuracy is around ${80}\%$ for these models. One exception is the slightly older NorBERT 1, which performs substantially worse, even though it was trained on clean Norwegian data: Wikipedia and newspaper articles (Kutuzov et al., 2021). We use the English ${\mathrm{BERT}}_{\text{base}}$ as a naive baseline, which gives
| Model | Error types (see Appendix A) | Overall |
|---|---|---|
| ${\mathrm{BERT}}_{\text{base}}$ (Devlin et al., 2019) | 50.70 53.55 63.43 60.44 51.69 79.33 51.85 82.54 54.31 54.11 | 59.48 |
| ${\mathrm{mBERT}}_{\text{base}}$ (Devlin et al., 2019) | 79.92 69.05 90.74 76.91 78.84 83.97 74.88 87.88 78.72 80.44 | 79.53 |
| XLM-R${}_{\text{base}}$ (Conneau et al., 2020) | 91.43 85.28 92.60 87.43 87.56 83.93 84.33 90.60 89.63 91.96 | 88.02 |
| ScandiBERT (Hofmann et al., 2022) | 93.43 89.79 90.84 90.14 90.05 87.10 90.08 90.55 85.82 90.68 | 90.27 |
| NB-BERT${}_{\text{base}}$ (Kummervold et al., 2021) | 93.76 89.19 97.14 86.54 92.48 73.98 90.94 92.73 91.15 94.70 | 89.04 |
| NorBERT 1 (Kutuzov et al., 2021) | 93.46 88.46 94.54 88.66 89.41 88.46 92.01 94.26 90.83 93.05 | 90.83 |
| NorBERT 2 (Kutuzov et al., 2021) | 91.66 88.20 96.88 89.22 90.91 75.82 92.67 93.13 74.18 92.69 | 88.51 |
| XLM-R${}_{\text{large}}$ (Conneau et al., 2020) | 92.54 88.17 90.06 88.57 89.28 80.84 84.52 91.35 89.70 93.24 | 88.27 |
| NB-BERT${}_{\text{large}}$ (Kummervold et al., 2021) | 95.20 92.41 95.16 91.47 91.92 85.33 93.36 17.01 89.56 92.87 | 90.51 |

Table 2: The accuracy values of zero-shot evaluation on ${\mathbf{NoCoLA}}_{\text{zero}}$. Fine-grained results over different error types are reported (Appendix A), as well as the overall average over all sentence pairs in the dataset.
a lower bound on the performance of any decent Norwegian language model; it has the worst performance of all our models.
| Model | Lang. | Size | Accuracy | MCC |
|---|---|---|---|---|
| ${\mathrm{BERT}}_{\text{base}}$ | en | 110M | ${69.56}^{\pm{0.37}}$ | ${23.99}^{\pm{0.41}}$ |
| ${\mathrm{mBERT}}_{\text{base}}$ | multi | 178M | ${75.28}^{\pm{0.66}}$ | ${46.39}^{\pm{0.67}}$ |
| XLM-R${}_{\text{base}}$ | multi | 278M | ${79.29}^{\pm{0.20}}$ | ${55.14}^{\pm{0.36}}$ |
| ScandiBERT | multi | 124M | ${80.25}^{\pm{0.33}}$ | ${57.12}^{\pm{0.37}}$ |
| NB-BERT${}_{\text{base}}$ | no | 178M | ${80.69}^{\pm{0.44}}$ | ${58.10}^{\pm{0.48}}$ |
| NorBERT 1 | no | 111M | ${71.53}^{\pm{0.80}}$ | ${35.85}^{\pm{1.70}}$ |
| NorBERT 2 | no | 125M | ${79.99}^{\pm{0.27}}$ | ${56.09}^{\pm{0.30}}$ |
| XLM-R${}_{\text{large}}$ | multi | 560M | ${81.03}^{\pm{0.27}}$ | ${58.56}^{\pm{0.30}}$ |
| NB-BERT${}_{\text{large}}$ | no | 355M | ${\mathbf{81.43}}^{\pm{0.32}}$ | ${\mathbf{59.68}}^{\pm{0.14}}$ |
Table 3: Accuracy and the Matthews correlation coefficient (Matthews, 1975), the main metric of ${\mathbf{{NoCoLA}}}_{\text{ class }}$ . We report the mean and standard deviation across five runs on the test split.
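The Matthews correlation coefficient reported in Table 3 can be computed from the binary confusion counts. A self-contained sketch (any standard implementation, e.g. scikit-learn's, is equivalent):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
    with the conventional value 0 when the denominator is zero."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom
```

Unlike accuracy, MCC stays informative under the class imbalance of ${\mathbf{NoCoLA}}_{\text{class}}$, where only 31.5% of the sentences are acceptable.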
The two largest models give a small increase in performance compared to the moderately sized versions of the same models.
§ 5.2 RESULTS ON ${\MATHBF{{NOCOLA}}}_{\TEXT{ ZERO }}$
On the raw zero-shot diagnostic task (Table 2), all models trained on Norwegian or Scandinavian languages perform well, with results around ${90}\%$ accuracy. The best performance comes, perhaps surprisingly, from NorBERT 1, possibly because it was pre-trained on a relatively small, clean corpus. Remarkably, an increased number of parameters does not seem to improve performance on this task.
We have also included accuracy scores for the individual error-types; these fine-grained scores can be used as a helpful cue for NLP researchers who develop new language models. Comparably low scores can signal a problem with their training corpus or with their tokenizer. For example, the two NB-BERT models are relatively weak on punctuation-related errors. The large version is trained on uncased data, which explains its inability to understand the case-related errors. ScandiBERT performs comparably to the Norwegian models on most error types except for spelling.
§ 6 CONCLUSION
In this paper we have proposed NoCoLA, the first dataset for linguistic acceptability in Norwegian Bokmål. We showed how to use it for measuring the linguistic knowledge of language models on both a classification task and a zero-shot probability comparison task. We have described how the datasets were created and what their motivation is, compared them to related work in English NLP, and showed how to use them for fine-grained error analysis of language models.

Lastly, we evaluated all existing Norwegian language models on both proposed tasks. These results suggest that models trained specifically for Norwegian or Scandinavian languages perform better at discriminating between acceptable and non-acceptable sentences. The classification results also show that linguistic acceptability is a relatively hard task, as none of the models achieved more than ${60}\%$ on the main MCC metric. The results on our diagnostic dataset highlight some shortcomings of the existing models. We will release all evaluation sources in the camera-ready version.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,417 @@
## Neural Text-to-Speech Synthesis for Võro

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract

This paper presents the first high-quality neural text-to-speech (TTS) system for Võro, a minority language spoken in Southern Estonia. By leveraging existing Estonian TTS models and datasets, we analyze whether common low-resource NLP techniques, such as cross-lingual transfer learning from related languages or multi-task learning, can benefit our low-resource use case. Our results show that we can achieve high-quality Võro TTS without transfer learning and that using more diverse training data can even decrease synthesis quality. While these techniques may still be useful in some cases, our work highlights the need for caution when they are applied in specific low-resource scenarios, and it can provide valuable insights for future low-resource research and efforts in preserving minority languages.
## 1 Introduction

Advancements in neural text-to-speech (TTS) technology have greatly improved the quality of speech synthesis for many languages. However, despite the potential benefits of TTS for facilitating accessibility and language preservation, developing TTS systems for low-resource languages remains challenging due to the limited availability of training data for these languages.

Võro, a Finno-Ugric minority language spoken in Southern Estonia, is a prime example of a low-resource language that could benefit from TTS technology. While linguistic resources for Võro are limited, the language is closely related to Estonian, a high-resource Finno-Ugric language with significantly more datasets, tools, and pre-trained models.

The goal of this paper is to present the first high-quality neural TTS system for Võro and to evaluate various low-resource NLP techniques for improving synthesis quality for the language. By leveraging existing Estonian TTS models and datasets, we investigate the impact of transfer learning from related languages and of multi-speaker and multilingual approaches on the TTS quality of Võro.

The main contributions of this paper are:

1. We develop the first high-quality neural text-to-speech system for Võro and make it publicly available${}^{1}$.

2. We show that having only 1.5 hours of Võro speech data per speaker is sufficient to develop TTS systems for low-resource languages without using cross-lingual transfer learning or additional monolingual data.

3. We highlight the potential negative effects of diversifying low-resource TTS datasets with data from closely related languages.
## 2 Background

As neural text-to-speech models require vast amounts of data, existing research has proposed several approaches to mitigate the issue of insufficient training data. For example, several works have shown that cross-lingual pretraining improves the quality of low-resource TTS systems (Chen et al., 2019; Xu et al., 2020).

In a survey on multilingual strategies for low-resource TTS, Do et al. (2021) evaluated the usefulness of multilingual datasets for improving low-resource language performance. They observed that for sequence-to-sequence models, including additional data from other languages is almost always beneficial and often outweighs the negative effect of having a lower ratio of target data in the entire training dataset. The authors also noted that there is no clear evidence that using supporting languages from the same language family is more beneficial, but claimed that using a shared input representation space (such as phonemes) may be more important.

---

${}^{1}$ Link will be added after the anonymization period.

---

At the same time, closely related languages have been successfully used to boost low-resource performance in many text-based NLP tasks, including the development of Finno-Ugric machine translation systems that also cover the Võro language (Tars et al., 2021). Unfortunately, the usage of neural methods for Võro has so far been limited to this example. There is also no existing research on Võro TTS. While the Estonian Language Institute and the Võro Institute have collaborated to create an HMM-based TTS system for Võro${}^{2}$, this work has not been described in research.
## 3 Methodology

In this section, we present our methodology and experimental setup. Our approach evaluates the benefits of low-resource TTS techniques when training non-autoregressive Transformer-based models (Ren et al., 2019; Łańcucki, 2021). We focus on three common strategies: cross-lingual transfer learning from a pre-trained Estonian TTS model, combining data from multiple Võro speakers, and including Estonian data to create a multilingual system. Additionally, we explore data augmentation to handle the orthographic variation of Võro.
### 3.1 Datasets

Our experiments used speech data from two Võro speakers: an adult male and a female child. Both datasets were obtained from the Estonian Language Institute and contained an identical set of 1132 sentences, out of which 100 were set aside for evaluation purposes.

The Estonian dataset consisted of 6 male and 4 female speakers from the Speech Corpus of Estonian News Sentences (Fishel et al., 2020) and the Estonian Language Institute's audiobook corpora (Piits, 2022a,b). A subset of 1000 sentences per speaker was selected from the Estonian corpora to balance the training dataset.

The audio files were resampled to 22,050 Hz and converted into mel-spectrograms using a Hann window with a frame size of 1024 and a hop length of 256. The mel-spectrogram frames were aligned to the graphemes using the Estonian alignment model by Alumäe et al. (2018). Training a separate alignment model for Võro was also considered, but initial testing showed that the Estonian model was able to produce high-quality alignments. The alignment was also used to trim excessive pauses in the audio.

All datasets were lowercased, and punctuation was normalized to a limited set of characters to reduce the vocabulary size. In total, the training dataset contained 3 hours of Võro and 14 hours of Estonian speech.
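As a rough illustration of these framing parameters, the sketch below shows how a 1024-sample Hann window with a 256-sample hop maps audio samples to spectrogram frames. It is a hypothetical numpy-only helper, not the authors' pipeline, and it omits the mel filterbank projection:

```python
import numpy as np

def frame_count(num_samples: int, hop_length: int = 256) -> int:
    """Number of centered analysis frames, following common STFT defaults."""
    return 1 + num_samples // hop_length

def spectrogram_frames(audio: np.ndarray, frame_size: int = 1024,
                       hop_length: int = 256) -> np.ndarray:
    """Hann-windowed magnitude spectrogram frames (no mel projection)."""
    window = np.hanning(frame_size)
    # pad so that frames are centered on their timestamps
    pad = frame_size // 2
    x = np.pad(audio, (pad, pad), mode="reflect")
    n_frames = 1 + (len(x) - frame_size) // hop_length
    frames = np.stack([x[i * hop_length : i * hop_length + frame_size] * window
                       for i in range(n_frames)])
    # rFFT keeps the non-redundant half: frame_size // 2 + 1 bins
    return np.abs(np.fft.rfft(frames, axis=1))

# one second of audio at 22,050 Hz
spec = spectrogram_frames(np.random.randn(22050))
```

With centered framing, one second of audio at 22,050 Hz yields 87 frames of 513 frequency bins each.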
### 3.2 Data augmentation

While the Võro dataset follows a standardized version of Võro orthography, many speakers and well-known news outlets do not conform to this standard. For example, the glottal stop (q) may be omitted or used only when it affects the meaning of the word, and some speakers may also use an apostrophe instead of the letter q. Similarly, an apostrophe or an acute accent that marks palatalization is often used only when it affects the meaning.

We considered it an important challenge to create a system that could successfully synthesize speech from all common written forms of Võro. As there are no existing NLP tools for Võro that would allow us to analyze these features automatically, we used data augmentation to generate orthographic alternatives in which glottal stops or palatalization markers were removed, so that the system could cope with different orthographies.

Additionally, while our dataset contained the letter y, all cases of it were replaced with õ, as the two are no longer differentiated according to the orthographic standardization changes from 2005.
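The augmentation described above can be sketched as follows. The function and the exact substitution rules are hypothetical simplifications of the orthographic variation discussed (glottal stop written as q, palatalization marked with an apostrophe or acute accent), not the authors' actual implementation:

```python
import re

def orthographic_variants(sentence: str) -> set:
    """Generate simplified orthographic variants of a Võro sentence
    (hypothetical sketch of the augmentation idea)."""
    variants = {sentence}
    # variant 1: drop the glottal stop, written as word-final 'q'
    variants.add(re.sub(r"q(?=\s|$)", "", sentence))
    # variant 2: drop palatalization marks (apostrophe or combining acute)
    variants.add(sentence.replace("'", "").replace("\u0301", ""))
    return {v.strip() for v in variants}
```

Each training sentence can then be paired with the same audio under every variant spelling, so the model learns to accept all of them.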
### 3.3 Model Configuration

All models were trained using an open-source implementation of a non-autoregressive Transformer-based (Vaswani et al., 2017) model. The architecture is similar to FastPitch (Łańcucki, 2021), with explicit duration and pitch prediction components. An existing multi-speaker model for Estonian (Rätsep et al., 2022) was used for our cross-lingual transfer learning experiments. In multi-speaker systems, the speaker identity was marked with a prepended global style token (Wang et al., 2018).

We trained models with three different data configurations: single-speaker Võro models for each speaker, multi-speaker Võro models with both speakers, and multi-speaker multilingual models with both Estonian and Võro data. For each data configuration, we also trained another model initialized with the weights of the existing Estonian model. All models were trained for at least 400k steps using identical hyperparameters.

---

${}^{2}$ https://www.eki.ee/~indrek/voru/index.php

---
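One common way to implement such prepended speaker conditioning is to reserve extra vocabulary ids and prepend one per utterance. The vocabulary and speaker ids below are purely illustrative, not the actual model's symbol set:

```python
import numpy as np

# hypothetical vocabulary: graphemes plus one reserved id per speaker
GRAPHEME_IDS = {c: i for i, c in
                enumerate(sorted(set("abcdefghijklmnopqrstuvwõäöüšž' .,-")), start=1)}
SPEAKER_IDS = {"voro_male": 100, "voro_child": 101}  # ids outside the grapheme range

def encode(text: str, speaker: str) -> np.ndarray:
    """Prepend a speaker token to the grapheme id sequence, so the encoder
    sees the speaker identity as the first input symbol."""
    ids = [SPEAKER_IDS[speaker]] + [GRAPHEME_IDS[c] for c in text if c in GRAPHEME_IDS]
    return np.array(ids, dtype=np.int64)

seq = encode("tere", "voro_male")
```

The encoder then learns a speaker-specific embedding for each reserved id, while the rest of the network is shared across speakers.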
## 4 Results

To assess the quality of the models, we conducted a mean opinion score (MOS) (Chu and Peng, 2001) evaluation${}^{3}$ among volunteers from the Võro community. The evaluators were required to know the Võro language but did not have to be native speakers. Of the 41 volunteers, 6 considered themselves native speakers, and 9 had a self-reported Võro level of C1 or higher. Many participants with lower levels of Võro knowledge also mentioned that their passive language skills were stronger, as they mostly used Võro when communicating with older family members who were native speakers.

The evaluation used a subset of 50 random sentences per speaker (100 total per method) from the held-out dataset, and the samples were generated using pretrained HiFi-GAN (Kong et al., 2020) models. The appropriate model for each speaker was selected by evaluating samples generated with multiple vocoder models. For the lower-pitched male speaker, we used a model trained on the VCTK dataset (Yamagishi et al., 2019), and for the child speaker, we used a model trained on the LJ Speech corpus (Ito and Johnson, 2017) and fine-tuned on Tacotron 2 (Shen et al., 2018) output. We also included ground truth samples from the held-out dataset as well as ground truth samples converted to mel-spectrograms and reconstructed by the same vocoder models.

| Method | MOS |
| --- | --- |
| Ground truth | 4.03 ± 0.12 |
| Ground truth + vocoder | 3.83 ± 0.13 |
| Single-speaker | 3.55 ± 0.15 |
| Single-speaker (transfer) | 3.62 ± 0.15 |
| Multi-speaker | 3.43 ± 0.15 |
| Multi-speaker (transfer) | 3.50 ± 0.13 |
| Multilingual | 3.10 ± 0.15 |
| Multilingual (transfer) | 3.29 ± 0.15 |

Table 1: Mean opinion scores with 95% confidence intervals on the held-out dataset.

The evaluation results can be seen in Table 1. Expectedly, ground truth samples in their original and reconstructed forms scored the highest among the participants. Of the TTS models, the highest scores were given to the single-speaker models. These were followed by the multi-speaker Võro models, but the performance drop from the single-speaker models should not be considered significant. The multilingual models showed consistently worse performance compared to the monolingual models. Additionally, we observe minor benefits from using cross-lingual transfer learning.
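The MOS values in Table 1 are reported as mean ± half-width of a 95% confidence interval. A minimal sketch of this computation under a normal approximation, with made-up scores rather than the study's raw ratings, is:

```python
import math

def mos_with_ci(scores, z: float = 1.96):
    """Mean opinion score with a normal-approximation 95% CI half-width."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)  # z = 1.96 for a 95% interval
    return mean, half_width

# hypothetical ratings on the 1-5 MOS scale
mean, hw = mos_with_ci([4, 3, 5, 4, 4, 3, 5, 4])
```

With many raters per method, as in this study, the half-width shrinks roughly with the square root of the number of ratings.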
In addition to scoring samples, participants were encouraged to comment on their overall impressions of speech quality and of the evaluation process. Many expressed positive surprise about the synthesis quality and mentioned the presence of TTS artifacts, such as crackling, as their main evaluation criterion. Some participants also noted that while almost all samples were intelligible, they did not always sound like a native Võro speaker, especially when producing the glottal stop sound. Unfortunately, as the participants did not know which models produced which samples, further analysis would be needed to assess whether all models are equally prone to this issue and whether it can also be observed in the ground truth examples.
## 5 Discussion and Future Work

Unexpectedly, our MOS evaluation results conflict with the existing low-resource TTS literature, which reports benefits from diversifying training data with samples from other speakers or related languages and from using cross-lingual transfer learning. This brings into question both the usefulness of these techniques and our approach.

Firstly, it could be argued that the observations by Do et al. (2021) about the low negative performance impact of data imbalance may not apply to non-autoregressive Transformer-based systems, as their study focused on other methods, such as recurrent or convolutional neural networks. Therefore, the performance drop in multilingual models could still be caused by an imbalance between the two languages in the dataset. Alternatively, as our model size was dictated by the existing pretrained Estonian models, the model may lack sufficient capacity to work in a multilingual setting.

---

${}^{3}$ A link to evaluation samples will be added after the anonymization period.

---

Additionally, it is possible that we should no longer consider Võro a low-resource language. Based on initial testing, we found that the amount of speech data required for Transformer-based models to produce coherent speech is between 1 and 2 hours, and improvements from using more data are significantly less noticeable. Similar observations of reduced data requirements for Transformer-based models have also recently been reported by Pine et al. (2022). In our case, we had 1.5 hours of speech per speaker, which may have been sufficient for us not to benefit from additional data from other speakers. However, a more detailed evaluation methodology could be considered to measure the effects on specific features of synthetic speech, such as prosodic variability or pronunciation mistakes.

As our work focused on creating a high-quality system for Võro without applying artificial constraints, these points were not explicitly explored. However, in the future, low-resource TTS strategies should be further reviewed specifically for Transformer-based architectures and for different levels of resource constraint. Until then, these strategies should be used with caution and evaluated for each specific low-resource scenario.
## 6 Conclusion

This article presented the first high-quality neural text-to-speech system for the Võro language. We explored the use of Estonian TTS models and datasets to boost the performance of our low-resource use case.

Our results suggest that we can achieve high-quality Võro TTS without transfer learning or the use of data from multiple speakers or closely related languages. While these techniques may still be helpful in some cases, we highlight the need for further research and evaluation when they are applied in specific low-resource scenarios.
## References

Tanel Alumäe, Ottokar Tilk, and Asadullah. 2018. Advanced rich transcription system for Estonian speech. In Human Language Technologies - the Baltic Perspective: Proceedings of the Eighth International Conference, pages 1-8. IOS Press.

Yuan-Jui Chen, Tao Tu, Cheng-chieh Yeh, and Hung-Yi Lee. 2019. End-to-end text-to-speech for low-resource languages by cross-lingual transfer learning. In Proc. Interspeech 2019, pages 2075-2079.

Min Chu and Hu Peng. 2001. An objective measure for estimating MOS of synthesized speech. In EUROSPEECH 2001, 7th European Conference on Speech Communication, pages 2087-2090. ISCA.

Phat Do, Matt Coler, Jelske Dijkstra, and Esther Klabbers. 2021. A systematic review and analysis of multilingual data strategies in text-to-speech for low-resource languages. In Proc. Interspeech 2021, pages 16-20.

Mark Fishel, Annika Laumets-Tättar, and Liisa Rätsep. 2020. Speech corpus of Estonian news sentences. https://doi.org/10.15155/9-00-0000-0000-0000-001ABL.

Keith Ito and Linda Johnson. 2017. The LJ Speech dataset. https://keithito.com/LJ-Speech-Dataset/.

Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis. In Advances in Neural Information Processing Systems, pages 17022-17033. Curran Associates, Inc.

Adrian Łańcucki. 2021. FastPitch: Parallel text-to-speech with pitch prediction. In 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6588-6592.

Liisi Piits. 2022a. Estonian female voice audiobook corpus for speech synthesis. https://doi.org/10.15155/3-00-0000-0000-0000-090D4L.

Liisi Piits. 2022b. Estonian male voice audiobook corpus for speech synthesis. https://doi.org/10.15155/3-00-0000-0000-0000-08BF4L.

Aidan Pine, Dan Wells, Nathan Brinklow, Patrick Littell, and Korin Richmond. 2022. Requirements and motivations of low-resource speech synthesis for language revitalization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland. Association for Computational Linguistics.

Liisa Rätsep, Rasmus Lellep, and Mark Fishel. 2022. Estonian text-to-speech synthesis with non-autoregressive transformers. Baltic Journal of Modern Computing, 10.

Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. FastSpeech: Fast, robust and controllable text to speech. In Advances in Neural Information Processing Systems. Curran Associates, Inc.

Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, R. J. Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, and Yonghui Wu. 2018. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779-4783.

Maali Tars, Andre Tättar, and Mark Fishel. 2021. Extremely low-resource machine translation for closely related languages. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 41-52, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. Curran Associates, Inc.

Yuxuan Wang, Daisy Stanton, Yu Zhang, R. J. Skerry-Ryan, Eric Battenberg, Joel Shor, Ying Xiao, Fei Ren, Ye Jia, and Rif A. Saurous. 2018. Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis. arXiv preprint arXiv:1803.09017.

Jin Xu, Xu Tan, Yi Ren, Tao Qin, Jian Li, Sheng Zhao, and Tie-Yan Liu. 2020. LRSpeech: Extremely low-resource speech synthesis and recognition. arXiv preprint arXiv:2008.03687.

Junichi Yamagishi, Christophe Veaux, and Kirsten MacDonald. 2019. CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit (version 0.92). https://datashare.ed.ac.uk/handle/10283/3443.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,289 @@
| 1 |
+
000 054
|
| 2 |
+
|
| 3 |
+
§ NEURAL TEXT-TO-SPEECH SYNTHESIS FOR VÖRO
|
| 4 |
+
|
| 5 |
+
001 055
|
| 6 |
+
|
| 7 |
+
002 056
|
| 8 |
+
|
| 9 |
+
003 Anonymous Author
|
| 10 |
+
|
| 11 |
+
004 Affiliation / Address line 1
|
| 12 |
+
|
| 13 |
+
005 Affiliation / Address line 2 006 Affiliation / Address line 3 007 email@domain
|
| 14 |
+
|
| 15 |
+
Anonymouser Author
|
| 16 |
+
|
| 17 |
+
Affiliation / Address line 1
|
| 18 |
+
|
| 19 |
+
Affiliation / Address line 2
|
| 20 |
+
|
| 21 |
+
Affiliation / Address line 3
|
| 22 |
+
|
| 23 |
+
email@domain
|
| 24 |
+
|
| 25 |
+
Anonymousest Author 057
|
| 26 |
+
|
| 27 |
+
Affiliation / Address line 1 058
|
| 28 |
+
|
| 29 |
+
Affiliation / Address line 2 059 060 Affiliation / Address line 3 061 email@domain 062
|
| 30 |
+
|
| 31 |
+
063
|
| 32 |
+
|
| 33 |
+
011
|
| 34 |
+
|
| 35 |
+
§ ABSTRACT
|
| 36 |
+
|
| 37 |
+
013 This paper presents the first high-quality neural text-to-speech (TTS) system for Vöro, a minority language spoken in
|
| 38 |
+
|
| 39 |
+
016 Southern Estonia. By leveraging existing Estonian TTS models and datasets, we an-
|
| 40 |
+
|
| 41 |
+
018 alyze whether common low-resource NLP techniques, such as cross-lingual transfer learning from related languages or multi-
|
| 42 |
+
|
| 43 |
+
021 task learning, can benefit our low-resource use case. Our results show that we can
|
| 44 |
+
|
| 45 |
+
023 achieve high-quality Vöro TTS without transfer learning and that using more diverse training data can even decrease syn-
|
| 46 |
+
|
| 47 |
+
026 thesis quality. While these techniques may still be useful in some cases, our work
|
| 48 |
+
|
| 49 |
+
028 highlights the need for caution when applied in specific low-resource scenarios, and it can provide valuable insights for fu-
|
| 50 |
+
|
| 51 |
+
031 ture low-resource research and efforts in
|
| 52 |
+
|
| 53 |
+
033 preserving minority languages.
|
| 54 |
+
|
| 55 |
+
§ 1 INTRODUCTION
|
| 56 |
+
|
| 57 |
+
The advancements in neural text-to-speech (TTS)
|
| 58 |
+
|
| 59 |
+
036 technology have greatly improved the quality of speech synthesis for many languages. However,
|
| 60 |
+
|
| 61 |
+
038 despite the potential benefits of TTS for facilitating accessibility and language preservation, developing TTS systems for low-resource languages remains challenging due to the limited availability of training data for these languages.
|
| 62 |
+
|
| 63 |
+
Vöro, a Finno-Ugric minority language spoken in Southern Estonia, serves as a great example of a low-resource language that could benefit from TTS technology. While linguistic resources for Vöro are limited, the language is closely related to Estonian - a high-resource Finno-Ugric language with significantly more datasets, tools, and pre-trained models.
|
| 64 |
+
|
| 65 |
+
The goal of this paper is to present the first high-
|
| 66 |
+
|
| 67 |
+
053 quality neural TTS system for Vöro and evaluate
|
| 68 |
+
|
| 69 |
+
064
|
| 70 |
+
|
| 71 |
+
various low-resource NLP techniques for improv- 065 ing synthesis quality for the language. By lever-
|
| 72 |
+
|
| 73 |
+
aging existing Estonian TTS models and datasets, 067 we investigate the impact of transfer learning from related languages and multi-speaker and multilin-
|
| 74 |
+
|
| 75 |
+
gual approaches on the TTS quality of Vöro. 070
|
| 76 |
+
|
| 77 |
+
The main contributions of this paper are: 072
|
| 78 |
+
|
| 79 |
+
1. We develop the first high-quality neural text-to-speech system for Vöro and make it pub-
|
| 80 |
+
|
| 81 |
+
licly available ${}^{1}$ . 075
|
| 82 |
+
|
| 83 |
+
2. We show that having only 1.5 hours of Vöro 077 speech data per speaker is sufficient to de-
|
| 84 |
+
|
| 85 |
+
velop TTS systems for low-resource lan- 079
|
| 86 |
+
|
| 87 |
+
guages without using cross-lingual transfer 080
|
| 88 |
+
|
| 89 |
+
learning or additional monolingual data. 082
|
| 90 |
+
|
| 91 |
+
3. We highlight the potential negative effects of 083
|
| 92 |
+
|
| 93 |
+
diversifying low-resource TTS datasets with 084
|
| 94 |
+
|
| 95 |
+
data from closely related languages. 085
|
| 96 |
+
|
| 97 |
+
086
|
| 98 |
+
|
| 99 |
+
§ 2 BACKGROUND
|
| 100 |
+
|
| 101 |
+
087
|
| 102 |
+
|
| 103 |
+
088
|
| 104 |
+
|
| 105 |
+
As neural text-to-speech models require vast 089
|
| 106 |
+
|
| 107 |
+
amounts of data, existing research has proposed 090 several approaches to mitigate the issue of in-
|
| 108 |
+
|
| 109 |
+
sufficient training data. For example, several 092 works have shown that cross-lingual pretraining improves the quality of low-resource TTS systems
|
| 110 |
+
|
| 111 |
+
(Chen et al., 2019; Xu et al., 2020). 095
In a survey on multilingual strategies for low-resource TTS, Do et al. (2021) evaluated the usefulness of multilingual datasets for improving low-resource language performance. They observed that for sequence-to-sequence models, including additional data from other languages is almost always beneficial and often outweighs the negative effect of a lower ratio of target-language data in the training dataset. The authors also noted that there is no clear evidence that using supporting languages from the same language family is more beneficial, but claimed that using a shared input representation space (such as phonemes) may be more important.
${}^{1}$ Link will be added after the anonymization period

At the same time, using closely related languages to boost low-resource performance has been successful for many text-based NLP tasks, including the development of Finno-Ugric machine translation systems that also cover the Võro language (Tars et al., 2021). Unfortunately, the use of neural methods for Võro has so far been limited to this example, and there is no existing research on Võro TTS. While the Estonian Language Institute and the Võro Institute have collaborated to create an HMM-based TTS system for Võro${}^{2}$, this work has not been described in the research literature.
§ 3 METHODOLOGY

In this section, we present our methodology and experimental setup. Our approach evaluates the benefits of low-resource TTS techniques when training non-autoregressive Transformer-based models (Ren et al., 2019; Łańcucki, 2021). We focus on three common strategies - cross-lingual transfer learning from a pre-trained Estonian TTS model, combining data from multiple Võro speakers, and including Estonian data to create a multilingual system. Additionally, we explore data augmentation to handle the orthographic variation of Võro.
§ 3.1 DATASETS

Our experiments used speech data from two Võro speakers - an adult male and a child (female). Both datasets were obtained from the Estonian Language Institute and contained an identical set of 1132 sentences, out of which 100 were set aside for evaluation purposes.
The Estonian dataset consisted of 6 male and 4 female speakers from the Speech Corpus of Estonian News Sentences (Fishel et al., 2020) and the Estonian Language Institute's audiobook corpora (Piits, 2022a,b). A subset of 1000 sentences per speaker was selected from the Estonian corpora to balance the training dataset.
The audio files were resampled to 22050 Hz and converted into mel-spectrograms using a Hann window with a frame size of 1024 and a hop length of 256. The mel-spectrogram frames were aligned to the graphemes using the Estonian alignment model by Alumäe et al. (2018). Training a separate alignment model for Võro was also considered, but initial testing showed that the Estonian model was able to produce high-quality alignments. The alignment was also used to trim excessive pauses in the audio.

All datasets were lowercased, and punctuation was normalized to a limited set of characters to reduce the vocabulary size. In total, the training dataset contained 3 hours of Võro and 14 hours of Estonian speech.
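The framing parameters above can be illustrated with a minimal NumPy sketch of the short-time analysis step. This is not the authors' pipeline: the mel filterbank that would follow is omitted, and only the Hann-windowing arithmetic with frame size 1024 and hop length 256 at 22050 Hz is shown.

```python
import numpy as np

def stft_frames(audio, frame_size=1024, hop_length=256):
    """Slice audio into overlapping frames, apply a Hann window, and FFT."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(audio) - frame_size) // hop_length
    frames = np.stack([
        audio[i * hop_length : i * hop_length + frame_size] * window
        for i in range(n_frames)
    ])
    # Magnitude spectrogram; a mel filterbank would be applied to this
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a 440 Hz tone at the paper's 22050 Hz sampling rate
sr = 22050
audio = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
spec = stft_frames(audio)
print(spec.shape)  # (n_frames, frame_size // 2 + 1)
```

With these settings, one second of audio yields 83 frames of 513 frequency bins each, matching the usual `1 + (N - frame_size) // hop_length` frame count.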
§ 3.2 DATA AUGMENTATION

While the Võro dataset follows a standardized version of Võro orthography, many speakers and well-known news outlets do not conform to this standard. For example, the glottal stop (q) may be omitted or used only when it affects the meaning of the word, and some speakers may also use an apostrophe instead of the letter q. Similarly, an apostrophe or an acute accent that marks palatalization is often used only when it affects the meaning.
We considered it an important challenge to create a system that could successfully synthesize speech from all common written forms of Võro. As there are no existing NLP tools for Võro that would allow us to analyze these features automatically, we decided to use data augmentation, generating orthographic alternatives with glottal stop or palatalization markers removed so that the system could cope with different orthographies.

Additionally, while our dataset contained the letter y, all of its occurrences were replaced with õ, as the two are no longer differentiated following the orthographic standardization changes of 2005.
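A minimal sketch of such augmentation is shown below. The marker characters and the example word are illustrative assumptions, not the authors' exact rules.

```python
import re

def augment_orthography(sentence):
    """Generate simplified orthographic variants of a Võro sentence
    by dropping glottal-stop and palatalization markers (illustrative)."""
    base = sentence.replace("y", "õ")  # 2005 standardization: y merged into õ
    variants = {base}
    variants.add(base.replace("q", ""))       # without the glottal stop letter
    variants.add(re.sub(r"['´]", "", base))   # without palatalization marks
    variants.add(re.sub(r"[q'´]", "", base))  # without either feature
    return sorted(variants)

# "tulõq" is used here as an assumed example word with a final glottal stop
print(augment_orthography("tulõq sisse"))
```

Each training sentence would then be paired with its simplified variants so the model sees every common spelling of the same word form.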
§ 3.3 MODEL CONFIGURATION

All models were trained using an open-source implementation of a non-autoregressive Transformer-based (Vaswani et al., 2017) model. The architecture is similar to FastPitch (Łańcucki, 2021), with explicit duration and pitch prediction components. An existing multi-speaker model for Estonian (Rätsep et al., 2022) was used for our cross-lingual transfer learning experiments. In multi-speaker systems, the speaker identity was marked with a prepended global style token (Wang et al., 2018).
We trained models with three different data configurations - single-speaker Võro models for each speaker, multi-speaker Võro models with both speakers, and multi-speaker multilingual models with both Estonian and Võro data. For each data configuration, we also trained another model, initialized with the weights of the existing Estonian model. All models were trained for at least 400k steps using identical hyperparameters.

${}^{2}$ https://www.eki.ee/~indrek/voru/index.php
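The speaker-marking scheme can be sketched as prepending a per-speaker symbol to the input sequence. This is a simplified stand-in for the global-style-token mechanism: the function, vocabulary, and speaker names below are invented for illustration.

```python
def encode_input(text, speaker, vocab, speakers):
    """Prepend a speaker symbol to the grapheme id sequence.

    Simplified stand-in for speaker conditioning; real global style tokens
    are learned embeddings rather than extra vocabulary entries.
    """
    speaker_id = len(vocab) + speakers.index(speaker)  # ids after graphemes
    return [speaker_id] + [vocab[ch] for ch in text]

vocab = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwõäöü ")}
speakers = ["vro_speaker_1", "vro_speaker_2"]
ids = encode_input("tere", "vro_speaker_2", vocab, speakers)
print(ids)
```

The model then learns to associate the leading symbol with one speaker's voice, which is what allows a single network to serve both Võro speakers.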
§ 4 RESULTS

To assess the quality of the models, we conducted a mean opinion score (MOS) (Chu and Peng, 2001) evaluation${}^{3}$ among volunteers from the Võro community. The evaluators were required to know the Võro language but did not have to be native speakers. Of the 41 volunteers, 6 considered themselves native speakers, and 9 had a self-reported Võro level of C1 or higher. Many participants with lower levels of Võro knowledge also mentioned that their passive language skills were higher, as they mostly used Võro when communicating with older family members who were native speakers.
The evaluation used a subset of 50 random sentences per speaker (100 total per method) from the held-out dataset, and the samples were generated using pretrained HiFi-GAN (Kong et al., 2020) vocoder models. The appropriate model for each speaker was selected by evaluating samples generated with multiple vocoder models. For the lower-pitched male speaker, we used a model trained on the VCTK dataset (Yamagishi et al., 2019), and for the child speaker, we used a model trained on the LJ Speech (Ito and Johnson, 2017) corpus and fine-tuned on Tacotron 2 (Shen et al., 2018) output. We also included ground truth samples from the held-out dataset and ground truth samples converted to mel-spectrograms and reconstructed by the same vocoder models.
The evaluation results can be seen in Table 1. Expectedly, the ground truth samples in their original and reconstructed forms scored the highest among the participants. Among the TTS models, the highest scores were given to the single-speaker models. These were followed by the multi-speaker Võro models, though the performance drop from the single-speaker models should not be considered significant. The multilingual models showed consistently worse performance compared to the monolingual models. Additionally, we observe minor
<table><tr><td>Method</td><td>MOS</td></tr><tr><td>Ground truth</td><td>4.03 ± 0.12</td></tr><tr><td>Ground truth + vocoder</td><td>3.83 ± 0.13</td></tr><tr><td>Single-speaker</td><td>3.55 ± 0.15</td></tr><tr><td>Single-speaker (transfer)</td><td>3.62 ± 0.15</td></tr><tr><td>Multi-speaker</td><td>3.43 ± 0.15</td></tr><tr><td>Multi-speaker (transfer)</td><td>3.50 ± 0.13</td></tr><tr><td>Multilingual</td><td>3.10 ± 0.15</td></tr><tr><td>Multilingual (transfer)</td><td>3.29 ± 0.15</td></tr></table>

Table 1: Mean opinion scores with 95% confidence intervals on the held-out dataset.
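The intervals reported in Table 1 are consistent with a standard normal-approximation confidence interval over per-sample ratings; a sketch is given below. The paper does not specify its exact CI computation, and the ratings in the example are invented.

```python
import math
import statistics

def mos_with_ci(scores, z=1.96):
    """Mean opinion score with a normal-approximation 95% confidence interval."""
    mean = statistics.mean(scores)
    half_width = z * statistics.stdev(scores) / math.sqrt(len(scores))
    return mean, half_width

ratings = [4, 3, 5, 4, 4, 3, 5, 4]  # invented 1-5 ratings for illustration
mean, half = mos_with_ci(ratings)
print(f"{mean:.2f} ± {half:.2f}")
```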
benefits from using cross-lingual transfer learning.
In addition to scoring the samples, participants were encouraged to comment on their overall impressions of the speech quality and the evaluation process. Many expressed positive surprise at the synthesis quality and mentioned the presence of TTS artifacts, such as crackling, as their main evaluation criterion. Some participants also noted that while almost all samples were intelligible, they did not always sound like a native Võro speaker, especially when producing the glottal stop sound. Unfortunately, as the participants did not know which models produced which samples, further analysis would be needed to assess whether all models are equally prone to this issue and whether it can also be observed in the ground truth examples.
§ 5 DISCUSSION AND FUTURE WORK

Unexpectedly, our MOS evaluation results conflict with the existing low-resource TTS literature, which reports benefits from diversifying training data with samples from other speakers or related languages and from using cross-lingual transfer learning. This brings into question both the usefulness of these techniques and our approach.
Firstly, it could be argued that the observations by Do et al. (2021) about the low negative performance impact of data imbalance may not apply to non-autoregressive Transformer-based systems, as that study focused on other methods, such as recurrent or convolutional neural networks. Therefore, the performance drop in the multilingual models could still be caused by an imbalance between the two languages in the dataset. Alternatively, as our model size was dictated by the existing pretrained Estonian models, the model may lack sufficient capacity to work in a multilingual setting.

${}^{3}$ A link to evaluation samples will be added after the anonymization period
Additionally, it is possible that we should no longer consider Võro a low-resource language. Based on initial testing, we found that the amount of speech data required for Transformer-based models to produce coherent speech is between 1 and 2 hours, and improvements from using more data are significantly less noticeable. Similar observations of reduced data requirements for Transformer-based models have also recently been reported by Pine et al. (2022). In our case, we had 1.5 hours of speech per speaker, which may have been sufficient for us not to benefit from additional data from other speakers. However, a more detailed evaluation methodology could be considered to measure the effects on specific features of synthetic speech, such as prosodic variability or pronunciation mistakes.

As our work focused on creating a high-quality system for Võro without applying artificial constraints, these points were not explicitly explored. However, low-resource TTS strategies should be further reviewed specifically for Transformer-based architectures and for different levels of resource constraint. Until then, these strategies should be used with caution and evaluated for each specific low-resource scenario.
§ 6 CONCLUSION

This article presented the first high-quality neural text-to-speech system for the Võro language. We explored the use of Estonian TTS models and datasets to boost the performance of our low-resource use case.

Our results suggest that we can achieve high-quality Võro TTS without transfer learning or data from multiple speakers or closely related languages. While these techniques may still be helpful in some cases, we highlight the need for further research and evaluation when they are applied in specific low-resource scenarios.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_md/Initial_manuscript.md
## ASR Language Resources for Faroese

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract
The aim of this work is to present a set of novel language resources for Faroese suitable for the field of automatic speech recognition, including: an ASR corpus comprising 109 hours of transcribed speech data; acoustic models for systems such as WAV2VEC2, NVIDIA-NeMo, Kaldi and PocketSphinx; a set of n-gram language models; and a set of pronunciation dictionaries covering two different variants of Faroese. We also present comparison results between the distinct acoustic models. All the resources described in this document are publicly available under Creative Commons licences.
## 1 Introduction

As the digital world has become increasingly prominent and omnipresent in most human activities, the need for more and better language technologies has become pressing. For this reason, more and more governments are investing in the development of all kinds of linguistic resources that allow their citizens to be part of the new digital era, with all the benefits it entails. Language technology initiatives in the main regions of the world, such as Europe (Rehm et al., 2020; Nikulásdóttir et al., 2020; Meister et al., 2010; D'Halleweyn et al., 2006), India (Vikas, 2001; Choudhary, 2021), Africa (Grover et al., 2011), China (Kania et al., 2018), Saudi Arabia (Maegaard et al., 2008, 2005) and the Spanish-speaking countries (Fernandez et al., 2016), allow us to attest how important language technologies have become in recent times.
In synchrony with all the developments mentioned above, it is time to discuss the efforts made for the development of the Faroese language in the digital sphere. The most recent initiative in this regard is the Ravnur Project, founded in the Faroe Islands. Thanks to the resources generated and shared by Ravnur, it has been possible to develop all the language resources presented in this document.
### 1.1 Faroese

The Faroe Islands are a group of small islands located in the North Atlantic, halfway between Scotland, Iceland and Norway. They are an autonomous territory of the Kingdom of Denmark, with Faroese as the official language, spoken by around 54,000 people. There are four main dialect areas in the Faroe Islands: north, northwest, central and southern (Petersen, 2022). The Faroe Islands are a bilingual country with Danish as the second official language. While many native speakers of Faroese use Danish for university education or employment in Denmark, Faroese is spoken as a first language by most of the population and is used in all domains in the Faroe Islands, e.g. in education, the public sector, the church etc. The first and, to this date, only Faroese speech synthesis system was created in 2005 (Helgason and Gullbein, 2005) by combining efforts from researchers at the University of Stockholm and the University of the Faroe Islands, and is used by the visually impaired community. Currently, there is a huge demand for Faroese ASR solutions, needed by the deaf, visually impaired and dyslexic communities - and also the general public, who wish to use their mother tongue when interacting with technology.
### 1.2 The Ravnur Project

The Faroese ASR research project, Ravnur, was assembled in 2019 (Foundation, 2019). The aim of the project was to create open-source resources that could be used to build automatic speech recognition (ASR) systems for Faroese. These resources are also useful for creating other types of language technologies, as well as for linguistic research. The project was funded by public and private initiators and investors, including the Faroese government. The development team consisted of a project leader, a technical leader, three native-speaking junior linguists, an IT assistant, five university student assistants, as well as external advisors. The project concluded in the summer of 2022 with the publication of the Basic Language Resource Kit for Faroese (BLARK) (Simonsen et al., 2022; Debess et al., 2022).
### 1.3 Basic Language Resource Kit (BLARK) for Faroese

A BLARK is defined as the minimal set of language resources needed to create language and speech technology for a language (Krauwer, 2003; Maegaard et al., 2006). A BLARK is ideally language independent, but because languages may have different requirements, the contents of a BLARK may vary in some respects from language to language.

As Ravnur was an ASR project, the focus was on collecting good-quality recordings of Faroese and creating a transcription corpus and a pronunciation dictionary. During the course of the project, Ravnur collected 135 hours of recordings of 433 speakers in total (249 female and 184 male) reading texts of various genres, such as news, blogs, Wikipedia, law texts, GPS commands, word lists etc. The participants self-reported their gender, native language, dialect and age, which varies between 15 and 83 years. The recordings were made on TASCAM DR-40 Linear PCM audio recorders using the built-in stereo microphones, in 16-bit WAVE with a sample rate of 48 kHz. All recordings have been manually orthographically transcribed, while part of the speech corpus has also been phonetically transcribed. The transcriptions were made by the university student assistants and the three Faroese linguists working for the project. All words that occur in the recordings were put in a pronunciation dictionary. The dictionary includes phonetic transcriptions written in SAMPA and PAROLE PoS-tags (Bilgram and Keson, 1998; Keson, 1998)${}^{1}$.
As can be seen, the BLARK developed by Ravnur is the starting point for the novel machine learning models presented in this work.
## 2 The Ravnursson Corpus

Ravnursson${}^{2}$ (Hernández Mena and Simonsen, 2022) is an ASR corpus with a length of 109 hours, extracted from the BLARK described in Section 1.3. Unlike the original BLARK, Ravnursson only contains the speech files along with their respective transcriptions. The main characteristics of the corpus are the following:

- The audio files in this corpus are distributed in FLAC format at 16 kHz @ 16 bit, mono.

- The corpus contains 71,949 speech files from 433 speakers.

- The corpus is split into train, dev, and test portions. The lengths of the portions are: train = 100h08m, dev = 4h30m, test = 4h30m.

- The development and test portions each have exactly 10 male and 10 female speakers, and both portions have exactly the same size in hours.

- Due to the limited number of prompts to read, only 39,945 of the 71,949 prompts in the whole corpus are unique. In other words, 44.48% of the prompts in the corpus are repeated at least once.

- Despite the repeated prompts in the corpus, the development and test portions do not share speakers with each other or with the training set.
### 2.1 Analysis of the Repeated Prompts

As the number of reading prompts was limited during the recording process, a common situation in the Ravnursson corpus is that one prompt is read by more than one speaker. This is relevant because it is common practice in ASR to create a language model from the prompts found in the train portion of a corpus. This is not recommended for the Ravnursson corpus, as it contains several prompts shared by all the portions, which would introduce an important bias into the language modeling task.

Table 1 shows some statistics about the repeated prompts across all the portions of the corpus.
---

${}^{1}$ Both the Faroese SAMPA alphabet (sometimes called FARSAMPA) and the PAROLE PoS-tags were created by Ravnur for the BLARK.

${}^{2}$ As a matter of fact, the name Ravnursson comes from Ravnur (a tribute to the Ravnur Project) and the suffix "son", which in Icelandic means "son of". Therefore, the name "Ravnursson" means "the (Icelandic) son of Ravnur". The double "ss" is just for aesthetics.

---
The way this table is to be understood is as follows: for example, the first row indicates that there is a total of 71,949 reading prompts in the whole corpus; 39,945 of those are unique and 32,004 are repeated at least once. Therefore, a total of 44.48% of the prompts in the whole corpus are repeated at least once. The same applies to the rest of the rows in Table 1.
<table><tr><td>Corpus Portion</td><td>Total Prompts</td><td>Unique Prompts</td><td>Repeat. Prompts</td><td>%</td></tr><tr><td>All</td><td>71,949</td><td>39,945</td><td>32,004</td><td>44.48%</td></tr><tr><td>Train</td><td>65,616</td><td>38,646</td><td>26,970</td><td>41.10%</td></tr><tr><td>Test</td><td>3,002</td><td>2,887</td><td>115</td><td>3.83%</td></tr><tr><td>Dev</td><td>3,331</td><td>3,302</td><td>29</td><td>0.87%</td></tr></table>
Table 1: Analysis of Repeated Prompts.
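The row arithmetic of Table 1 (repeated prompts = total minus unique, and the percentage taken over the total) can be reproduced with a small sketch on a toy prompt list; this reading of "repeated" is an assumption that matches the reported numbers.

```python
def prompt_stats(prompts):
    """Per-portion numbers in the style of Table 1: total prompts,
    unique prompts, repeated prompts (total minus unique), and the
    percentage of repeated prompts over the total."""
    total = len(prompts)
    unique = len(set(prompts))
    repeated = total - unique
    return total, unique, repeated, round(100 * repeated / total, 2)

# Toy example: five prompt readings covering three distinct texts
print(prompt_stats(["a", "b", "a", "c", "a"]))  # (5, 3, 2, 40.0)
```

Applied to the "All" row, 71,949 total and 39,945 unique prompts give 32,004 repeated prompts, i.e. 44.48%.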
### 2.2 Corpus Organization

The "speech" directory contains all the speech files of the corpus. The files in the speech folder are divided into three directories: train, dev and test. The train portion is sub-divided into three types of recordings: RDATA1O, RDATA1OP and RDATA2; this is due to the organization of the recordings in the original BLARK, where the recordings are divided into Rdata1 and Rdata2.

One main difference between Rdata1 and Rdata2 is that the reading environment for Rdata2 was controlled by a software tool called "PushPrompt", which is included in the original BLARK (Simonsen et al., 2022). Another difference is that Rdata1 contains some transcriptions labelled at the phoneme level. The audio files in the speech directory of the Ravnursson corpus are divided into the folders RDATA1O, where "O" stands for "orthographic", and RDATA1OP, where "O" stands for orthographic and "P" for phonetic. These categories are just a remnant of the original BLARK; they do not imply that the Ravnursson corpus comes with transcriptions at the phonetic level. In the case of the dev and test portions, the data come only from Rdata2, which does not have phoneme-level labels in the original BLARK.
### 2.3 The Metadata File

The metadata file is a tab-separated values (TSV) file containing all the relevant information about the corpus. The file can be read using the Pandas (McKinney et al., 2010) library in Python, and it comprises the following 12 columns:
1. id: The filename without the extension 270
|
| 170 |
+
|
| 171 |
+
".flac". 271
|
| 172 |
+
|
| 173 |
+
272
|
| 174 |
+
|
| 175 |
+
2. speaker_id: The filename without the seg- 273
|
| 176 |
+
|
| 177 |
+
ment number. 274
|
| 178 |
+
|
| 179 |
+
275
|
| 180 |
+
|
| 181 |
+
3. filename: Full filename including the exten- 276 sion ".flac".
|
| 182 |
+
|
| 183 |
+
4. sentence_norm: The normalized transcription: no punctuation marks, no digits, lower
|
| 184 |
+
|
| 185 |
+
case letters, one single space between words. 281
|
| 186 |
+
|
| 187 |
+
5. gender: The gender of the speaker: male or 283 female.
|
| 188 |
+
|
| 189 |
+
6. age: The age range of the speaker: 15-35, 36- 286
|
| 190 |
+
|
| 191 |
+
60, 61+ years old. 287
|
| 192 |
+
|
| 193 |
+
7. native_language: "Faroese" in all the cases. 288 289
|
| 194 |
+
|
| 195 |
+
8. dialect: The speaker dialect. 290 291
|
| 196 |
+
|
| 197 |
+
9. created_at: The date when the audio file was
|
| 198 |
+
|
| 199 |
+
recorded. 293
|
| 200 |
+
|
| 201 |
+
10. duration: Duration of the speech file in sec-
|
| 202 |
+
|
| 203 |
+
onds. 296
|
| 204 |
+
|
| 205 |
+
11. sample_rate: ${16kHz}$ in all the cases. 298
|
| 206 |
+
|
| 207 |
+
12. status: The corpus portion: train, test or dev.
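The column layout above can be exercised directly with Pandas, as suggested. The following is a minimal sketch using a hypothetical one-row inline sample; the filename "metadata.tsv" and the sample values are assumptions for illustration only.

```python
import io
import pandas as pd

# In practice the file would be loaded directly, e.g.:
#   df = pd.read_csv("metadata.tsv", sep="\t")
# Here we inline a hypothetical one-row sample instead.
sample = io.StringIO(
    "id\tspeaker_id\tfilename\tsentence_norm\tgender\tage\t"
    "native_language\tdialect\tcreated_at\tduration\tsample_rate\tstatus\n"
    "MEY01_040319_rok0_0009\tMEY01\tMEY01_040319_rok0_0009.flac\t"
    "some normalized text\tmale\t15-35\tFaroese\tE\t04/03/19\t9.7\t16000\ttrain\n"
)
df = pd.read_csv(sample, sep="\t")

# Total speech duration (in hours) per corpus portion:
hours = df.groupby("status")["duration"].sum() / 3600
print(df.shape)  # (1, 12)
print(hours["train"])
```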
### 2.4 Codification of the Audio Filenames
In the Ravnursson corpus, the filenames of the audio files encode relevant information about the respective speech files. The first row of Table 2 shows a typical audio filename, the second row enumerates the fields of information encoded in the filename, and the third row shows the filename of row one broken down into the eight parts specified in the second row.
<table><tr><td colspan="8">MEY01_040319_rok0_0009.flac</td></tr><tr><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td></tr><tr><td>M</td><td>E</td><td>Y</td><td>01</td><td>040319</td><td>rok0</td><td>0009</td><td>.flac</td></tr></table>
Table 2: Audio Filename Format.
The information encoded in the filename is as follows:
1. Gender of the Speaker: $\mathbf{M}$ for male or $\mathbf{K}$ for female.

2. Dialect Group: $\mathbf{U}$ for Suðuroy, $\mathbf{A}$ for Sandoy, $\mathbf{S}$ for Suðurstreymoy, $\mathbf{E}$ for Norðurstreymoy/Eysturoy (excluding Eiði, Gjógv and Funningur), $\mathbf{V}$ for Vágar and $\mathbf{N}$ for Norðuroyggjar (including Eiði, Gjógv and Funningur).

3. Age Group: $\mathbf{Y}$ for "Younger" (15-35 years old), $\mathbf{M}$ for "Middle-aged" (36-60 years old) and $\mathbf{E}$ for "Elderly" (61 years old or older).

4. Number of Speaker in a Group: a two-digit number starting at 01. The first speaker in a group with the same gender, dialect group and age group (e.g. MEY) gets the number 01; the next speaker in the same group gets the number 02 (and their ID is therefore MEY02).

5. Date: The date when the speech was recorded (day/month/year).

6. Type of reading material: This code is only found in speech files in RDATA1O and RDATA1OP. For more information about the types of reading material, please see the documentation of the original BLARK and its directory "readingtexts_1.0".

7. Segment Number: In the original BLARK, each recording session is distributed as one audio file per speaker, which can be very long from an ASR perspective. The audio files are therefore subdivided into segments of around 10 seconds to fit most modern ASR engines. The numbering is continuous for each speaker; the only exceptions are the files MUY01_180519_set4_0004 and MUY02_190120_eind2_0007, which we detected were empty and removed.

8. File extension: The corpus is distributed in FLAC format.
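A filename following this scheme can be decoded with a few lines of Python. The helper below is our own illustration (assuming the four underscore-separated parts shown in Table 2), not part of the corpus distribution.

```python
# Lookup tables for the one-letter codes described above.
GENDER = {"M": "male", "K": "female"}
DIALECT = {"U": "Suðuroy", "A": "Sandoy", "S": "Suðurstreymoy",
           "E": "Norðurstreymoy/Eysturoy", "V": "Vágar", "N": "Norðuroyggjar"}
AGE = {"Y": "15-35", "M": "36-60", "E": "61+"}

def parse_filename(name):
    """Decode e.g. 'MEY01_040319_rok0_0009.flac' into its eight fields."""
    stem, ext = name.rsplit(".", 1)
    speaker, date, material, segment = stem.split("_")
    return {
        "gender": GENDER[speaker[0]],
        "dialect": DIALECT[speaker[1]],
        "age_group": AGE[speaker[2]],
        "speaker_number": speaker[3:],
        "date": date,                  # day/month/year, e.g. 040319
        "reading_material": material,  # only present in RDATA1O / RDATA1OP
        "segment": segment,
        "extension": ext,
    }

print(parse_filename("MEY01_040319_rok0_0009.flac"))
```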
## 3 Acoustic Models
The development of the Ravnursson corpus allowed us to create acoustic models in four different ASR systems: WAV2VEC2, NeMo, Kaldi and PocketSphinx. In this section we discuss the details of how we created each of them.
### 3.1 WAV2VEC2 Model
WAV2VEC, released in 2019, is a convolutional neural network that takes raw audio as input and computes a general representation that can be fed to a speech recognition system (Schneider et al., 2019). In 2020, a second version, WAV2VEC2 (Baevski et al., 2020), was released. Based on WAV2VEC2, XLSR-53 (Conneau et al., 2020) was also released in 2020. XLSR-53 is an open-source model trained with more than 50k hours of unlabelled speech in 53 languages. It can be used to create acoustic models in any language through a fine-tuning step.

Using XLSR-53 as a starting point, we created an acoustic model suitable for Faroese (Hernandez Mena, 2022b), which is available under a Creative Commons licence (CC BY 4.0). The fine-tuning process for this model lasted 30 epochs.
### 3.2 NeMo Model
NeMo (Neural Modules) is a Python toolkit developed by NVIDIA for creating AI applications. It comes with extendable collections of pre-built modules for automatic speech recognition and natural language processing (Kuchaiev et al., 2019). One of the NeMo modules suitable for speech recognition is called Quartznet (Kriman et al., 2020), a convolutional model trained with Connectionist Temporal Classification (Graves, 2012), or CTC for short.
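The CTC objective mentioned above pairs with a simple "best path" decoding rule at inference time: take the most likely label per frame, collapse consecutive repeats, and drop the blank symbol. The following is a minimal sketch with a toy integer alphabet, our own illustration rather than NeMo code.

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Best-path CTC rule: collapse consecutive repeats, then remove blanks."""
    out = []
    prev = None
    for label in frame_labels:
        if label != prev:       # collapse consecutive repeats
            if label != blank:  # drop the blank symbol
                out.append(label)
        prev = label
    return out

# Frames: blank, 'a', 'a', blank, 'b', 'b' with toy alphabet {1: 'a', 2: 'b'}
print(ctc_greedy_decode([0, 1, 1, 0, 2, 2]))  # [1, 2]
# A blank between identical labels is what allows double letters:
print(ctc_greedy_decode([1, 0, 1]))  # [1, 1]
```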
In order to train an ASR model for Faroese in NeMo, we used the public checkpoint "QuartzNet15x5Base-En.nemo"${}^{3}$ as a starting point. This model was trained with more than 3k hours of English data in a Quartznet architecture during 600 epochs. Following the method of Huang et al. (2020), we fine-tuned the checkpoint with the data of the Ravnursson corpus for 236 epochs, obtaining a first checkpoint able to recognize Faroese. Then, we augmented the initial 100 hours of the training portion of the Ravnursson corpus to 300 hours through speed perturbation using two speed rates: 0.9 and 1.1. Finally, we fine-tuned our initial Faroese checkpoint with the augmented data for 163 epochs to obtain the final model (Hernandez Mena, 2022a), which is available under a Creative Commons licence (CC BY 4.0).
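The speed perturbation step can be illustrated with a toy resampler: playing the waveform at 0.9x and 1.1x changes its length (and pitch), tripling the data when both rates are added to the original. Real pipelines typically use sox or torchaudio; this pure-Python linear interpolation is only a sketch.

```python
def speed_perturb(samples, rate):
    """Return `samples` resampled to play `rate` times faster (linear interp)."""
    n_out = int(len(samples) / rate)
    out = []
    for i in range(n_out):
        pos = i * rate              # fractional position in the input
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + frac * (b - a))
    return out

wave = [0.0, 1.0, 0.0, -1.0] * 100   # toy 400-sample waveform
slow = speed_perturb(wave, 0.9)      # ~1.11x longer
fast = speed_perturb(wave, 1.1)      # ~0.91x shorter
print(len(slow), len(fast))          # 444 363
```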
---
${}^{3}$ Available at: https://catalog.ngc.nvidia.com/orgs/nvidia/models/nemospeechmodels/files

---
<table><tr><td/><td colspan="10">Points of articulation</td></tr><tr><td rowspan="20">Manners of articulation</td><td>Consonants</td><td>Bi-labial</td><td>Labiodental</td><td>Dental</td><td>Alveolar</td><td>Post-alveolar</td><td>Retroflex</td><td>Palatal</td><td>Velar</td><td>Glottal</td></tr><tr><td>Voiceless Stop</td><td>p</td><td/><td/><td>t</td><td/><td/><td/><td>k</td><td/></tr><tr><td>Voiced Stop</td><td>b</td><td/><td/><td>d</td><td/><td/><td/><td>g</td><td/></tr><tr><td>Voiceless Affricate</td><td/><td/><td/><td/><td>tS</td><td/><td/><td/><td/></tr><tr><td>Voiced Affricate</td><td/><td/><td/><td/><td>dZ</td><td/><td/><td/><td/></tr><tr><td>Voiceless Fricative</td><td/><td>f</td><td>5</td><td>S</td><td>S</td><td>Z</td><td/><td/><td>h</td></tr><tr><td>Voiced Fricative</td><td/><td>V</td><td>4</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Voiceless Nasal</td><td>M</td><td/><td/><td>$X$</td><td/><td/><td/><td>X</td><td/></tr><tr><td>Voiced Nasal</td><td>m</td><td/><td/><td>n</td><td/><td/><td/><td>$\mathrm{N}$</td><td/></tr><tr><td>Voiceless Lateral</td><td/><td/><td/><td>L</td><td/><td/><td/><td/><td/></tr><tr><td>Voiced Lateral</td><td/><td/><td/><td>1</td><td/><td/><td/><td/><td/></tr><tr><td>Approximants</td><td/><td/><td/><td>r</td><td/><td/><td>j</td><td>W</td><td/></tr><tr><td>Vowels</td><td/><td/><td/><td/><td>Front</td><td/><td>Central</td><td/><td>Back</td></tr><tr><td>Close</td><td/><td/><td/><td/><td>i y</td><td/><td>3</td><td/><td>U</td></tr><tr><td/><td/><td/><td/><td/><td/><td>IY</td><td/><td>U</td><td/></tr><tr><td>Close-mid</td><td/><td/><td/><td/><td>e</td><td>2</td><td/><td/><td>O</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>8</td><td/><td/></tr><tr><td>Open-mid</td><td/><td/><td/><td/><td/><td>E 9</td><td/><td/><td>O</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Open</td><td/><td/><td/><td/><td/><td>a</td><td/><td/><td/></tr></table>
Table 3: Phonetic Repertoire of Faroese
### 3.3 Kaldi Model
Kaldi (Povey et al., 2011), released in 2011, is a well-established toolkit for speech recognition written in C++. It builds on several paradigms: finite-state transducers (Allauzen et al., 2007), Hidden Markov Models (Juang and Rabiner, 1991) and Gaussian Mixture Models (Naeem et al., 2020), as well as neural networks (Rath et al., 2013).
Our "Kaldi Recipe for Faroese" (Hernández Mena, 2022) was created using the Ravnursson corpus as training data. The recipe produces models based on Hidden Markov Models (HMMs) as well as neural networks; specifically, the neural network is an LSTM or "Long Short-Term Memory" (Huang et al., 2017). The recipe requires a 3-gram language model (LM) for decoding, a 4-gram LM for re-scoring and a pronouncing dictionary, all of which are available in our "Faroese Language Models with Pronunciations" (Hernández Mena et al., 2022), discussed in later sections.
The recipe is available on Clarin.is${}^{4}$ under a Creative Commons licence (CC BY 4.0).
### 3.4 PocketSphinx Model
Sphinx is an older speech recognition system based on Hidden Markov Models, developed by Carnegie Mellon University in the late 1980s (Lee et al., 1990). Over time, successive versions of Sphinx have been released, up to version 4. At some point, version 2 evolved into PocketSphinx (Huggins-Daines et al., 2006). PocketSphinx was intended to be a lighter and faster version of Sphinx, but nowadays it has become the main version that can be used in real-time mode, even on ARM processors. PocketSphinx has long ceased to be a suitable system for research, but it still has an active community of users who choose it for real-time speech recognition on devices with limited computing power, such as the Raspberry Pi (Upton and Halfacree, 2014) or other ARM computers.
Our PocketSphinx models${}^{5}$, trained with the Ravnursson corpus, are suitable for the PocketSphinx Python library available in the PyPI repository${}^{6}$. With this library it is possible to perform both standard and real-time speech recognition,
---
${}^{5}$ Available at: https://github.com/CarlosDanielMena/RAVNURSSON_FAROESE_Models_100h
${}^{6}$ See: https://pypi.org/project/pocketsphinx/
${}^{4}$ See: http://hdl.handle.net/20.500.12537/305
---
<table><tr><td>SAMPA</td><td>$\mathbf{{IPA}}$</td><td>SAMPA</td><td>$\mathbf{{IPA}}$</td><td>SAMPA</td><td>$\mathbf{{IPA}}$</td><td>SAMPA</td><td>$\mathbf{{IPA}}$</td></tr><tr><td>p</td><td>${\mathrm{p}}^{\mathrm{h}}$</td><td>m</td><td>m</td><td>e</td><td>e</td><td>aJ</td><td>ai</td></tr><tr><td>b</td><td>b</td><td>M</td><td>$\dot{\mathrm{m}}$</td><td>E</td><td>E</td><td>aW</td><td>au</td></tr><tr><td>t</td><td>${t}^{h}$</td><td>n</td><td>n</td><td>a</td><td>a</td><td>OJ</td><td>oi</td></tr><tr><td>d</td><td>d</td><td>$X$</td><td>$\underset{ \circ }{\text{ n }}$</td><td>$y$</td><td>$y$</td><td>OW</td><td>ou</td></tr><tr><td>$\mathrm{k}$</td><td>${\mathrm{k}}^{\mathrm{h}}$</td><td>$\mathrm{N}$</td><td>IJ</td><td>Y</td><td>Y</td><td>3W</td><td>tu</td></tr><tr><td>g</td><td>g</td><td>$X$</td><td>ij</td><td>2</td><td>$\varnothing$</td><td>EW</td><td>eu</td></tr><tr><td>f</td><td>f</td><td>1</td><td>1</td><td>9</td><td>oe</td><td>9W</td><td>œu</td></tr><tr><td>V</td><td>V</td><td>L</td><td>1</td><td>U</td><td>U</td><td>9J</td><td>cei</td></tr><tr><td>S</td><td>S</td><td>j</td><td>j</td><td>O</td><td>0</td><td>4</td><td>0</td></tr><tr><td>S</td><td>f</td><td>W</td><td>W</td><td>O</td><td>0</td><td>5</td><td>0</td></tr><tr><td>Z</td><td>S</td><td>r</td><td>I</td><td>EA</td><td>ea</td><td>8</td><td>0</td></tr><tr><td>h</td><td>h</td><td>U</td><td>U</td><td>OA</td><td>0a</td><td>H</td><td>Pre-aspiration</td></tr><tr><td>tS</td><td>tʃ</td><td>$\mathrm{i}$</td><td>$\mathrm{i}$</td><td>UJ</td><td>$v\dot{1}$</td><td/><td/></tr><tr><td>dZ</td><td>q</td><td>I</td><td>I</td><td>EJ</td><td>ei</td><td/><td/></tr></table>
Table 4: SAMPA vs. IPA Equivalences.
as well as forced alignment, and to produce timestamps. The version of PocketSphinx available when we produced these models was version 4; a few weeks later, version 5 was released, but our models remain compatible.
## 4 Pronunciation Models
The pronunciation models discussed in this section are a set of pronouncing dictionaries included in our "Faroese Language Models with Pronunciations" (Hernández Mena et al., 2022), along with a number of language models that will be discussed in section 5. Most of the pronunciations come from the original BLARK, but for convenience we subdivide them into different dictionaries as follows:
- Central_Faroese.dic: It contains pronunciations of the variant of Faroese spoken in the capital.

- East_Faroese.dic: It contains pronunciations of the northwest variant of Faroese${}^{7}$.
- Ravnursson_Composite_Words.dic: It contains words with hyphens and/or underscores that are present in the Ravnursson Corpus. We keep them in a separate dictionary because this type of composite word can be problematic for a grapheme-to-phoneme (g2p) tool.
- BLARK.dic: It contains pronunciations of words that are present in the BLARK but not in any other dictionary of the set.
- FAROESE_ASR.dic: This dictionary is recommended for ASR experiments in Kaldi or any other phoneme-based ASR system. It is the union of Central_Faroese.dic, East_Faroese.dic and Ravnursson_Composite_Words.dic. It is important to clarify that the dictionary can contain words with multiple pronunciations, which is normal in Kaldi-like systems.
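A pronouncing dictionary with multiple pronunciations per word can be represented as a word-to-list-of-pronunciations map. The sketch below assumes a Kaldi-style "word phone phone ..." line format with one pronunciation per line; the sample entries and their SAMPA strings are hypothetical.

```python
from collections import defaultdict

def load_dic(lines):
    """Map each word to the list of its pronunciations (phone lists)."""
    lexicon = defaultdict(list)
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:
            word, phones = parts[0], parts[1:]
            lexicon[word].append(phones)  # same word may appear on several lines
    return lexicon

sample = [
    "góðan g OW 4 a n",   # hypothetical SAMPA entry
    "dag d EA",
    "dag d a",            # a second pronunciation of the same word
]
lex = load_dic(sample)
print(lex["dag"])  # [['d', 'EA'], ['d', 'a']]
```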
### 4.1 Phoneme Sets of Dictionaries
Table 3 shows the phonetic repertoire of Faroese using 42 SAMPA symbols. Each of these corresponds to an individual phoneme included in the pronouncing dictionaries described in section 4, except for the vowel /3/, which only occurs in diphthongs. The phonetic repertoire of Faroese includes the following 12 diphthongs: EA, OA, UJ, EJ, aJ, aW, OJ, OW, 3W, EW, 9W and 9J. Summing the 41 individual phonemes in Table 3, plus the 12 diphthongs, plus the seven phonemes with pre-aspiration (Hb, Hd, HdZ, Hg, Hp, Ht, HtS), we have a total of 60 phonemes; these are the 60 phonemes included in the dictionaries presented in section 4. For the equivalences between our SAMPA symbols and the IPA phonemes, see Table 4.

---

${}^{7}$ In the most recent dialect classification (Petersen, 2022), the islands in the northwest area are classified as a single dialect area. However, the pronunciation of the digraph "ei" differs between the westernmost islands and the more central and eastern islands of that dialect area. Therefore, the westernmost part of the dialect area is not included in this dictionary, which is why we have given it the name EAST; this also makes it possible to create WEST, NORTHERN and SOUTHERN dictionaries in the future.

---
## 5 Language Models
As mentioned in section 4, our "Faroese Language Models with Pronunciations" is a set of n-gram language models of distinct sizes that were created using the Faroese text provided in the BLARK, which includes text from newspaper articles, parliamentary speeches, books and more. The normalization process for that text included lowercasing everything, allowing only characters belonging to the Faroese alphabet, and removing punctuation marks.
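The normalization just described can be sketched as follows; the exact replacement policy (map disallowed characters to spaces, then collapse whitespace) and the listing of the Faroese alphabet below are our own assumptions, not the paper's script.

```python
# The 29 lowercase letters of the Faroese alphabet (no c, q, w, x, z).
FAROESE = "abdefghijklmnoprstuvyáæðíóúýø"

def normalize(text):
    """Lowercase; keep only Faroese letters; single spaces between words."""
    text = text.lower()
    kept = [ch if ch in FAROESE else " " for ch in text]
    return " ".join("".join(kept).split())

print(normalize('Góðan dag, "Føroyar"!'))  # góðan dag føroyar
```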
The resulting text is more than half a million lines long (approximately 106.3 MB). It was used to create a 3-gram language model (recommended for decoding) and a 4-gram language model (recommended for re-scoring) with the SRILM toolkit (Stolcke, 2002). Both the 3-gram and the 4-gram models come in pruned and unpruned versions. A 6-gram language model in binary format, suitable for ASR experiments with the NeMo toolkit, is also included; this model was created with KenLM (Heafield, 2011). It is important to mention that all the words present in any of the language models are also present in the pronouncing dictionaries for the east and central variants of Faroese (see section 4).
## 6 Results
Table 5 shows a comparison of the Word Error Rate (WER) obtained with the acoustic models presented in section 3. Results with PocketSphinx are not included because PocketSphinx is no longer competitive, and the models created with it are intended for real-time recognition on devices with low computing power, as explained in section 3.4. The NeMo results include the WER obtained using the 6-gram language model (LM) presented in section 5 as well as the WER obtained with no language model at all. The Kaldi results include the WER obtained with Hidden Markov Models (HMM) only and the WER obtained with the LSTM network. As can be seen, the best results are obtained with the WAV2VEC2 model.
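The WER metric reported in Table 5 is the word-level Levenshtein distance between reference and hypothesis, divided by the number of reference words. The following is a minimal sketch (our own implementation, not the scoring tool used in the paper).

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[-1][-1] / len(ref)

print(wer("eg eri úr føroyum", "eg eri í føroyum"))  # 1 substitution / 4 words = 0.25
```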
In light of our previous experience (Hernandez Mena et al., 2020; Mena et al., 2022), it is remarkable that the WER obtained with NeMo using a language model and the WER obtained with Kaldi using the LSTM are so close to each other, despite the relatively low amount of training data. This suggests that the training method described by Huang et al. (2020) is very effective.
On the other hand, Table 6 shows the results obtained with the recent Whisper system (Radford et al., 2022). Whisper is a transformer-based speech recognition system trained with 680k hours of transcribed data in multiple languages. It is also a multitask system able to perform multilingual speech recognition as well as speech translation and language identification. According to the original paper (Radford et al., 2022), the training set that Whisper uses for translation includes 46 hours of Faroese. Based on this, we decided to test Whisper in its distinct sizes, with no fine-tuning step, using the development and test portions of the Ravnursson corpus. As can be seen in Table 6, we obtained very poor WER results, revealing that Whisper needs to be fine-tuned before it can recognize Faroese data; unfortunately, this is beyond the scope of this paper, but it will be tackled in future work.
## 7 Conclusions
This work presents a major development in Faroese ASR. The Ravnursson project has produced a corpus of 109 hours of transcribed speech, and acoustic models for WAV2VEC2, NeMo, Kaldi and PocketSphinx have been developed. Furthermore, the project has also produced a set of n-gram language models of distinct sizes and pronunciation dictionaries for Faroese suitable for ASR experimentation. A quality assessment of the acoustic models is shown in Table 5, where the best result of 7.60% WER was achieved by the WAV2VEC2 model. Another interesting result is shown in Table 6, demonstrating that a fine-tuning step is needed for Faroese in the multilingual ASR system Whisper.
Thanks to this work, Faroese ASR is no longer under-developed. The project has lowered the technological threshold for implementing ASR solutions for Faroese in industry and for studying the Faroese language using ASR as a tool. With all the results made available under open licences, there is no good reason why Faroese ASR should not be included in standard language technology software in the future.
<table><tr><td>Corpus Portion</td><td>NeMo SP No LM</td><td>NeMo SP With LM</td><td>Kaldi HMM</td><td>Kaldi LSTM</td><td>WAV2VEC2 XLSR-53</td></tr><tr><td>Dev</td><td>20.51%</td><td>13.66%</td><td>20.60%</td><td>12.22%</td><td>5.56%</td></tr><tr><td>Test</td><td>22.81%</td><td>15.95%</td><td>23.44%</td><td>14.04%</td><td>7.60%</td></tr></table>
Table 5: WER Results.
<table><tr><td>Whisper Size</td><td>$\mathbf{{Dev}}$ WER</td><td>Test WER</td></tr><tr><td>Tiny</td><td>113.4%</td><td>116.7%</td></tr><tr><td>Base</td><td>112.61%</td><td>113.07%</td></tr><tr><td>Small</td><td>128.05%</td><td>132.64%</td></tr><tr><td>Medium</td><td>116.34%</td><td>119.3%</td></tr><tr><td>Large</td><td>105.93%</td><td>110.25%</td></tr></table>
Table 6: Whisper WER Results.
## Acknowledgments
The text has to be anonymous. The real acknowledgments will be revealed in the final version of the manuscript.
## References
Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In International Conference on Implementation and Application of Automata, pages 11-23. Springer.

Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449-12460.

Thomas Bilgram and Britt Keson. 1998. The construction of a tagged Danish corpus. In Proceedings of the 11th Nordic Conference of Computational Linguistics (NODALIDA 1998), pages 129-139.

Narayan Choudhary. 2021. LDC-IL: The Indian repository of resources for language technology. Language Resources and Evaluation, 55(3):855-867.

Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. 2020. Unsupervised cross-lingual representation learning for speech recognition. arXiv preprint arXiv:2006.13979.

Iben Nyholm Debess, Sandra Saxov Lamhauge, Annika Simonsen, Peter Juel Henrichsen, Egil Hofgaard, Uni Johannesen, Petur Markus Josenius Hammer, Gunnvør Hoydal Brimnes, Ebba Malena Debess Thomsen, and Beinta Poulsen. 2022. Basic language resource kit 1.0 for Faroese. OpenSLR.org.

Elisabeth D'Halleweyn, Jan Odijk, Lisanne Teunissen, and Catia Cucchiarini. 2006. The Dutch-Flemish HLT programme STEVIN: Essential speech and language technology resources. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06).

David Pérez Fernandez, Doaa Samy, and Juan de Dios Llorens Gonzalez. 2016. Spanish language technologies plan. In International Workshop on Future and Emerging Trends in Language Technology, pages 50-60. Springer.

Talutøkni Foundation. 2019. The Project Ravnur. Talutøkni Foundation.

Alex Graves. 2012. Connectionist temporal classification. In Supervised Sequence Labelling with Recurrent Neural Networks, pages 61-93. Springer.

Aditi Sharma Grover, Gerhard B Van Huyssteen, and Marthinus W Pretorius. 2011. The South African human language technology audit. Language Resources and Evaluation, 45(3):271-288.

Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197.

Pétur Helgason and Sjúrður Gullbein. 2005. Færøsk talesyntese: Rapport marts 2005. Nordisk sprogteknologi 2005 - Nordic Language Technology, page 51.

Carlos Daniel Hernandez Mena. 2022a. Acoustic model in Faroese: stt_fo_quartznet15x5_sp_ep163_100h. huggingface.co.

Carlos Daniel Hernandez Mena. 2022b. Acoustic model in Faroese: wav2vec2-large-xlsr-53-faroese-100h. huggingface.co.

Carlos Daniel Hernández Mena. 2022. Kaldi recipe for Faroese. Clarin.is.

Carlos Daniel Hernandez Mena, Albert Gatt, Andrea DeMarco, Claudia Borg, Lonneke van der Plas, Amanda Muscat, and Ian Padovani. 2020. MASRI-HEADSET: A Maltese corpus for speech recognition. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 6381-6388, Marseille, France. European Language Resources Association.

Carlos Daniel Hernández Mena, Sandra Saxov Lamhauge, Iben Nyholm Debess, and Annika Simonsen. 2022. Faroese language models with pronunciations. Clarin.is.

Carlos Daniel Hernández Mena and Annika Simonsen. 2022. Ravnursson Faroese speech and transcripts. Clarin.is.

Jocelyn Huang, Oleksii Kuchaiev, Patrick O'Neill, Vitaly Lavrukhin, Jason Li, Adriana Flores, Georg Kucsko, and Boris Ginsburg. 2020. Cross-language transfer learning, continuous learning, and domain adaptation for end-to-end automatic speech recognition. arXiv preprint arXiv:2005.04290.

Lu Huang, Ji Xu, Jiasong Sun, and Yi Yang. 2017. An improved residual LSTM architecture for acoustic modeling. In 2017 2nd International Conference on Computer and Communication Systems (ICCCS), pages 101-105. IEEE.

David Huggins-Daines, Mohit Kumar, Arthur Chan, Alan W Black, Mosur Ravishankar, and Alexander I Rudnicky. 2006. PocketSphinx: A free, real-time continuous speech recognition system for hand-held devices. In 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, volume 1, pages I-I. IEEE.

Biing Hwang Juang and Laurence R Rabiner. 1991. Hidden Markov models for speech recognition. Technometrics, 33(3):251-272.

Elsa Kania, Paul Triolo, and Graham Webster. 2018. Translation: Chinese government outlines AI ambitions through 2020. New America.

Britt Keson. 1998. Vejledning til det danske morfosyntaktisk taggede PAROLE-korpus. PAROLE report, Det Danske Sprog- og Litteraturselskab (DSL).

Steven Krauwer. 2003. The basic language resource kit (BLARK) as the first milestone for the language resources roadmap. In Proceedings of SPECOM, page 15.

Samuel Kriman, Stanislav Beliaev, Boris Ginsburg, Jocelyn Huang, Oleksii Kuchaiev, Vitaly Lavrukhin, Ryan Leary, Jason Li, and Yang Zhang. 2020. QuartzNet: Deep automatic speech recognition with 1D time-channel separable convolutions. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6124-6128. IEEE.

Oleksii Kuchaiev, Jason Li, Huyen Nguyen, Oleksii Hrinchuk, Ryan Leary, Boris Ginsburg, Samuel Kriman, Stanislav Beliaev, Vitaly Lavrukhin, Jack Cook, et al. 2019. NeMo: a toolkit for building AI applications using neural modules. arXiv preprint arXiv:1909.09577.

K-F Lee, H-W Hon, and Raj Reddy. 1990. An overview of the Sphinx speech recognition system. IEEE Transactions on Acoustics, Speech, and Signal Processing, 38(1):35-45.

Bente Maegaard, Mohammed Atiyya, Khalid Choukri, Steven Krauwer, Chafic Mokbel, and Mustafa Yaseen. 2008. MEDAR: Collaboration between European and Mediterranean Arabic partners to support the development of language technology for Arabic. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08).

Bente Maegaard, Khalid Choukri, Chafik Mokbel, and Mustafa Yaseen. 2005. Language technology for Arabic. NEMLAR, Center for Sprogteknologi, University of Copenhagen.

Bente Maegaard, Steven Krauwer, Khalid Choukri, and Lise Damsgaard Jørgensen. 2006. The BLARK concept and BLARK for Arabic. In LREC, pages 773-778.

Wes McKinney et al. 2010. Data structures for statistical computing in Python. In Proceedings of the 9th Python in Science Conference, pages 51-56. Austin, TX.

Einar Meister, Jaak Vilo, and Neeme Kahusk. 2010. National programme for Estonian language technology: a pre-final summary. In Human Language Technologies - The Baltic Perspective, pages 11-14. IOS Press.

Carlos Daniel Hernandez Mena, David Erik Mollberg, Michal Borský, and Jón Guðnason. 2022. Samrómur children: An Icelandic speech corpus. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 995-1002.

Saad Naeem, Majid Iqbal, Muhammad Saqib, Muhammad Saad, Muhammad Soban Raza, Zaid Ali, Naveed Akhtar, Mirza Omer Beg, Waseem Shahzad, and Muhhamad Umair Arshad. 2020. Subspace Gaussian mixture model for continuous Urdu speech recognition using Kaldi. In 2020 14th International Conference on Open Source Systems and Technologies (ICOSST), pages 1-7. IEEE.

Anna Björk Nikulásdóttir, Jón Guðnason, Anton Karl Ingason, Hrafn Loftsson, Eiríkur Rögnvaldsson, Einar Freyr Sigurðsson, and Steinthór Steingrímsson. 2020. Language technology programme for Icelandic 2019-2023. arXiv preprint arXiv:2003.09244.
|
| 728 |
+
|
| 729 |
+
Hjalmar P Petersen. 2022. Evidence for the modification of dialect classification of modern spoken faroese. European Journal of Scandinavian Studies, 52(1):43-58.
|
| 730 |
+
|
| 731 |
+
919
|
| 732 |
+
|
| 733 |
+
920
|
| 734 |
+
|
| 735 |
+
921
|
| 736 |
+
|
| 737 |
+
922
|
| 738 |
+
|
| 739 |
+
923
|
| 740 |
+
|
| 741 |
+
924
|
| 742 |
+
|
| 743 |
+
929
|
| 744 |
+
|
| 745 |
+
934
|
| 746 |
+
|
| 747 |
+
936
|
| 748 |
+
|
| 749 |
+
939
|
| 750 |
+
|
| 751 |
+
941
|
| 752 |
+
|
| 753 |
+
946
|
| 754 |
+
|
| 755 |
+
949
|
| 756 |
+
|
| 757 |
+
951
|
| 758 |
+
|
| 759 |
+
954
|
| 760 |
+
|
| 761 |
+
956
|
| 762 |
+
|
| 763 |
+
961
|
| 764 |
+
|
| 765 |
+
966
|
| 766 |
+
|
| 767 |
+
971
|
| 768 |
+
|
| 769 |
+
972 Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas 1026 973 Burget, Ondrej Glembek, Nagendra Goel, Mirko 1027 974 Hannemann, Petr Motlicek, Yanmin Qian, Petr 1028 975 Schwarz, et al. 2011. The kaldi speech recogni- 1029 tion toolkit. In IEEE 2011 workshop on automatic 1030 976 speech recognition and understanding, CONF. IEEE 977 Signal Processing Society. 1031 978 1032
|
| 770 |
+
|
| 771 |
+
979 Alec Radford, Jong Wook Kim, Tao Xu, Greg Brock- 1033 980 man, Christine McLeavey, and Ilya Sutskever. 2022. 1034 Robust speech recognition via large-scale weak su- 981 pervision. arXiv preprint arXiv:2212.04356. 1035 982 1036
|
| 772 |
+
|
| 773 |
+
983 Shakti P Rath, Daniel Povey, Karel Veselỳ, and Jan 1037 984 Cernocký. 2013. Improved feature processing for 1038 deep neural networks. In Interspeech, pages 109- 985 113. 1039 986 1040
|
| 774 |
+
|
| 775 |
+
987 Georg Rehm, Katrin Marheinecke, Stefanie Hegele, 1041 988 Stelios Piperidis, Kalina Bontcheva, Jan Hajič, 1042 Khalid Choukri, Andrejs Vasiljevs, Gerhard Back-
|
| 776 |
+
|
| 777 |
+
989 fried, Christoph Prinz, et al. 2020. The european 1043
|
| 778 |
+
|
| 779 |
+
990 language technology landscape in 2020: Language- 1044
|
| 780 |
+
|
| 781 |
+
centric and human-centric ai for cross-cultural com- 1045
|
| 782 |
+
|
| 783 |
+
munication in multilingual europe. arXiv preprint 1046
|
| 784 |
+
|
| 785 |
+
993 arXiv:2003.13833. 1047
|
| 786 |
+
|
| 787 |
+
Steffen Schneider, Alexei Baevski, Ronan Collobert, 1048
|
| 788 |
+
|
| 789 |
+
995 and Michael Auli. 2019. wav2vec: Unsupervised 1049
|
| 790 |
+
|
| 791 |
+
pre-training for speech recognition. arXiv preprint 1050
|
| 792 |
+
|
| 793 |
+
arXiv:1904.05862. 1051
|
| 794 |
+
|
| 795 |
+
998 Annika Simonsen, Sandra Saxov Lamhauge, Iben Ny- 1052
|
| 796 |
+
|
| 797 |
+
holm Debess, and Peter Juel Henrichsen. 2022. Cre- 1053
|
| 798 |
+
|
| 799 |
+
1000 ating a basic language resource kit for faroese. In 1054
|
| 800 |
+
|
| 801 |
+
Proceedings of the Thirteenth Language Resources 1055
|
| 802 |
+
|
| 803 |
+
and Evaluation Conference, pages 4637-4643. 1056
|
| 804 |
+
|
| 805 |
+
Andreas Stolcke. 2002. Srilm-an extensible language 1057
|
| 806 |
+
|
| 807 |
+
modeling toolkit. In Seventh international confer- 1058
|
| 808 |
+
|
| 809 |
+
1005 ence on spoken language processing. 1059
|
| 810 |
+
|
| 811 |
+
Eben Upton and Gareth Halfacree. 2014. Raspberry Pi 1060
|
| 812 |
+
|
| 813 |
+
user guide. John Wiley & Sons. 1061
|
| 814 |
+
|
| 815 |
+
1008 1062
|
| 816 |
+
|
| 817 |
+
Om Vikas. 2001. Language technology development 1063
|
| 818 |
+
|
| 819 |
+
1010 in india. Ministry of Information Technology. 1064
|
| 820 |
+
|
| 821 |
+
1065
|
| 822 |
+
|
| 823 |
+
1066
|
| 824 |
+
|
| 825 |
+
1013 1067
|
| 826 |
+
|
| 827 |
+
1014 1068
|
| 828 |
+
|
| 829 |
+
1015 1069
|
| 830 |
+
|
| 831 |
+
1016 1070
|
| 832 |
+
|
| 833 |
+
1017 1071
|
| 834 |
+
|
| 835 |
+
1018 1072
|
| 836 |
+
|
| 837 |
+
1019 1073
|
| 838 |
+
|
| 839 |
+
1020 1074
|
| 840 |
+
|
| 841 |
+
1021 1075
|
| 842 |
+
|
| 843 |
+
1022 1076
|
| 844 |
+
|
| 845 |
+
1023 1077
|
| 846 |
+
|
| 847 |
+
1024 1078
|
| 848 |
+
|
| 849 |
+
1025 1079
|
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,750 @@
§ ASR LANGUAGE RESOURCES FOR FAROESE

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
The aim of this work is to present a set of novel language resources for Faroese suitable for the field of automatic speech recognition, including: an ASR corpus comprising 109 hours of transcribed speech data; acoustic models for systems such as WAV2VEC2, NVIDIA NeMo, Kaldi and PocketSphinx; a set of n-gram language models; and a set of pronunciation dictionaries covering two different variants of Faroese. We also show comparison results between the distinct acoustic models presented here. All the resources described in this document are publicly available under Creative Commons licences.
§ 1 INTRODUCTION

As the digital world has become increasingly prominent and omnipresent in most human activities, the use of more and better language technologies has become a pressing need. For this reason, more and more governments are investing in the development of all kinds of linguistic resources that allow their citizens to be part of the new digital era, with all the benefits it entails. Language technology initiatives in the main regions of the world, such as Europe (Rehm et al., 2020; Nikulásdóttir et al., 2020; Meister et al., 2010; D'Halleweyn et al., 2006), India (Vikas, 2001; Choudhary, 2021), Africa (Grover et al., 2011), China (Kania et al., 2018), Saudi Arabia (Maegaard et al., 2008, 2005) and the Spanish-speaking countries (Fernandez et al., 2016), allow us to attest how important language technologies have become in recent times.
In synchrony with all the developments mentioned above, it is time to talk about the efforts made for the development of the Faroese language in the digital sphere. The most recent initiative in this regard is the Ravnur Project, founded in the Faroe Islands. Thanks to the resources generated and shared by Ravnur, it has been possible to develop all the language resources presented in this document.
§ 1.1 FAROESE
The Faroe Islands are a group of small islands located in the North Atlantic, halfway between Scotland, Iceland and Norway. They are an autonomous territory of the Kingdom of Denmark, with Faroese as the official language, spoken by around 54,000 people. There are four main dialect areas in the Faroe Islands: north, northwest, central and southern (Petersen, 2022). The Faroe Islands are a bilingual country with Danish as the second official language. While many native speakers of Faroese use Danish for university education or employment in Denmark, Faroese is spoken as a first language by most of the population and is used in all domains in the Faroe Islands, e.g. in education, the public sector and the church. The first and, to this date, only Faroese speech synthesis was created in 2005 (Helgason and Gullbein, 2005) by combining efforts from researchers at the University of Stockholm and the University of the Faroe Islands, and it is used by the visually impaired community. Currently, there is a huge demand for Faroese ASR solutions, needed by the deaf, visually impaired and dyslexic communities, and also by the general public, who wish to use their mother tongue when interacting with technology.
§ 1.2 THE RAVNUR PROJECT
The Faroese ASR research project, Ravnur, was assembled in 2019 (Foundation, 2019). The aim of the project was to create open-source resources that could be used to build automatic speech recognition (ASR) systems for Faroese. These resources would also be useful for creating other types of language technologies, as well as for linguistic research. The project was founded by public and private initiators and investors, including the Faroese government. The development team consisted of a project leader, a technical leader, three native-speaking junior linguists, an IT assistant, five university student assistants, as well as external advisors. The project concluded in the summer of 2022 with the publication of the Basic Language Resource Kit for Faroese (BLARK) (Simonsen et al., 2022; Debess et al., 2022).
§ 1.3 BASIC LANGUAGE RESOURCE KIT (BLARK) FOR FAROESE
A BLARK is defined as the minimal set of language resources needed to create language and speech technology for a language (Krauwer, 2003; Maegaard et al., 2006). A BLARK is ideally language independent, but because languages may have different requirements, the contents of a BLARK may vary in some respects from language to language.

As Ravnur was an ASR project, the focus was on collecting good-quality recordings of Faroese and creating a transcription corpus and a pronunciation dictionary. During the course of the project, Ravnur collected 135 hours of recordings of 433 speakers in total (249 female speakers and 184 male speakers) reading text of various genres, such as news, blogs, Wikipedia, law texts, GPS commands and word lists. The participants self-reported their gender, native language, dialect and age, which varies between 15 and 83 years old. The recordings were made on TASCAM DR-40 Linear PCM audio recorders using the built-in stereo microphones, in 16-bit WAVE with a sample rate of 48 kHz. All recordings have been manually orthographically transcribed, while part of the speech corpus has been phonetically transcribed. The transcriptions were made by the university student assistants and the three Faroese linguists working for the project. All words that occur in the recordings were put in a pronunciation dictionary. The dictionary includes phonetic transcriptions written in SAMPA and PAROLE PoS-tags (Bilgram and Keson, 1998; Keson, 1998)¹.

As can be seen, the BLARK developed by Ravnur is the starting point of the novel machine learning models presented in this work.
§ 2 THE RAVNURSSON CORPUS
Ravnursson² (Hernández Mena and Simonsen, 2022) is an ASR corpus with a length of 109 hours, extracted from the BLARK described in Section 1.3. Unlike the original BLARK, Ravnursson only contains the speech files along with their respective transcriptions. The main characteristics of the corpus are the following:

* The audio files in this corpus are distributed in FLAC format at 16 kHz @ 16 bit, mono.

* The corpus contains 71,949 speech files from 433 speakers.

* The corpus is split into train, dev and test portions. The lengths of the portions are: train = 100h08m, dev = 4h30m, test = 4h30m.

* The development and test portions have exactly 10 male and 10 female speakers each, and both portions have exactly the same size in hours.

* Due to the limited number of prompts to read, only 39,945 of the 71,949 prompts in the whole corpus are unique. In other words, 44.48% of the prompts in the corpus are repeated at least once.

* Despite the repeated prompts in the corpus, the development and test portions do not share speakers with each other or with the training set.
§ 2.1 ANALYSIS OF THE REPEATED PROMPTS
As the number of reading prompts for the corpus was limited during the recording process, the common denominator in the Ravnursson corpus is that one prompt is read by more than one speaker. This is relevant because it is common practice in ASR to create a language model using the prompts found in the train portion of a corpus. That is not recommended for the Ravnursson corpus, as several prompts are shared by all the portions, which would introduce an important bias into the language modeling task.

Table 1 shows some statistics about the repeated prompts across all the portions of the corpus. The table is to be understood as follows: for example, the first row indicates that there is a total of 71,949 reading prompts in the whole corpus; 39,945 of those are unique and 32,004 are repeated at least once. Therefore, a total of 44.48% of the prompts in the whole corpus are repeated at least once. The same applies to the rest of the rows in Table 1.

¹ Both the Faroese SAMPA alphabet (sometimes called FARSAMPA) and the PAROLE PoS-tags were created by Ravnur for the BLARK.

² As a matter of fact, the name Ravnursson comes from Ravnur (a tribute to the Ravnur Project) and the suffix "son", which in Icelandic means "son of". Therefore, the name "Ravnursson" means "The (Icelandic) son of Ravnur". The double "ss" is just for aesthetics.

| Corpus Portion | Total Prompts | Unique Prompts | Repeated Prompts | % |
|---|---|---|---|---|
| All | 71,949 | 39,945 | 32,004 | 44.48% |
| Train | 65,616 | 38,646 | 26,970 | 41.1% |
| Test | 3,002 | 2,887 | 115 | 3.83% |
| Dev | 3,331 | 3,302 | 29 | 0.87% |

Table 1: Analysis of Repeated Prompts.
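The counts in Table 1 can be reproduced directly from the corpus metadata. The sketch below uses an invented toy prompt list, and infers from the table's arithmetic that "repeated" counts every occurrence beyond the first of each unique prompt (total minus unique):

```python
from collections import Counter

def prompt_stats(prompts):
    # Mirrors Table 1: "repeated" is the number of prompt occurrences
    # beyond the first one of each unique prompt (total - unique).
    total = len(prompts)
    unique = len(Counter(prompts))
    repeated = total - unique
    return total, unique, repeated, round(100.0 * repeated / total, 2)

# Toy example: 5 prompt readings, one prompt read twice.
print(prompt_stats(["hallo", "hallo", "gott kvold", "takk", "ja"]))
# (5, 4, 1, 20.0)
```

Applied to the full corpus, this yields the All row of Table 1: 71,949 - 39,945 = 32,004 repeated readings, i.e. 44.48%.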
§ 2.2 CORPUS ORGANIZATION

The "speech" directory contains all the speech files of the corpus. The files in the speech folder are divided into three directories: train, dev and test. The train portion is sub-divided into three types of recordings: RDATA1O, RDATA1OP and RDATA2; this is due to the organization of the recordings in the original BLARK, where the recordings are divided into Rdata1 and Rdata2.

One main difference between Rdata1 and Rdata2 is that the reading environment for Rdata2 was controlled by a software tool called "PushPrompt", which is included in the original BLARK (Simonsen et al., 2022). Another difference is that in Rdata1 some transcriptions labelled at the phoneme level are available. The audio files in the speech directory of the Ravnursson corpus are divided into the folders RDATA1O, where "O" stands for "Orthographic", and RDATA1OP, where "O" stands for Orthographic and "P" for Phonetic. These categories are just a reminiscence of the original BLARK, but this does not imply that the Ravnursson corpus comes with transcriptions at the phonetic level. In the case of the dev and test portions, the data come only from Rdata2, which does not have labels at the phonetic level in the original BLARK.
§ 2.3 THE METADATA FILE
The metadata file is a tab-separated values (TSV) file containing all the relevant information of the corpus. The file can be read using the Pandas (McKinney et al., 2010) library in Python, and it comprises the following 12 columns:

1. id: The filename without the extension ".flac".

2. speaker_id: The filename without the segment number.

3. filename: Full filename including the extension ".flac".

4. sentence_norm: The normalized transcription: no punctuation marks, no digits, lower-case letters, a single space between words.

5. gender: The gender of the speaker: male or female.

6. age: The age range of the speaker: 15-35, 36-60, or 61+ years old.

7. native_language: "Faroese" in all cases.

8. dialect: The speaker's dialect.

9. created_at: The date when the audio file was recorded.

10. duration: Duration of the speech file in seconds.

11. sample_rate: 16 kHz in all cases.

12. status: The corpus portion: train, test or dev.
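The paper points to Pandas for loading this file; as a dependency-free illustration, the same can be done with the standard csv module. The stand-in TSV below is invented, with the 12 columns listed above:

```python
import csv
import io

# Invented stand-in for the real metadata TSV (columns as listed above).
SAMPLE = (
    "id\tspeaker_id\tfilename\tsentence_norm\tgender\tage\tnative_language"
    "\tdialect\tcreated_at\tduration\tsample_rate\tstatus\n"
    "MEY01_040319_rok0_0009\tMEY01\tMEY01_040319_rok0_0009.flac\tgott kvold"
    "\tmale\t15-35\tFaroese\tE\t04/03/19\t4.2\t16000\ttrain\n"
)

def load_metadata(stream):
    """Read the corpus metadata TSV into a list of dicts, one per audio file."""
    return list(csv.DictReader(stream, delimiter="\t"))

rows = load_metadata(io.StringIO(SAMPLE))
train = [r for r in rows if r["status"] == "train"]
```

With the real file, replace the StringIO with `open("metadata.tsv")` (filename assumed; check the corpus documentation for the actual name).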
§ 2.4 CODIFICATION OF THE AUDIO FILENAMES
In the Ravnursson corpus, the filenames of the audio files encode relevant information about the respective speech files. The first row of Table 2 shows a typical audio filename. The second row enumerates the fields of information encoded in the filename, and the third row shows the same filename of row one, broken down into the eight parts specified in the second row.

MEY01_040319_rok0_0009.flac

| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|
| M | E | Y | 01 | 040319 | rok0 | 0009 | .flac |

Table 2: Audio Filename Format.

The explanation of the information encoded in the filename is as follows:

1. Gender of the Speaker: M for male or K for female.

2. Dialect Group: U for Suðuroy, A for Sandoy, S for Suðurstreymoy, E for Norðurstreymoy/Eysturoy (exclusive of Eiði, Gjógv and Funningur), V for Vágar and N for Norðuroyggjar (inclusive of Eiði, Gjógv and Funningur).

3. Age Group: Y for "Younger", between 15-35 years old; M for "Middle-aged", between 36-60 years old; and E for "Elderly", 61 years old or older.

4. Number of Speaker in a Group: a number that always consists of two digits: 01, 02, 03, etc. The first speaker in a group with the same gender, dialect group and age group (e.g. MEY) gets the number 01. The next speaker in the same group gets the number 02 (and their ID is therefore MEY02).

5. Date: The date when the speech was recorded (day/month/year).

6. Type of reading material: This code can only be found in speech files at RDATA1O and RDATA1OP. For more information about the types of reading material, please see the documentation of the original BLARK and its directory "readingtexts_1.0".

7. Segment Number: In the original BLARK, a recording session is distributed as one audio file per speaker, which can be very long from the ASR perspective. The audio files are therefore subdivided into segments of around 10 seconds to fit most modern ASR engines. The numbering is continuous for each speaker; the only exception is the files MUY01_180519_set4_0004 and MUY02_190120_eind2_0007, which we detected to be empty and removed.

8. File extension: The corpus is distributed in FLAC format.
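The eight-field scheme above can be unpacked mechanically. The parser below is a hypothetical helper (not part of the corpus tooling), assuming the reading-material code (field 6) is simply absent outside RDATA1O/RDATA1OP:

```python
import re

# Decode a Ravnursson filename such as MEY01_040319_rok0_0009.flac
# into the eight documented fields; "material" is optional (field 6).
PATTERN = re.compile(
    r"(?P<gender>[MK])(?P<dialect>[UASEVN])(?P<age>[YME])(?P<speaker_no>\d{2})"
    r"_(?P<date>\d{6})(?:_(?P<material>[a-z0-9]+))?_(?P<segment>\d{4})\.flac"
)

def parse_filename(name):
    """Return a dict of the encoded fields, or None if the name does not match."""
    m = PATTERN.fullmatch(name)
    return m.groupdict() if m else None

info = parse_filename("MEY01_040319_rok0_0009.flac")
# info["gender"] == "M", info["material"] == "rok0", info["segment"] == "0009"
```

A dev/test file without a material code, e.g. "KNM02_181220_0001.flac" (invented example), parses with `material` set to None.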
§ 3 ACOUSTIC MODELS
The development of the Ravnursson corpus allowed us to create acoustic models in four different ASR systems: WAV2VEC2, NeMo, Kaldi and PocketSphinx. In this section we discuss the details of how we created each of them.
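Such models are conventionally compared by word error rate (WER). As a reference point, here is a minimal WER implementation over word tokens (illustrative only, not the evaluation code used for the models in this paper):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / len(reference)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic-programming edit distance over word tokens.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,            # deletion
                      d[j - 1] + 1,        # insertion
                      prev + (r != h))     # substitution / match
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)

print(wer("eg eri her", "eg er her"))  # one substitution out of three words
```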
§ 3.1 WAV2VEC2 MODEL
WAV2VEC, released in 2019, is a convolutional neural network that takes raw audio as input and computes a general representation that can be input to a speech recognition system (Schneider et al., 2019). In 2020, a second version, WAV2VEC2 (Baevski et al., 2020), was released. Based on WAV2VEC2, XLSR-53 (Conneau et al., 2020) was also released in 2020. XLSR-53 is an open-source model trained with more than 50k hours of unlabelled speech in 53 languages. It can be used to create acoustic models in any language through a fine-tuning step.

Using XLSR-53 as a starting point, we created an acoustic model suitable for Faroese (Hernandez Mena, 2022b), which is available under a Creative Commons licence (CC BY 4.0). The fine-tuning process for this model lasted 30 epochs.
§ 3.2 NEMO MODEL

NeMo (Neural Modules) is a Python toolkit developed by NVIDIA for creating AI applications. It comes with extendable collections of pre-built modules for automatic speech recognition and natural language processing (Kuchaiev et al., 2019). One of the NeMo modules suitable for speech recognition is called Quartznet (Kriman et al., 2020), which is a convolutional model trained with Connectionist Temporal Classification (Graves, 2012), or CTC for short.
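A CTC model emits a per-frame label distribution that includes a blank symbol; greedy decoding collapses repeated labels and then drops blanks. A minimal sketch, with an invented frame-label sequence:

```python
def ctc_greedy_decode(frame_labels, blank="_"):
    """Collapse repeated frame labels, then drop blanks (greedy CTC decoding)."""
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return "".join(out)

# Per-frame argmax labels for a toy utterance; "_" is the CTC blank.
print(ctc_greedy_decode(list("__jjaa_a__")))  # -> "jaa"
```

Note how the blank between the two runs of "a" keeps them as two separate output symbols, which is how CTC represents doubled letters.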
In order to train an ASR model for Faroese in NeMo, we used the public checkpoint "QuartzNet15x5Base-En.nemo"³ as a starting point. This model was trained with more than 3k hours of English data in a Quartznet architecture during 600 epochs. Based on a work by Huang et al., we fine-tuned the checkpoint with the data of the Ravnursson corpus for 236 epochs, obtaining a first checkpoint able to recognize Faroese. Then, we augmented the initial 100 hours of the training portion of the Ravnursson corpus to 300 hours through speed perturbation, using two speed rates: 0.9 and 1.1. Finally, we fine-tuned our initial checkpoint for Faroese with the augmented data for 163 epochs to obtain a final model (Hernandez Mena, 2022a), which is available under a Creative Commons licence (CC BY 4.0).

³ Available at: https://catalog.ngc.nvidia.com/orgs/nvidia/models/nemospeechmodels/files
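Speed perturbation at rates 0.9 and 1.1 resamples each waveform so it plays slower or faster, which is how the 100 training hours above become roughly 300. A dependency-free linear-interpolation sketch (real pipelines typically use sox or torchaudio; this is illustrative only):

```python
def speed_perturb(samples, rate):
    """Resample a waveform by `rate`: rate < 1 slows it down (longer output),
    rate > 1 speeds it up (shorter output)."""
    n_out = int(len(samples) / rate)
    out = []
    for i in range(n_out):
        pos = i * rate                       # fractional read position
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

wave = [0.0, 1.0, 0.0, -1.0] * 100   # toy 400-sample waveform
slow = speed_perturb(wave, 0.9)      # ~444 samples
fast = speed_perturb(wave, 1.1)      # ~363 samples
```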
|
| 356 |
+
|
| 357 |
+
| Manner of articulation | Bilabial | Labiodental | Dental | Alveolar | Post-alveolar | Retroflex | Palatal | Velar | Glottal |
|---|---|---|---|---|---|---|---|---|---|
| Voiceless stop | p | | | t | | | | k | |
| Voiced stop | b | | | d | | | | g | |
| Voiceless affricate | | | | | tS | | | | |
| Voiced affricate | | | | | dZ | | | | |
| Voiceless fricative | | f | 5 | s | S | Z | | | h |
| Voiced fricative | | V | 4 | | | | | | |
| Voiceless nasal | M | | | | | | | | |
| Voiced nasal | m | | | n | | | | N | |
| Voiceless lateral | | | | L | | | | | |
| Voiced lateral | | | | l | | | | | |
| Approximant | | | | r | | | j | W | |

| Vowels | Front | Central | Back |
|---|---|---|---|
| Close | i, y | 3 | U |
| | I, Y | | U |
| Close-mid | e, 2 | 8 | O |
| Open-mid | E, 9 | | O |
| Open | a | | |

Table 3: Phonetic Repertoire of Faroese
§ 3.3 KALDI MODEL
Kaldi (Povey et al., 2011), released in 2011, is a well-established toolkit for speech recognition written in C++, which is based on distinct paradigms such as finite-state transducers (Allauzen et al., 2007), Hidden Markov Models (Juang and Rabiner, 1991), and Gaussian Mixture Models (Naeem et al., 2020), as well as neural networks (Rath et al., 2013).
Our "Kaldi Recipe for Faroese" (Hernández Mena, 2022) was created using the Ravnursson corpus as training data. The recipe produces models based on Hidden Markov Models (HMMs) as well as neural networks; specifically, the neural network is an LSTM, or "Long Short-Term Memory" (Huang et al., 2017). The recipe requires a 3-gram language model (LM) for decoding, a 4-gram LM for re-scoring, and a pronouncing dictionary; these elements are available in our "Faroese Language Models with Pronunciations" (Hernández Mena et al., 2022), discussed in further sections.
The recipe is available on Clarin.is${}^{4}$ under a Creative Commons CC BY 4.0 licence.
§ 3.4 POCKETSPHINX MODEL
Sphinx is an early speech recognition system based on Hidden Markov Models, developed by Carnegie Mellon University in the late 80's (Lee et al., 1990). Over time, successive versions of Sphinx were released, up to version 4. At some point, version 2 evolved into PocketSphinx (Huggins-Daines et al., 2006). PocketSphinx was conceived as a lighter and faster version of Sphinx, but nowadays it has become the main version that can be used in real-time mode, even on ARM processors. PocketSphinx has long ceased to be a suitable system for research; nevertheless, it still has an active community of users that choose it as a real-time speech recognition system for devices with modest computing power, such as the Raspberry Pi (Upton and Halfacree, 2014) or other ARM computers.
Our PocketSphinx models${}^{5}$, trained with the Ravnursson corpus, are suitable for the PocketSphinx Python library available in the PyPI repository${}^{6}$. With this library it is possible to perform both standard and real-time speech recognition,
${}^{5}$ Available at: https://github.com/CarlosDanielMena/RAVNURSSON_FAROESE_Models_100h
${}^{6}$ See: https://pypi.org/project/pocketsphinx/
${}^{4}$ See: http://hdl.handle.net/20.500.12537/305
| SAMPA | IPA | SAMPA | IPA | SAMPA | IPA | SAMPA | IPA |
|---|---|---|---|---|---|---|---|
| p | pʰ | m | m | e | e | aJ | ai |
| b | b | M | m̥ | E | ɛ | aW | au |
| t | tʰ | n | n | a | a | OJ | oi |
| d | d | X | n̥ | y | y | OW | ou |
| k | kʰ | N | ŋ | Y | ʏ | 3W | ʉu |
| g | g | X | ŋ̊ | 2 | ø | EW | eu |
| f | f | l | l | 9 | œ | 9W | œu |
| V | v | L | l̥ | U | ʊ | 9J | œi |
| s | s | j | j | o | o | 4 | ð |
| S | ʃ | W | w | O | ɔ | 5 | θ |
| Z | ʒ | r | ɹ | EA | ea | 8 | ə |
| h | h | U | ʊ | OA | ɔa | H | Pre-aspiration |
| tS | tʃ | i | i | UJ | ʊi | | |
| dZ | dʒ | I | ɪ | EJ | ei | | |

Table 4: SAMPA vs. IPA Equivalences.
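A mapping like the one in Table 4 can be applied programmatically. The sketch below converts a space-separated SAMPA transcription to IPA with a plain dictionary lookup; only a handful of the correspondences are included, and the example input is invented.

```python
# Convert a space-separated SAMPA transcription to IPA using a lookup
# table. Only a handful of Table 4's correspondences are included here.

SAMPA_TO_IPA = {
    "p": "pʰ", "b": "b", "t": "tʰ", "d": "d", "k": "kʰ", "g": "g",
    "tS": "tʃ", "dZ": "dʒ", "S": "ʃ", "a": "a", "E": "ɛ",
    "aJ": "ai", "OW": "ou", "EA": "ea",
}

def sampa_to_ipa(transcription):
    """Translate each SAMPA symbol; unknown symbols are kept as-is."""
    return " ".join(SAMPA_TO_IPA.get(p, p) for p in transcription.split())

print(sampa_to_ipa("t a k"))  # each symbol replaced independently
```

A per-symbol lookup like this works because the SAMPA inventory is unambiguous: multi-character symbols such as `tS` are single tokens, so no longest-match parsing is needed once the transcription is space-separated.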
forced alignment, and the production of timestamps. The version of PocketSphinx available when we produced these models was version 4. A few weeks later, version 5 was released, but our models remain compatible.
§ 4 PRONUNCIATION MODELS
The pronunciation models that we discuss in this section are a set of pronouncing dictionaries included in our "Faroese Language Models with Pronunciations" (Hernández Mena et al., 2022), along with a number of language models that will be discussed in section 5. Most of the pronunciations come from the original BLARK, but for convenience, we subdivide them into different dictionaries as follows:
* Central_Faroese.dic: It contains pronunciations of the variant of Faroese which is spoken in the capital.
* East_Faroese.dic: It contains pronunciations of the northwest variant of Faroese${}^{7}$.
* Ravnursson_Composite_Words.dic: It contains words with hyphens and/or underscores that are present in the Ravnursson Corpus. We keep them in a separate dictionary because this type of composite word can be problematic for a grapheme-to-phoneme (g2p) tool.
* BLARK.dic: It contains pronunciations of words that are present in the BLARK but not in any other dictionary of the set.
* FAROESE_ASR.dic: This dictionary is recommended for ASR experiments in Kaldi or any other phoneme-based ASR system. It is the union of Central_Faroese.dic, East_Faroese.dic and Ravnursson_Composite_Words.dic. It is important to clarify that this dictionary can contain words with multiple pronunciations, which is normal in Kaldi-like systems.
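A dictionary with multiple pronunciations per word is typically stored one pronunciation per line, as in a Kaldi lexicon. The sketch below parses such lines into a word-to-pronunciations map; the entries shown are invented for illustration, not taken from the actual dictionaries.

```python
# Parse a Kaldi-style pronouncing dictionary: one "WORD PH1 PH2 ..."
# entry per line, where a word may appear on several lines (one per
# pronunciation variant). The example entries are invented.

def load_lexicon(lines):
    lexicon = {}
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        word, phones = parts[0], parts[1:]
        lexicon.setdefault(word, []).append(phones)
    return lexicon

entries = [
    "takk t a k",        # hypothetical pronunciation
    "takk t a Hk",       # a second, hypothetical variant
    "dagur d EA v U r",  # hypothetical pronunciation
]
lex = load_lexicon(entries)
print(len(lex["takk"]))  # two pronunciation variants
```

Keeping all variants in a list per word is what allows a Kaldi-like decoder to consider every pronunciation during alignment and decoding.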
§ 4.1 PHONEME SETS OF DICTIONARIES
Table 3 shows the phonetic repertoire of Faroese using 42 SAMPA symbols. Each of these corresponds to an individual phoneme that is included in the pronouncing dictionaries described in section 4, except for the vowel /3/, which only occurs in diphthongs. The phonetic repertoire of Faroese includes the following 12 diphthongs: EA, OA, UJ, EJ, aJ, aW, OJ, OW, 3W, EW, 9W and 9J. Summing the 41 remaining individual phonemes in Table 3, plus the 12 diphthongs, plus the seven phonemes with pre-aspiration (Hb, Hd, HdZ, Hg, Hp, Ht, HtS), we have a total of 60 phonemes. These are the 60 phonemes included in the dictionaries presented in section 4. For the equivalences between our SAMPA symbols and the IPA phonemes, see Table 4.

${}^{7}$ In the most recent dialect classification (Petersen, 2022), the islands in the northwest area are classified as a single dialect area. However, the pronunciation of the digraph ei differs between the westernmost islands and the more central and eastern islands of that dialect area. The westernmost part of the dialect area is therefore not included in this dictionary, which is why we have named it EAST. This also makes it possible to create WEST, NORTHERN and SOUTHERN dictionaries in the future.
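The inventory arithmetic above (41 base phonemes, 12 diphthongs, 7 pre-aspirated phonemes, 60 in total) can be checked with a short script; the diphthong list and pre-aspirated stops are taken from the text, while the base phonemes are represented only by their count.

```python
# Check the phoneme inventory arithmetic from the text:
# 41 base phonemes + 12 diphthongs + 7 pre-aspirated stops = 60.

diphthongs = ["EA", "OA", "UJ", "EJ", "aJ", "aW",
              "OJ", "OW", "3W", "EW", "9W", "9J"]

# Pre-aspirated phonemes are formed by prefixing H to each stop/affricate.
stops = ["b", "d", "dZ", "g", "p", "t", "tS"]
pre_aspirated = ["H" + s for s in stops]  # Hb, Hd, HdZ, Hg, Hp, Ht, HtS

n_base = 41  # individual phonemes from Table 3, excluding /3/
total = n_base + len(diphthongs) + len(pre_aspirated)
print(total)  # 60
```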
§ 5 LANGUAGE MODELS
As mentioned in section 4, our "Faroese Language Models with Pronunciations" is a set of n-gram language models of distinct sizes that were created using the Faroese text provided in the BLARK, which includes text from newspaper articles, parliamentary speeches, books and more. The normalization process for that text consisted of lowercasing everything, allowing only characters belonging to the Faroese alphabet, and removing punctuation marks.

The resulting text has a length of more than half a million lines (approximately 106.3 MB). The text was used to create a 3-gram language model (recommended for decoding) and a 4-gram language model (recommended for re-scoring) with the SRILM toolkit (Stolcke, 2002). Both the 3-gram and 4-gram models come in pruned and unpruned versions. A 6-gram language model in binary format, suitable for ASR experiments with the NeMo toolkit, is also included; this model was created using KenLM (Heafield, 2011). It is important to mention that all the words present in any of the language models are also present in the pronouncing dictionaries for the east and central variants of Faroese (see section 4).
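The normalization steps described above (lowercasing, restricting to the Faroese alphabet, stripping punctuation) can be sketched as follows; the alphabet string is our assumption about the Faroese character set, not quoted from the paper.

```python
# Sketch of the text normalization for LM training: lowercase, then
# keep only letters of the (assumed) Faroese alphabet plus spaces.

FAROESE_ALPHABET = set("aábdðefghiíjklmnoóprstuúvyýæø")  # assumed set

def normalize(line):
    line = line.lower()
    kept = [c if c in FAROESE_ALPHABET or c.isspace() else " "
            for c in line]
    # collapse runs of whitespace introduced by removed characters
    return " ".join("".join(kept).split())

print(normalize("Góðan dagin, Føroyar!"))
```

Replacing disallowed characters with spaces (rather than deleting them) avoids accidentally gluing two neighbouring words together when punctuation is removed.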
§ 6 RESULTS
Table 5 shows a comparison of the Word Error Rate (WER) obtained with the acoustic models presented in section 3. Results with PocketSphinx are not included because PocketSphinx is no longer competitive and the models created with it are intended for real-time recognition on devices with low computing power, as explained in section 3.4. The NeMo results include the WER obtained using the 6-gram language model (LM) presented in section 5, as well as the WER obtained with no language model at all. The Kaldi results include the WER obtained with Hidden Markov Models (HMM) only and the WER obtained with the LSTM network. As can be seen, the best results are obtained with the WAV2VEC2 model.
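The WER values compared here are word-level edit distance normalised by the reference length; a minimal implementation, independent of any of the toolkits used in the paper:

```python
# Word Error Rate: (substitutions + insertions + deletions) / reference
# length, computed with a standard dynamic-programming edit distance.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("eg eri her", "eg er her"))  # one substitution in three words
```

Because insertions are counted against the reference length, WER can exceed 100% when the hypothesis contains many spurious words, as in the Whisper results in Table 6.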
According to our previous experience (Hernandez Mena et al., 2020; Mena et al., 2022), it is remarkable that the WER obtained with NeMo using a language model and the WER obtained with Kaldi using the LSTM are so close to each other, despite the relatively low amount of training data. This suggests that the training method described by Huang et al. is highly effective.
On the other hand, Table 6 shows the results obtained with the more recent Whisper system (Radford et al., 2022). Whisper is a transformer-based speech recognition system trained with 680k hours of transcribed data in multiple languages. Whisper is also a multitask system able to perform multilingual speech recognition as well as speech translation and language identification. According to the original paper (Radford et al., 2022), the training set that Whisper uses for translation includes 46 hours of Faroese. Based on this, we decided to test Whisper in its distinct sizes, with no fine-tuning step, using the development and test portions of the Ravnursson corpus. As can be seen in Table 6, we obtained very poor WER results, revealing that Whisper needs to be fine-tuned before it can recognize Faroese data; unfortunately, this is beyond the scope of this paper, but we will tackle it in future work.
§ 7 CONCLUSIONS
A major development of Faroese ASR is presented in this work. The Ravnursson project has produced a corpus of 109 hours of transcribed speech, and acoustic models for WAV2VEC2, NeMo, Kaldi and PocketSphinx have been developed. Furthermore, the project has also produced a set of n-gram language models of distinct sizes and pronunciation dictionaries in Faroese suitable for ASR experimentation. Quality assessments of the acoustic models are shown in Table 5, where the best result of 7.60% WER was achieved by the WAV2VEC2 model. Another interesting result is shown in Table 6, demonstrating that a fine-tuning step is needed for Faroese in the multilingual ASR system Whisper.
Thanks to this work, Faroese ASR is no longer under-developed. The project has lowered the technological threshold for implementing ASR solutions for Faroese in industry and for studying the Faroese language using ASR as a tool. With all the results made available under open licenses, there is no good reason why Faroese ASR should not be included in standard language technology software in the future.
| Corpus Portion | NeMo SP, No LM | NeMo SP, With LM | Kaldi HMM | Kaldi LSTM | WAV2VEC2 XLSR-53 |
|---|---|---|---|---|---|
| Dev | 20.51% | 13.66% | 20.60% | 12.22% | 5.56% |
| Test | 22.81% | 15.95% | 23.44% | 14.04% | 7.60% |

Table 5: WER Results.
| Whisper Size | Dev WER | Test WER |
|---|---|---|
| Tiny | 113.4% | 116.7% |
| Base | 112.61% | 113.07% |
| Small | 128.05% | 132.64% |
| Medium | 116.34% | 119.3% |
| Large | 105.93% | 110.25% |

Table 6: Whisper WER Results.
§ ACKNOWLEDGMENTS
The text has to be anonymous. The real acknowledgments will be revealed in the final version of the manuscript.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,747 @@
# Class Explanations: the Role of Content and Function Words

Anonymous Author, Anonymous Author, Anonymous Author, Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
\{email\}@domain
## Abstract

We address two understudied areas related to explainability for neural text models. First, class explanations: what features are descriptive across a class, rather than explaining single input instances? Second, the type of features that are used for providing explanations: does the explanation involve the statistical pattern of word usage or the presence of domain-specific content words? Here, we present a method to extract class explanations, together with strategies to differentiate between the two types of explanations, that is, domain-specific signals or statistical variations in the frequencies of common words. We demonstrate our method using a case study in which we analyse transcripts of political debates in the Swedish Riksdag.
## 1 Introduction

Recent developments in NLP are often the result of ever more complex model architectures and an increasing number of model parameters. Yet, if we want to rely on these models, we should be able to review the similarities and dissimilarities between the model and human judgement. Explainability frameworks can do this by highlighting what the model has learnt to base its decisions on. Are these coincidental statistical patterns or something that a human would use as an explanation? Madsen et al. (2022) argue that explanations should ideally be both functionally-grounded (true to the underlying machine learning model) and human-grounded (useful to a human).

In this article, we propose a new method for extracting class explanations from text classifiers. In addition, we show a new way to distinguish between two types of features that appear in those explanations, namely between content words and subtle statistical differences in the frequencies of function words. Our method aggregates explanations for individual data points (here provided by LIME (Ribeiro et al., 2016)), followed by a sorting stage that separates the different kinds of features.

Our work is in part motivated by use cases of machine learning for texts in the social sciences. In this field, explainability methods are relevant both as checks to compare against human expert knowledge and as a tool for bias detection. As a case study, we use our method to explain the decisions of a binary classifier trained to identify whether speeches in the Swedish Riksdag belong to either of the two main parties, the Moderates (M) or the Social Democrats (S).

We find that our method can separate class explainability features and that those data points whose explanations contain primarily domain-specific content words are more often classified correctly.
## 2 Literature Review

As a result of the extensive work on explainability methods, a complex typology of different approaches exists (see Danilevsky et al. (2020) or Madsen et al. (2022) for a survey). One important distinction is between global and local methods. On the one hand, global methods aim to explain some general behaviour of a model, such as class explanations, which summarise the model with respect to a certain class. On the other hand, local methods aim to explain why the model assigned a single data point to a particular class.

Of global and local methods, the latter receive the most attention (Nauta et al., 2022). Three popular methods are gradient-based approaches (Baehrens et al., 2010), Shapley values (Shapley, 1952), and LIME. Gradient-based approaches use the model's weights and take the gradient with respect to the input. As such, they measure the change in the outcome given some small change in the input. Yet, they are only an accurate reflection of the model if that model is linear (Li et al., 2016), which is not the case for most deep NLP architectures. While Shapley values have many theoretical guarantees that make them a faithful interpretation (they represent the true contributions of the features (Ethayarajh and Jurafsky, 2021)), their implementations (e.g. via attention flows for transformer-based architectures (Abnar and Zuidema, 2020)) tend to be computationally expensive, which is problematic in the current setting, where we focus on aggregating a substantial number of individual explanations. Finally, LIME has an advantage over gradient-based approaches in that it is model agnostic: LIME attempts to explain a trained classifier independent of its architecture (Ribeiro et al., 2016).
### 2.1 Class explanations

The area of global class explanations is so far less studied than that of local explanations. One approach to providing a global understanding of the model is to use behavioural or structural probes (Tenney et al., 2019; Hewitt and Manning, 2019; Wallace et al., 2019). Probing is a technique where a supervised model (a probe) is used to determine what is encoded in the internal representation of the studied model. This is done by training the probe to predict based on the frozen representations of the black-box model. If the probe performs well on the task, this indicates that the required information was well represented by the black-box model; if the probe is unable to achieve high accuracy, this is taken to signify that the studied patterns were not learned by the black-box model. This approach has some limitations, for example, the complexity of the probe. If the probe is too simple, it may not capture second-order effects; if it is too complex, it may learn the task internally and "discover" things that are in the probe rather than the model (Hewitt and Liang, 2019). More importantly, these methods tend to be applied to the discovery of simple syntactic structures, such as part-of-speech (POS) tagging or syntactic tree structures (Rogers et al., 2020), or to detect the presence of specific knowledge (Petroni et al., 2019). Other attempts in this area leverage local methods together with a strategy for aggregating and presenting the results to the user. An example of such an approach is SP-LIME (Ribeiro et al., 2016), which aggregates individual LIME explanations with a greedy search for data points (texts) that are explained by the most dissimilar sets of features, in order to represent the breadth of the class explanations. The results are presented as ranked text examples with their corresponding explanations, where the number of examples is defined by the user. Due to its focus on features that cover as many input instances as possible, this method tends to overemphasise stop words (see further discussion in Section 6).
### 2.2 Features of Explanations
|
| 100 |
+
|
| 101 |
+
To a human, not all features learnt by the machine 175 learning model are equally informative. Some signals may come from speech patterns, others from the topic that is discussed and the sentiment, yet others may indicate preferred catch-phrases and slogans. There is a distinction between explanations of the model (what a model bases its prediction on) and human explanation (what a human would base their decision on if faced with the same prediction task) (Miller, 2019). Since humans have background knowledge that is not accessible to the model and the model has the capacity to detect small statistical signals that are beyond human computational capabilities, the set of features that are selected by either may differ. This issue can be viewed in terms of the concepts presented in the position paper by Doshi-Velez and Kim (2017) and further discussed by Madsen et al. (2022), namely - human-grounded and functionally-grounded explainability. Functionally-grounded explainability is concerned with how well the explanation reflects the model, whereas human-grounded explainability is concerned with producing explanations that are useful to a human. This is also in line with work by Nauta et al. (2022), where the authors argue for the rigorous evaluation of an explainability method across twelve properties in three categories - content, presentation, and user. The content properties and in particular correctness (faithfulness w.r.t. the black box) are related to the functionally-grounded approach, whereas the user properties - context (how relevant the explanation is to the user), coherence (how accordant the explanation is with prior knowledge), and controllability (how interactive or controllable an explanation is) - relate to human-grounded explainability.
|
| 102 |
+
|
| 103 |
+
In our work, we use function and content words
|
| 104 |
+
|
| 105 |
+
as a proxy for functionally-grounded and human- 215 grounded explanations. The term function words is used in a broader sense here than the strict linguistic definition of prepositions, conjunctions etc. In the setting of parliamentary debates, for example, there is procedural language (e.g. "fru tallman" (madam speaker)) that can also act as function words in the domain. A model can learn to detect distributional differences of any word as long as it is correlated with the predicted class, but a human will be unlikely to relate and understand the cause of the distributional differences of stop-words. The difference in frequency of how often a group uses the word "also", for example, may not be very informative for a human, even if stop word distributions point to real speech patterns that dis-
|
| 106 |
+
|
| 107 |
+
232 tinguish between the speakers (Arun et al., 2009a) and have even been linked to the author's gender (Arun et al., 2009b). Human domain knowledge will most likely be captured through domain-specific, content words. Being able to confirm the (extent of the) model's grounding in content words can serve to validate it.
## 3 Method
Our algorithm for computing class explanations consists of four steps: post-hoc instance explanation extraction, aggregation, sorting, and a keyword-in-context search that extracts example texts. This framework is formalized in Algorithm 1. It is similar to SP-LIME, but rather than searching for data points that capture the most diversity of the important features, we propose to work directly with the feature importances and explore ways to summarize and sort these by relevance.
The implementation will be linked in the non-anonymous version.
### 3.1 Step 1: Instance explanation extraction
For a set of held-out data samples $N$, we apply the trained classifier $f$. In the instances where the classifier makes the correct prediction, we extract the list of features and their corresponding saliency with model $g$. This can also be flipped to focus on instances where the model makes incorrect predictions, to investigate which patterns or instances are hard to classify. A certainty threshold can also be used to explore only cases where the model is certain, or only borderline cases. Our method aims to be extendable to different model architectures; therefore we require a post-hoc, model-agnostic instance explanation function $g$.
Algorithm 1: Class explainability from instance explanations

    Require: Binary classifier f, data samples N
    Require: Instance explainability function g
    Require: Feature scoring function h
    W  ← {}    ▷ features and importance scores
    c1 ← {}    ▷ features explaining class 1
    c2 ← {}    ▷ features explaining class 2

    Step 1 - Instance explanation extraction
    for text, true_label ∈ N do
        if f(text) = true_label then
            W ← W ∪ {g(text, f)}
        end if
    end for

    Step 2 - Aggregation
    for feature, score ∈ W do
        if score < 0 then
            c1 ← c1 ∪ {feature}
        else
            c2 ← c2 ∪ {feature}
        end if
    end for

    Step 3 - Sorting
    for c ∈ {c1, c2} do
        return c sorted by h score
    end for

    Step 4 - Keywords in context
    for c ∈ {c1, c2} do
        for term ∈ top X terms in c do
            return all occurrences of term
                with n words before and after
        end for
    end for
For now, we have chosen LIME, but alternative methods can be used as well, as long as they are able to extract features and the feature contribution scores that explain an instance. This means we are currently constrained by LIME's limitations and only consider single tokens as features. Since LIME is a surrogate model, there is also some decoupling between the classification model and the explanations. For each correctly classified instance, we extract the top $k$ features (here set to 10). This can be reduced further to limit the number of features that are considered, or extended to include all tokens, in which case the task of limiting the explanation is completely relegated to the sorting step.
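As a minimal sketch, Step 1 is a loop over held-out samples that keeps explanations only for correct predictions. Here `f` and `g` are toy stand-ins for the trained classifier and for LIME; the token scoring below is invented for illustration, not the paper's implementation.

```python
def extract_instance_explanations(samples, f, g, k=10):
    """Step 1: collect top-k (feature, score) pairs for each
    correctly classified held-out sample."""
    W = []
    for text, true_label in samples:
        if f(text) != true_label:          # keep correct predictions only
            continue
        explanation = g(text, f)           # list of (token, signed score)
        top_k = sorted(explanation, key=lambda fs: abs(fs[1]), reverse=True)[:k]
        W.extend(top_k)
    return W

# Toy stand-ins: f predicts class 1 iff "skatt" (tax) appears;
# g gives that token a large score and all others a small negative one.
f = lambda text: 1 if "skatt" in text.split() else 0
g = lambda text, f: [(tok, 1.0 if tok == "skatt" else -0.1)
                     for tok in text.split()]

W = extract_instance_explanations(
    [("skatt och jobb", 1), ("mer vård", 0)], f, g, k=2)
```

Flipping the `!=` comparison gives the variant described above that inspects only the misclassified instances.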
### 3.2 Step 2: Aggregation
A feature can contribute either positively or negatively towards the prediction of the model. When working with a binary classifier, a feature contributing negatively towards predicting class 1 is a positively contributing feature for class 2. Therefore, the features collected from the previous step are aggregated into two sets, $c1$ and $c2$, one for each class, based on the sign of their feature score. Note that these two sets of features may overlap if the predictive signal lies in the different contexts in which those features appear.
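This sign-based aggregation can be sketched in a few lines (variable names are illustrative):

```python
def aggregate_by_sign(W):
    """Step 2: split explanation features into the two class sets
    based on the sign of their contribution score."""
    c1, c2 = set(), set()
    for feature, score in W:
        if score < 0:
            c1.add(feature)    # negative scores explain class 1
        else:
            c2.add(feature)    # non-negative scores explain class 2
    return c1, c2

c1, c2 = aggregate_by_sign([("skatt", 0.8), ("vård", -0.5), ("och", 0.1)])
```

Because the input may contain the same feature with scores of both signs across instances, a feature can indeed end up in both sets, as noted above.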
### 3.3 Step 3: Sorting
The resulting sets of features for each class need to be constrained to a feasible size to be interpretable by a human. We propose two approaches to the feature relevance score $h$, normalization and PCA, which prioritize and distinguish these terms along an axis from more domain-specific concepts to more generic stop words.
Normalization. Here, we use the sum of the LIME scores for each feature of the explanation, divided by the number of occurrences of that feature in the validation set. We calculate the feature relevance score $h$ of the ${j}^{\text{th}}$ feature as ${h}_{j} = \frac{1}{{m}_{j}}\mathop{\sum }\limits_{{i = 1}}^{N}{W}_{ij}$, where $N$ is the number of data points in the explained dataset, ${m}_{j}$ is the number of occurrences of feature $j$ in the explained set, and $W$ is the explanation matrix containing the local importance of the interpretable components for each instance. This gives higher scores to features identified as more important by LIME, but penalises common words if they do not often contribute to a class prediction. This is in line with the definition of stop words and should target the corpus-specific stop words. We also filter out words that appear in two or fewer documents, as these can be party specific but may not be useful for generalisation. This number can also be increased to filter out more predictive (according to LIME) words.
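A sketch of the normalisation score, assuming `W` is given as a list of per-instance (feature, LIME score) pairs and `occurrences` holds $m_j$, the occurrence count of each feature in the explained set (names are ours):

```python
def normalized_scores(W, occurrences):
    """h_j = (1 / m_j) * sum_i W_ij: total LIME importance of each
    feature divided by how often it occurs in the explained set."""
    totals = {}
    for feature, score in W:
        totals[feature] = totals.get(feature, 0.0) + score
    return {f: s / occurrences[f] for f, s in totals.items()}

# A rare but consistently important word keeps a high score, while a
# frequent word with the same total importance is penalised.
h = normalized_scores(
    [("skattehöjningar", 0.6), ("skattehöjningar", 0.2), ("som", 0.4)],
    {"skattehöjningar": 2, "som": 40},
)
```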
PCA. The second approach to sorting is to decouple it from the LIME score after the initial aggregation step and use PCA of word embeddings. We found that PCA applied to pre-trained word embeddings tends to separate domain-specific words from function words and more generic terms. A theoretical motivation for this analysis lies in the distributional differences between general text (used for pre-training word embeddings) and domain-specific text (in this case, political debate). We hypothesise that the general embedding model will see the domain-specific terms in sufficiently distinct contexts to embed them in a compact space, with a latent dimension separating them from more common and general terms. This relies on the studied data having a significant amount of domain-specific terminology that is rarer in general text. We expect this to be the case for many applications within the social sciences (e.g. politics), but it can have limitations in lower-level, syntactic classification tasks like POS tagging.
To calculate the sorting score, the terms from each set, $c1$ and $c2$, are embedded using a model${}^{4}$ trained on the Swedish CoNLL17 corpus. A PCA is run on each set of words, and the value along the first PCA dimension is used as the sorting score $h$. Similarly to the normalisation approach, words that appear in two or fewer documents are filtered out. This dimension seems to provide a good separation of domain-specific terms.
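The PCA sorting can be sketched with plain NumPy. The toy two-dimensional `embeddings` below stand in for the pre-trained Swedish vectors, and the sign-orientation step is our own convention, added because the sign of a principal component is arbitrary:

```python
import numpy as np

def pca_sort(terms, embeddings):
    """Step 3 (PCA variant): order terms by their coordinate on the
    first principal component of their embedding vectors."""
    X = np.array([embeddings[t] for t in terms], dtype=float)
    X = X - X.mean(axis=0)                      # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[0]                          # first PCA dimension
    if -scores.min() > scores.max():            # orient the arbitrary PC sign
        scores = -scores
    order = np.argsort(-scores)
    return [terms[i] for i in order]

# Toy embeddings in which domain terms and function words are separable
# along one latent direction, mimicking the behaviour described above.
embeddings = {"budgetpropositionen": [6.0, 1.0], "skattehöjningar": [4.0, 0.0],
              "som": [-4.0, 0.0], "en": [-5.0, -1.0]}
ranked = pca_sort(list(embeddings), embeddings)
```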
### 3.4 Step 4: Keywords in Context
To further increase human interpretability, we also provide context by extracting snippets of text around the top word features produced in Step 3. For each occurrence, we use a simple keyword-in-context search and extract the $n$ words before and after our feature word. This is clearly not feasible or interesting for very frequent words, which further motivates separating rarer, domain-specific content words from more common function words.
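Step 4 reduces to a simple whitespace-token search; a minimal sketch (real tokenisation and matching may be more careful):

```python
def kwic(documents, term, n=20):
    """Step 4: return every occurrence of `term` together with up to
    n words of context on each side."""
    snippets = []
    for doc in documents:
        words = doc.split()
        for i, w in enumerate(words):
            if w == term:
                snippets.append(" ".join(words[max(0, i - n):i + n + 1]))
    return snippets

snippets = kwic(["vi behöver en bra arbetsmarknadspolitik för alla"],
                "arbetsmarknadspolitik", n=2)
```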
## 4 Data
The dataset used for the case study consists of transcripts of debates in the Swedish Riksdag, sourced from Riksdagens öppna data - Anföranden${}^{2}$. We use a pre-processed version available from Språkbanken${}^{3}$ consisting of debates from 1993 to 2018. For our experiment, texts from the Social Democrat (S) and Moderate (M) parties have been extracted, resulting in 104,842 S and 62,160 M data points (one data point is one speech, which may be part of a longer debate). From these, 100 examples have been sampled for a small-scale human baseline check, where two annotators were asked to perform the classification task of determining the party label from the speech texts and were evaluated against the true label. Since these are debates, references to the opponent are a strong but trivial predictor of party. References to people and political parties have therefore been removed by targeting Swedish political party stems and words tagged as "People_along_political_spectrum" in Språkbanken's tags, based on Swedish FrameNet (Heppin and Gronostaj, 2012). Data points shorter than 50 words have been removed, as manual analysis shows these tend to be entirely procedural and do not carry political sentiment. This is in line with similar cleaning practices used for US congressional debates (Bayram et al., 2019). The data is undersampled to balance the classes and split into train (108,169), test (12,019), and validation (2,000) sets. The validation set is used for the explainability methods.
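The length filtering and class balancing described above can be sketched as follows. Function and parameter names are ours, and random undersampling is one simple choice of balancing strategy:

```python
import random

def filter_and_balance(points, min_words=50, seed=0):
    """Drop speeches shorter than min_words, then undersample the
    majority class so both party labels are equally represented."""
    points = [(t, l) for t, l in points if len(t.split()) >= min_words]
    by_label = {}
    for t, l in points:
        by_label.setdefault(l, []).append((t, l))
    n = min(len(v) for v in by_label.values())   # minority class size
    rng = random.Random(seed)
    balanced = []
    for v in by_label.values():
        balanced.extend(rng.sample(v, n))
    return balanced

speech = "ord " * 60                     # a 60-word dummy speech
data = [(speech, "S")] * 3 + [(speech, "M")] * 2 + [("kort text", "S")]
balanced = filter_and_balance(data)
```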
---
${}^{2}$ https://data.riksdagen.se/data/anforanden/
${}^{3}$ https://spraakbanken.gu.se/resurser/rd-anf-1993-2018
${}^{4}$ http://vectors.nlpl.eu/repository/20/69.zip
---
## 5 Experiments
To test our methodology, we apply it to a BERT classifier trained to predict the party label of a text (Devlin et al., 2019). The classifier is fine-tuned from a pre-trained model for Swedish data released by the National Library of Sweden/KBLab and available through the huggingface library. The model has a 50,325-word vocabulary and a maximum input length of 512 tokens; longer inputs are truncated. As a baseline for investigating class differences and the separability of the data, we use a logistic regression classifier, as this provides easy access to class explanations by simply looking at the top and bottom scoring internal weights of the model. N-gram spans from 1 to 3, as well as a combination of all of them, have been compared. The number of input features is 50,325, the same as for the pre-trained BERT model.
A small-scale human annotation check on 100 instances shows that the two annotators perform with 58 and 56 percent accuracy, respectively. A Cohen's kappa of 0.4 indicates this is a hard classification task.
In the interest of space, the sections below contain partial results. The full results are available in an online appendix${}^{5}$.
### 5.1 Baseline
Table 1 summarises the accuracy and F1 scores for the logistic regression classifier. We observe that the best result is achieved with 1-grams, with the inclusion of 2- and 3-grams adding no performance gains. It seems the main part of the distinguishing signal can be picked up by specific words rather than phrases.
<table><tr><td>n-gram span</td><td>#feat</td><td>acc</td><td>F1</td></tr><tr><td>1,1</td><td>50,325</td><td>76.94</td><td>76.80</td></tr><tr><td>2,2</td><td>50,325</td><td>73.19</td><td>73.05</td></tr><tr><td>3,3</td><td>50,325</td><td>69.39</td><td>69.15</td></tr><tr><td>1,3</td><td>150,975</td><td>76.93</td><td>76.80</td></tr></table>
Table 1: Logistic regression classifier performance.
From the internal model weights, we can see that both domain-specific words - "sjuka" (sick), "arbetslösa" (unemployed), "arbetslinjen" (the employment line, a Moderate catchphrase) - and function words - "det" (the), "också" (also), "synnerhet" (in particular) - can be predictive of the party label. This is in agreement with our assumption that a model can depend on both statistical differences in stop words and human concepts as the basis of its prediction, and in doing so outperform the human annotators.
### 5.2 BERT
The BERT model${}^{6}$ has an accuracy of 78.44 and an F1 score of 76.66 on the test set, and an accuracy of 79.95 and an F1 score of 78.27 on the validation set, which is only a slight improvement over the logistic regression baseline.
Applying LIME to all validation samples and aggregating the top 10 features for each data point results in a list of 2,043 Moderate and 2,085 Social Democrat terms. Of these, 1,456 Moderate and 1,334 Social Democrat terms appear in more than two documents and are thus candidates to be included as part of the class explanations (this limit can be adjusted by the user).
---
${}^{5}$ https://github.com/anonymous-supplementary-materials/NoDaLiDa2023_Appendix
${}^{6}$ With hyperparameters: lr = 5e-6, batch size = 48, steps = 6000
https://huggingface.co/KB/bert-base-swedish-cased
---
<table><tr><td colspan="2">PCA ordering</td></tr><tr><td>rank</td><td>term</td></tr><tr><td>1</td><td>utgiftsområde (expenditure area)</td></tr><tr><td>2</td><td>budgetpropositionen (the budget bill)</td></tr><tr><td>3</td><td>jobbskatteavdrag (employment tax credit)</td></tr><tr><td>4</td><td>arbetslöshetsförsäkringen (unemployment insurance)</td></tr><tr><td>5</td><td>skattehöjningar (tax increases)</td></tr><tr><td/><td>...</td></tr><tr><td>1454</td><td>högkvalitativa (high quality)</td></tr><tr><td>1455</td><td>vackra (beautiful)</td></tr><tr><td>1456</td><td>klassiska (classic)</td></tr><tr><td colspan="2">Normalised LIME score</td></tr><tr><td>rank</td><td>term</td></tr><tr><td>1</td><td>vänsterregering (left-wing government)</td></tr><tr><td>2</td><td>fattigdomsbekämpning (poverty alleviation)</td></tr><tr><td>3</td><td>bidragsberoende (benefits dependency)</td></tr><tr><td>4</td><td>fridens (of peace)</td></tr><tr><td>5</td><td>arbetsföra (able to work)</td></tr><tr><td/><td>...</td></tr><tr><td>1454</td><td>som (as)</td></tr><tr><td>1455</td><td>ett (one)</td></tr><tr><td>1456</td><td>en (one)</td></tr></table>
Table 2: Results for the Moderates.
### 5.3 Validation
Tables 2-3 show the results of both LIME and PCA orderings for both M and S. In both cases, the methods separate informative terms from generic ones. This is especially the case with the LIME scores, where the lowest-scoring words are all stop words. As for the highest-scoring words, we find that they are all related to taxes and employment. This is understandable, as these topics make up the main political left/right dimension in Sweden (Franzmann and Kaiser, 2006; Jolly et al., 2022; Ezrow et al., 2011). In addition, we can identify several references to (groups of) parties and ministers, which we would expect in debates.
While these findings are promising on their own, to be useful for social scientists we need to do more to ensure that our results are valid. In other words, we want to ensure that our method measures what we intend to measure (Carmines and Zeller, 1979) - in our case, whether a speech is representative of S or M.
<table><tr><td colspan="2">PCA ordering</td></tr><tr><td>rank</td><td>term</td></tr><tr><td>1</td><td>budgetpropositionen (the budget bill)</td></tr><tr><td>2</td><td>arbetsmarknadspolitik (labor market policy)</td></tr><tr><td>3</td><td>samlingspartiet [Refers to the Moderates]</td></tr><tr><td>4</td><td>ungdomsarbetslösheten (youth unemployment)</td></tr><tr><td>5</td><td>skattesänkningar (tax cuts)</td></tr><tr><td/><td>...</td></tr><tr><td>1332</td><td>tillsammans (together)</td></tr><tr><td>1333</td><td>u (u)</td></tr><tr><td>1334</td><td>dam (lady)</td></tr></table>

<table><tr><td colspan="2">Normalised LIME score</td></tr><tr><td>rank</td><td>term</td></tr><tr><td>1</td><td>överläggningen (the deliberation)</td></tr><tr><td>2</td><td>moderatledda (moderate-led)</td></tr><tr><td>3</td><td>kd (abbrev. for Christian Democrat party)</td></tr><tr><td>4</td><td>skattesänkningarna (the tax cuts)</td></tr><tr><td>5</td><td>borgarna (the bourgeois [parties to the right])</td></tr><tr><td/><td>...</td></tr><tr><td>1332</td><td>har (have)</td></tr><tr><td>1333</td><td>av (of)</td></tr><tr><td>1334</td><td>för (for)</td></tr></table>

Table 3: Results for the Social Democrats.

Looking at how appropriate the terms are, as we did above, is a first step. This is also known as face validity, as we look at whether our method "appears to measure" what we want it to measure (Anastasi, 1976, pp. 139-140). Yet face validity depends on many implicit decisions that vary between contexts and researchers. As such, we should look further if we wish to provide a more satisfactory validation.

One good candidate for this is construct validity (Shadish et al., 2002; Carmines and Zeller, 1979). This refers to the degree to which we can use our results to say something about what we aim to measure. One way to assess this here is to look at the wider context in which the terms the algorithm uses appear. For example, if a term used by the algorithm to assign a speech to S occurs in a context that defines S, this strengthens our case for construct validity. To see this, we can use keyword-in-context (KWIC), which looks at the $n$ (here we choose 20) words before and after the term that interests us. In Table 4 we show this for one of the terms from the PCA analysis for S - arbetsmarknadspolitik (labour market policy). Here, we see that the context of the word indeed refers to policies close to S. In both cases, the term is used to call for more and new measures to regulate the labour market - something indicative of S. Similar examples for the words in Tables 2-3 are in the online appendix. As we have implemented KWIC in our algorithm, scholars can easily assess whether the same holds for any of the other terms and in this way better assess the validity.
"... enda åtgärd lösa detta, det behövs många åtgärder. Det handlar om ett gott företagarklimat, om en ny arbetsmarknadspolitik, om ytterligare utbildningssatsningar, om att bygga om - osv. med de förslag till åtgärder som vi ..."
"... single measure solve this, many measures are needed. It's about a good business climate, about a new labour market policy, about further training efforts, about rebuilding - etc. with the proposed measures that we ..."
"... i arbete det finns individer som kommer att behöva särskilt stöd, och då behöver vi ha en bra arbetsmarknadspolitik. Men det är förstås inget egenvärde i att ungdomar som kan få jobb ändå ska vara i en ..."
"... in work there are individuals who will need separate support, and then we need to have a good labour market policy. But of course there is no intrinsic value in young people who can get a job still being in a..."
Table 4: Keywords-in-context for the class-explanation feature labour market policy for the Social Democrats.
### 5.4 Explanations and Predictive Accuracy
Returning to individual instance explanations, we also wanted to investigate whether the kind of words occurring in an explanation (domain-specific terms or broad distributional signals) has any relationship with the certainty of the model on those data points. We found domain-specific words (here related to politics) along the positive PCA spectrum, while more common, general words had embeddings placing them towards the negative end. We find that data points where the explanation words are predominantly positioned within the positive PCA spectrum (i.e. the sum of the PCA coordinates of the top-ten explanation features is positive) are cases where the model is more accurate. Compared to data points where the explanations lie in the negative PCA space, there is an accuracy gain of roughly 10 percentage points (Table 5). Interestingly, this suggests that explanations containing domain-specific, rarer words are correlated with the model's correctness, although the number of data points with domain-specific explanations is quite small.
<table><tr><td/><td>Correct</td><td>Incorrect</td><td>Acc</td></tr><tr><td>Pos PCA sum</td><td>186</td><td>25</td><td>88.15</td></tr><tr><td>Neg PCA sum</td><td>1413</td><td>376</td><td>78.98</td></tr></table>
Table 5: Classifier performance on the validation set split based on the sum of PCA coordinates of the explanation provided by LIME.
## 6 Comparison to SP-LIME
Our method is comparable to SP-LIME, which also aggregates individual LIME explanations. SP-LIME consists of three similar steps: post-hoc instance explanation extraction, sorting, and example extraction. In contrast to our proposed scoring functions, SP-LIME calculates the score for feature $j$ as ${I}_{j} = \sqrt{\mathop{\sum }\limits_{{i = 1}}^{N}{W}_{ij}}$, where $N$ is the number of data points in the explained dataset and $W$ is the explanation matrix containing the local importance of the features. Based on this scoring, SP-LIME performs a greedy search to extract the top-scoring data examples that also have the greatest coverage of distinct features. The model explanation therefore takes the form of a set of text examples with their corresponding instance explanations, where the number of examples provided is defined by the user. Since the method performs a greedy search, the results are ordered by how well they explain the model and how many unique features they cover.
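As a sketch, the SP-LIME global score as given above reduces to one NumPy call; the small `W` matrix below is invented for illustration:

```python
import numpy as np

# Rows are explained instances, columns are features; entries are the
# local LIME importances W_ij (toy values).
W = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.0, 0.6],
              [0.3, 0.2, 0.1]])

# Global importance per feature: I_j = sqrt(sum_i W_ij)
I = np.sqrt(W.sum(axis=0))
```

Note that, unlike the normalisation score in Section 3, this score is not divided by the feature's frequency, which is why frequent stop words can dominate it.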
We apply SP-LIME to the BERT classifier and extract the top 20 text examples that the explainability approach considers most representative. These contain 9 S examples and 11 M examples. A selected set of instance explanations can be seen in Table 6, and the full list is available in our online appendix. We can see the overemphasis of stop words, especially in the top examples. Only a couple of the surfaced terms carry political significance, and even those lack context and have arguable generalisability.

Rank 1 SP-LIME example (true label S): är (is), det (the), som (as), den (the), vi (we), Natomedlemskap (NATO membership), att (to), du (you), samlingsregeringen (the coalition government), **Vi** (We)

Rank 2 SP-LIME example (true label M): frågorna (the questions), protektionistiska (protectionist), önskar (wish), Det (The), och (and), Herr (Mr), oerhört (incredibly), handelsminister (Minister of Trade), tackar (thanks), de (the)

...

Rank 12 SP-LIME example (true label M): medelinkomsttagare (middle income earner), avregleringar (deregulations), vänster (left), tvivelaktiga (questionable), skattesänkningar (tax cuts), Då (Then), och (and), Man (One/third person singular), bostadsmarknaden (the housing market), stöd (support)

...

Rank 16 SP-LIME example (true label S): borgarna (the bourgeois), oss (us), långtidsarbetslösa (long-term unemployed), klyftorna (the cleavages), det (the), sjuka (sick), rödgröna (red-green)${}^{7}$, Vi (We), Låt (Let), är (is)

Table 6: Explanations provided by SP-LIME. Bold features indicate words contributing towards an M classification, while italic features do the same for S. Full results are in the online appendix.

Some of the examples provided by SP-LIME (see Rank 12 and Rank 16 in Table 6) are instances where human intuition is easier to align with. However, SP-LIME in general does not provide a way to distinguish between the two types of contributing features that the current work targets. Finally, SP-LIME also differs from our method in the way it presents texts containing explanatory features: SP-LIME tries to find texts that contain as many features as possible in one and the same text, while we choose to present many alternative contexts in which explaining feature words appear, motivated by social science use cases.
## 7 Conclusion and Discussion
We have developed a new algorithm for extracting class explanations which takes the distinction between function and content words into account. It thereby provides an alternative to prior methods like SP-LIME, which mix explanations based on e.g. stop-word frequency with explanations based on the presence of certain domain-specific terms. Our motivation comes from the idea of human-grounded explainability: a useful explanation for a human will focus on content rather than stop words, while still being true to the model. In our case study, we demonstrated this on speeches from the Swedish parliament, with the task of explaining a binary classifier associating speeches with either of the two main parties. This is a difficult task: our human annotation experiment showed humans performing only slightly better than random, potentially because they primarily looked for clues about policy. The machine learning models performed better, as they likely also managed to identify statistical speech patterns of speakers, which we saw in explanations where e.g. stop words inevitably appear. Motivated by the needs of social scientists, our algorithm can not only identify these but also separate them from explanations containing domain-specific words that hint at policy. Additionally, we find indications that domain-specific explanations correlate with model performance. Patterns related to policy in our experiment may be more robust than learned speech patterns of stop words, which risk being influenced by single frequent individuals in the dataset rather than capturing patterns common to a political party.
Future work will focus on systematic and extensive testing of the proposed methodology in order to evaluate it along the twelve properties proposed by Nauta et al. (2022). The focus should be on measuring the faithfulness to the underlying black-box model (correctness), as well as on a larger-scale domain expert evaluation to measure how relevant and valid the explanations are (the context and coherence properties). The generalisability will also be tested by studying other domains and classification tasks.
## References
|
| 548 |
+
|
| 549 |
+
Samira Abnar and Willem Zuidema. 2020. Quantifying Attention Flow in Transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4190-4197. ACL.

Anne Anastasi. 1976. Psychological Testing, 4th edition. Macmillan, New York, NY.

R. Arun, V. Suresh, and C. E. Veni Madhavan. 2009a. Stopword Graphs and Authorship Attribution in Text Corpora. In 2009 IEEE International Conference on Semantic Computing, pages 192-196.

Rajkumar Arun, Ravi Saradha, V. Suresh, M. Murty, and C. Madhavan. 2009b. Stopwords and Stylometry: A Latent Dirichlet Allocation Approach. In NIPS Workshop on Applications for Topic Models.

David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. 2010. How to Explain Individual Classification Decisions. The Journal of Machine Learning Research, 11:1803-1831.

Ulya Bayram, John Pestian, Daniel Santel, and Ali A. Minai. 2019. What's in a Word? Detecting Partisan Affiliation from Word Use in Congressional Speeches. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.

Edward Carmines and Richard Zeller. 1979. Reliability and Validity Assessment. Sage, Thousand Oaks, CA.

Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A Survey of the State of Explainable AI for Natural Language Processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 447-459. ACL.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, MN. ACL.

Finale Doshi-Velez and Been Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning.

Kawin Ethayarajh and Dan Jurafsky. 2021. Attention Flows are Shapley Value Explanations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 49-54. ACL.

Lawrence Ezrow, Catherine de Vries, Marco Steenbergen, and Erica Edwards. 2011. Mean voter representation and partisan constituency representation: Do parties respond to the mean voter position or to their supporters? Party Politics, 17(3):275-301.

Simon Franzmann and André Kaiser. 2006. Locating Political Parties in Policy Space: A Reanalysis of Party Manifesto Data. Party Politics, 12(2):163-188.

Karin Friberg Heppin and Maria Toporowska Gronostaj. 2012. The Rocky Road towards a Swedish FrameNet - Creating SweFN. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 256-261. European Language Resources Association (ELRA).

John Hewitt and Percy Liang. 2019. Designing and Interpreting Probes with Control Tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743. ACL.

John Hewitt and Christopher D. Manning. 2019. A Structural Probe for Finding Syntax in Word Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138. ACL.

Seth Jolly, Ryan Bakker, Liesbet Hooghe, Gary Marks, Jonathan Polk, Jan Rovny, Marco Steenbergen, and Milada Anna Vachudova. 2022. Chapel Hill Expert Survey trend file, 1999-2019. Electoral Studies, 75:102420.

Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and Understanding Neural Models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681-691. ACL.

Andreas Madsen, Siva Reddy, and Sarath Chandar. 2022. Post-Hoc Interpretability for Neural NLP: A Survey. ACM Computing Surveys, 55(8):1-42.

Tim Miller. 2019. Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267:1-38.

Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. 2022. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. CoRR, abs/2201.08164.

Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language Models as Knowledge Bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473. ACL.

Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 97-101. ACL.

Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A Primer in BERTology: What We Know About How BERT Works. Transactions of the Association for Computational Linguistics, 8:842-866.

William R. Shadish, Thomas D. Cook, and Donald T. Campbell. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin, Boston, MA.

Lloyd S. Shapley. 1952. A Value for N-Person Games. RAND Corporation, Santa Monica, CA.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. CoRR, abs/1905.06316.

Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP Models Know Numbers? Probing Numeracy in Embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307-5315. ACL.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,682 @@
§ CLASS EXPLANATIONS: THE ROLE OF CONTENT AND FUNCTION WORDS

Anonymous Author, Anonymous Author, Anonymous Author, Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
{email}@domain
§ ABSTRACT

We address two understudied areas related to explainability for neural text models. First, class explanations: what features are descriptive across a class, rather than explaining single input instances? Second, the type of features that are used for providing explanations: does the explanation involve the statistical pattern of word usage or the presence of domain-specific content words? Here, we present a method to extract both class explanations and strategies to differentiate between two types of explanations - domain-specific signals or statistical variations in the frequencies of common words. We demonstrate our method using a case study in which we analyse transcripts of political debates in the Swedish Riksdag.
§ 1 INTRODUCTION

Recent developments in NLP are often the result of ever more complex model architectures and an increasing number of model parameters. Yet, if we want to rely on these models, we should be able to review the similarities and dissimilarities between the model and human judgement. Explainability frameworks can do this by highlighting what the model has learnt to base its decisions on. Are these coincidental statistical patterns or something that a human would use as an explanation? Madsen et al. (2022) argue that explanations should ideally be both functionally-grounded (true to the underlying machine learning model) and human-grounded (useful to a human).
In this article, we propose a new method for extracting class explanations from text classifiers. In addition, we show a new way to distinguish between two types of features that appear in those explanations, that is, between content words and subtle statistical differences in the frequencies of function words. Our method aggregates explanations for individual data points (here provided by LIME (Ribeiro et al., 2016)), followed by a sorting stage that separates the different kinds of features.

Our work is in part motivated by use cases of machine learning for texts in the social sciences. In this field, explainability methods are relevant both as checks to compare against human expert knowledge and as a tool for bias detection. As a case study, we use our method to explain the decisions of a binary classifier trained to identify whether speeches in the Swedish Riksdag belong to either of the two main parties, the Moderates (M) or the Social Democrats (S).

We find that our method can separate class explainability features, and that those data points whose explanations contain primarily domain-specific content words are more often classified correctly.
§ 2 LITERATURE REVIEW

As a result of the extensive work on explainability methods, a complex typology of different approaches exists (see Danilevsky et al. (2020) or Madsen et al. (2022) for a survey). One important distinction is between global and local methods. On the one hand, global methods aim to explain some general behaviour of a model, for example through class explanations, which summarise the model with respect to a certain class. On the other hand, local methods aim to explain why the model assigned a single data point to a particular class.

Of global and local methods, the latter receive the most attention (Nauta et al., 2022). Three popular methods are gradient-based approaches (Baehrens et al., 2010), Shapley values (Shapley, 1952), and LIME. Gradient-based approaches use the model's weights and take the gradient with regard to the input. As such, they measure the change in the outcome given some small change in the input. Yet, they are only an accurate reflection of the model if that model is linear (Li et al., 2016), which is not the case for most deep NLP architectures. While Shapley values have many theoretical guarantees that make them a faithful interpretation (they represent the true contributions of the features (Ethayarajh and Jurafsky, 2021)), their implementations (e.g. via attention flows for transformer-based architectures (Abnar and Zuidema, 2020)) tend to be computationally expensive, which is problematic in the current setting, where we aggregate a substantial number of individual explanations. Finally, LIME has an advantage over gradient-based approaches in that it is model agnostic: LIME attempts to explain a trained classifier independent of its architecture (Ribeiro et al., 2016).
§ 2.1 CLASS EXPLANATIONS

The area of global class explanations is so far less studied than that of local explanations. One approach to providing a global understanding of the model is to use behavioural or structural probes (Tenney et al., 2019; Hewitt and Manning, 2019; Wallace et al., 2019). Probing is a technique where a supervised model (a probe) is used to determine what is encoded in the internal representation of the studied model. This is done by training the probe to predict based on the frozen representations of the black-box model. If the probe performs well on the task, that indicates the required information is well represented by the black-box model; if the probe is unable to achieve high accuracy, that is taken to signify that the studied patterns are not learned by the black-box model. This has some limitations - for example, the complexity of the probe. If the probe is too simple, it may not capture second-order effects; if it is too complex, it may learn the task internally and "discover" things that are in the probe rather than the model (Hewitt and Liang, 2019). More importantly, these methods tend to be applied to the discovery of simple syntactic structures such as part-of-speech (POS) tags and syntactic tree structures (Rogers et al., 2020), or to detect the presence of specific knowledge (Petroni et al., 2019). Other attempts in this area leverage local methods together with a strategy for aggregating and presenting the results to the user. An example of such an approach is SP-LIME (Ribeiro et al., 2016), which aggregates individual LIME explanations with a greedy search for data points (texts) that are explained by the most dissimilar sets of features, in order to represent the breadth of the class explanations. The results are presented as ranked text examples with their corresponding explanations, where the number of examples is defined by the user. Due to its focus on features that cover as many input instances as possible, this method tends to overemphasise stop words (see further discussion in Section 6).
§ 2.2 FEATURES OF EXPLANATIONS

To a human, not all features learnt by the machine learning model are equally informative. Some signals may come from speech patterns, others from the topic that is discussed and the sentiment, yet others may indicate preferred catch-phrases and slogans. There is a distinction between explanations of the model (what a model bases its prediction on) and human explanation (what a human would base their decision on if faced with the same prediction task) (Miller, 2019). Since humans have background knowledge that is not accessible to the model, and the model has the capacity to detect small statistical signals that are beyond human computational capabilities, the sets of features selected by either may differ. This issue can be viewed in terms of the concepts presented in the position paper by Doshi-Velez and Kim (2017) and further discussed by Madsen et al. (2022), namely human-grounded and functionally-grounded explainability. Functionally-grounded explainability is concerned with how well the explanation reflects the model, whereas human-grounded explainability is concerned with producing explanations that are useful to a human. This is also in line with work by Nauta et al. (2022), where the authors argue for the rigorous evaluation of an explainability method across twelve properties in three categories - content, presentation, and user. The content properties, and in particular correctness (faithfulness w.r.t. the black box), are related to the functionally-grounded approach, whereas the user properties - context (how relevant the explanation is to the user), coherence (how accordant the explanation is with prior knowledge), and controllability (how interactive or controllable an explanation is) - relate to human-grounded explainability.

In our work, we use function and content words as a proxy for functionally-grounded and human-grounded explanations. The term function words is used in a broader sense here than the strict linguistic definition of prepositions, conjunctions, etc. In the setting of parliamentary debates, for example, there is procedural language (e.g. "fru talman" (madam speaker)) that can also act as function words in the domain. A model can learn to detect distributional differences of any word as long as it is correlated with the predicted class, but a human is unlikely to relate to and understand the cause of distributional differences of stop words. The difference in how often a group uses the word "also", for example, may not be very informative for a human, even if stop-word distributions point to real speech patterns that distinguish between speakers (Arun et al., 2009a) and have even been linked to the author's gender (Arun et al., 2009b). Human domain knowledge will most likely be captured through domain-specific content words. Being able to confirm the (extent of the) model's grounding in content words can serve to validate it.
§ 3 METHOD

Our algorithm for computing class explanations consists of four steps: post-hoc instance explanation extraction, aggregation, sorting, and a keyword-in-context search that extracts example texts. This framework is formalised in Algorithm 1. It is similar to SP-LIME, but rather than searching for data points that capture the most diversity among the important features, we propose to work directly with the feature importances and explore ways to summarise and sort these by relevance.

The implementation will be linked in the non-anonymous version.
§ 3.1 STEP 1: INSTANCE EXPLANATION EXTRACTION

For a set of held-out data samples $N$, we apply the trained classifier $f$. In the instances where the classifier makes the correct prediction, we extract the list of features and their corresponding saliency with model $g$. This can also be flipped to focus on instances where the model makes incorrect predictions, to investigate which patterns or instances are hard to classify. A certainty threshold can also be used to explore only cases where the model is certain, or only borderline cases. Our method aims to be extendable to different model architectures, and we therefore require a post-hoc, model-agnostic instance explanation function $g$.
Algorithm 1 Class explainability from instance explanations

Require: Binary classifier $f$, data samples $N$
Require: Instance explainability function $g$
Require: Feature scoring function $h$

$W \leftarrow \{\}$ $\vartriangleright$ features and importance scores
$c1 \leftarrow \{\}$ $\vartriangleright$ features explaining class 1
$c2 \leftarrow \{\}$ $\vartriangleright$ features explaining class 2

Step 1 - Instance explanation extraction
for text, true_label $\in N$ do
  if $f(\text{text}) =$ true_label then
    $W \leftarrow W \cup \{g(\text{text}, f)\}$
  end if
end for

Step 2 - Aggregation
for feature, score $\in W$ do
  if score $< 0$ then
    $c1 \leftarrow c1 \cup \{\text{feature}\}$
  else
    $c2 \leftarrow c2 \cup \{\text{feature}\}$
  end if
end for

Step 3 - Sorting
for $c \in \{c1, c2\}$ do
  return $c$ sorted by $h$ score
end for

Step 4 - Keywords in context
for $c \in \{c1, c2\}$ do
  for term $\in$ top $X$ terms in $c$ do
    return all occurrences of term with $n$ words before and after
  end for
end for
For now, we have chosen LIME, but alternative methods can be used as well, as long as they are able to extract the features and the feature contribution scores that explain an instance. This means we are currently constrained by LIME's limitations and only consider single tokens as features. Since LIME is a surrogate model, there is also some decoupling between the classification model and the explanations. For each correctly classified instance, we extract the top $k$ features (here set to 10). This can be reduced further in order to limit the number of features that are considered, or extended to include all tokens, in which case the task of limiting the explanation is relegated entirely to the sorting step.
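Step 1 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the classifier `f` and the explainer `g` below are made-up stand-ins for the trained model and LIME, and the vocabulary weights are invented.

```python
# Sketch of Step 1: collect the top-k (feature, score) pairs for every
# correctly classified instance, as in Algorithm 1. `f` and `g` are toy
# stand-ins for the real classifier and the LIME explainer.

def extract_instance_explanations(samples, f, g, k=10):
    W = []
    for text, true_label in samples:
        if f(text) == true_label:
            # keep the k features with the largest absolute contribution
            pairs = sorted(g(text, f), key=lambda p: abs(p[1]), reverse=True)
            W.extend(pairs[:k])
    return W

# Toy stand-ins: f "predicts" class 1 iff the token "skatt" occurs,
# g scores each token with a fixed signed weight.
def f(text):
    return 1 if "skatt" in text.split() else 2

weights = {"skatt": -0.9, "och": 0.05, "välfärd": 0.7}

def g(text, f):
    return [(w, weights.get(w, 0.0)) for w in set(text.split())]

samples = [("skatt och arbete", 1), ("välfärd och skola", 2), ("skatt nu", 2)]
W = extract_instance_explanations(samples, f, g, k=2)
```

Note that the third sample is misclassified by the toy `f` and is therefore skipped, mirroring the paper's choice to explain only correct predictions.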
§ 3.2 STEP 2: AGGREGATION

A feature can contribute either positively or negatively towards the prediction of the model. When working with a binary classifier, a feature that contributes negatively towards predicting class 1 is a positively contributing feature for class 2. Therefore, the features collected in the previous step are aggregated into two sets - $c1$, $c2$ - one for each class, based on the sign of their feature score. Note that these two sets of features may overlap if the predictive signal is indicative of the different contexts in which those features appear.
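A minimal sketch of this sign-based split, with made-up scores; note that a feature such as "och" can land in both sets when its sign varies across instances:

```python
# Sketch of Step 2: split signed (feature, score) pairs into per-class
# sets c1/c2 by the sign of the score, as in Algorithm 1.
def aggregate(W):
    c1, c2 = set(), set()
    for feature, score in W:
        (c1 if score < 0 else c2).add(feature)
    return c1, c2

c1, c2 = aggregate([("skatt", -0.9), ("välfärd", 0.7),
                    ("och", -0.1), ("och", 0.2)])
```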
§ 3.3 STEP 3: SORTING

The resulting sets of features for each class need to be constrained to a feasible size to be interpretable by a human. We propose two approaches to developing a feature relevance score $h$ that prioritises the terms and distinguishes them along an axis from more domain-specific concepts to more generic stop words - normalisation and PCA.
Normalisation. Here, we use the sum of the LIME scores for each feature of the explanation, divided by the number of occurrences of that feature in the validation set. We calculate the feature relevance score $h$ of the ${j}^{\text{th}}$ feature as ${h}_{j} = \frac{1}{{m}_{j}}\mathop{\sum }\limits_{{i = 1}}^{N}{W}_{ij}$, where $N$ is the number of data points in the explained dataset, ${m}_{j}$ is the number of occurrences of feature $j$ in the explained set, and $W$ is the explanation matrix containing the local importance of the interpretable components for each instance. This gives higher scores to features identified as more important by LIME, but penalises common words if they do not often contribute to a class prediction. This is in line with the definition of stop words and should target the corpus-specific stop words. We also filter out words that appear in two or fewer documents, as these can be party specific but may not be useful for generalisation. This number can also be increased to filter out more of the predictive (according to LIME) words.
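The normalised score can be sketched as below. Two simplifying assumptions: the explanation matrix $W$ is stored sparsely as per-instance `{feature: score}` dicts, and the occurrence count $m_j$ is approximated by the feature's document frequency in the explained set; the example scores are invented.

```python
# Sketch of h_j = (1/m_j) * sum_i W_ij with a minimum document-frequency
# filter (features in fewer than min_df documents are dropped).
from collections import defaultdict

def normalised_scores(instance_explanations, min_df=3):
    total = defaultdict(float)   # sum of LIME scores per feature
    df = defaultdict(int)        # documents the feature contributes to
    for expl in instance_explanations:
        for feature, score in expl.items():
            total[feature] += score
            df[feature] += 1
    return {f: total[f] / df[f] for f in total if df[f] >= min_df}

expls = [
    {"välfärd": 0.9, "och": 0.1},
    {"välfärd": 0.8, "och": 0.1},
    {"välfärd": 0.7, "och": 0.1},
    {"sjukvård": 0.9},
]
h = normalised_scores(expls, min_df=3)
```

Here "sjukvård" is dropped by the document-frequency filter, while the consistently low scores of "och" keep it ranked below "välfärd".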
PCA. The second approach to sorting is to decouple it from the LIME score after the initial aggregation step and to use PCA of word embeddings. We found that PCA applied to pre-trained word embeddings tends to separate domain-specific words from function words and other generic terms. A theoretical motivation for this analysis lies in the distributional differences between general text (used for pre-training word embeddings) and domain-specific text (in this case, political debate). We hypothesise that the general embedding model will see the domain-specific terms in sufficiently distinct contexts to embed them in a compact space, with a latent dimension separating them from more common and general terms. This relies on the studied data having a significant amount of domain-specific terminology that is rarer in general text. We expect this to be the case for many applications within the social sciences (e.g. politics), but it can have limitations in lower-level, syntactic classification tasks like POS tagging.

To calculate the sorting score, the terms from each set $c1$ and $c2$ are embedded using a model${}^{1}$ trained on the Swedish CoNLL17 corpus. A PCA is run on each set of words - $c1$, $c2$ - and the value along the first PCA dimension is used as the sorting score $h$. Similarly to the normalisation approach, words that appear in two or fewer documents are filtered out. This dimension seems to provide a good distinction of domain-specific terms.
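The PCA-based score can be sketched as follows. The word vectors below are made up for illustration (the paper uses embeddings trained on the Swedish CoNLL17 corpus), and PCA is implemented directly via SVD on the centred embedding matrix; note that the sign of a principal component is arbitrary, so only the separation into the two ends of the axis is meaningful.

```python
# Sketch of the PCA sorting score: each word's coordinate on the first
# principal component of its class's embedding matrix is used as h.
import numpy as np

def pca_sort(word_vectors):
    words = list(word_vectors)
    X = np.array([word_vectors[w] for w in words], dtype=float)
    Xc = X - X.mean(axis=0)                   # centre before PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    h = Xc @ Vt[0]                            # first-component scores
    return sorted(zip(words, h), key=lambda p: p[1])

vecs = {
    "och": [0.9, 0.1], "att": [0.8, 0.2],                      # generic terms
    "sjukvård": [-0.7, 0.9], "skattesänkning": [-0.8, 0.8],    # domain terms
}
ranking = pca_sort(vecs)
```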
§ 3.4 STEP 4: KEYWORDS IN CONTEXT

To further increase human interpretability, we also provide context by extracting snippets of text around the top word features produced in Step 3. For each occurrence, we use a simple keyword-in-context search and extract $n$ words before and after the feature word. This is clearly not feasible or interesting for very frequent words, which further motivates separating rarer, domain-specific content words from more common function words.
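A minimal keyword-in-context sketch over whitespace tokens; the example sentences are invented:

```python
# Sketch of Step 4: return every occurrence of a term with n words of
# context on each side.
def kwic(texts, term, n=2):
    snippets = []
    for text in texts:
        words = text.split()
        for i, w in enumerate(words):
            if w == term:
                snippets.append(" ".join(words[max(0, i - n): i + n + 1]))
    return snippets

texts = ["vi vill sänka skatten för alla", "skatten är för hög i dag"]
snips = kwic(texts, "skatten", n=2)
```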
§ 4 DATA
|
| 232 |
+
|
| 233 |
+
The dataset used for the case-study consists of transcripts of debates in the Swedish Riksdag, sourced from Riksdagens öppna data - Anföranden ${}^{2}$ . We use a pre-processed version available from Språkbanken ${}^{3}$ consisting of debates from 1993 to 2018. For our experiment, texts from the Social Democrat (S) and Moderate (M) parties
|
| 234 |
+
|
| 235 |
+
431 have been extracted, resulting in ${104},{842}\mathrm{\;S}$ and ${62},{160}\mathrm{M}$ data points (one data point is one speech that could be part of a longer debate). From these, 100 examples have been sampled for a small-scale human baseline check, where two annotators are asked to perform the classification task of determining the party label from the speech texts and were evaluated against the true label. Since these are debates, references to the opponent are a strong but trivial predictor of party. References to people and political parties have been removed by targeting Swedish political party stems and words tagged as "People_along_political_spectrum" in Spräkbanken's tags, based on Swedish FrameNet (Heppin and Gronostaj, 2012). Data points shorter than 50 words have been removed, as manual analysis shows these tend to be entirely procedural and do not carry political sentiment. This is in line with similar cleaning practices used for US congressional debates (Bayram et al., 2019). The data is undersampled to balance the classes and split into: train(108,169), test(12,019)and validation (2,000)sets. The validation set is used for explainability methods.
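The length filter and class balancing could look roughly as follows (the function and its interface are illustrative, not the paper's actual preprocessing code):

```python
import random

def clean_and_balance(speeches, min_words=50, seed=0):
    """Drop short, mostly procedural speeches and undersample the
    majority class so both parties are equally represented.
    `speeches` is a list of (text, party_label) pairs."""
    kept = [(t, y) for t, y in speeches if len(t.split()) >= min_words]
    by_label = {}
    for t, y in kept:
        by_label.setdefault(y, []).append((t, y))
    n = min(len(v) for v in by_label.values())  # minority class size
    rng = random.Random(seed)
    balanced = [ex for v in by_label.values() for ex in rng.sample(v, n)]
    rng.shuffle(balanced)
    return balanced
```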
http://vectors.nlpl.eu/repository/20/69.zip (the pre-trained embedding model)

${}^{2}$ https://data.riksdagen.se/data/anforanden/

${}^{3}$ https://spraakbanken.gu.se/resurser/rd-anf-1993-2018
§ 5 EXPERIMENTS

To test our methodology, we apply it to a BERT classifier trained to predict the party label of a text (Devlin et al., 2019). The classifier is fine-tuned from a pre-trained model for Swedish released by The National Library of Sweden/KBLab and available through the Hugging Face library. The model has a 50,325-word vocabulary and a maximum length of 512 tokens; longer inputs are truncated. As a baseline for investigating class differences and separability of the data, we use a logistic regression classifier, as this provides easy access to class explanations by simply looking at the top- and bottom-scoring internal weights of the model. N-gram spans from 1 to 3, as well as a combination of all, have been compared. The number of input features is 50,325, the same as for the pre-trained BERT model.
A small-scale human annotation check on 100 instances shows the two annotators performing with 58 and 56 percent accuracy, respectively. A Cohen's kappa of 0.4 indicates that this is a hard classification task.
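For reference, Cohen's kappa can be computed from the two annotators' label sequences as follows (a minimal sketch, not the paper's evaluation code):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
    return (po - pe) / (1 - pe)
```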
In the interest of space, the sections below contain partial results. The full results are available in an online appendix${}^{5}$.

§ 5.1 BASELINE
Table 1 summarises the accuracy and F1 scores for the logistic regression classifier. We observe that the best result is achieved with 1-grams, with the inclusion of 2- and 3-grams adding no performance gains. It seems the main part of the distinguishing signal can be picked up by specific words rather than phrases.
| n-gram span | #feat | acc | F1 |
|---|---|---|---|
| 1,1 | 50,325 | 76.94 | 76.80 |
| 2,2 | 50,325 | 73.19 | 73.05 |
| 3,3 | 50,325 | 69.39 | 69.15 |
| 1,3 | 150,975 | 76.93 | 76.80 |

Table 1: Logistic regression classifier performance.
From the internal model weights, we can identify that both domain-specific words - "sjuka" (sick), "arbetslösa" (unemployed), "arbetslinjen" (the employment line, a Moderate catchphrase) - and function words - "det" (the), "också" (also), "synnerhet" (in particular) - can be predictive of the party label. This is in agreement with our assumption that a model can rely both on statistical differences in stop words and on human concepts as the basis of its predictions, and in doing so it outperforms the human annotators.
§ 5.2 BERT

The BERT model${}^{6}$ achieves an accuracy of 78.44 and an F1 score of 76.66 on the test set, and an accuracy of 79.95 and an F1 score of 78.27 on the validation set, which is only a slight improvement over the logistic regression baseline.
Applying LIME to all validation samples and aggregating the top 10 features for each data point results in a list of 2,043 Moderate and 2,085 Social Democrat terms. Of these, 1,456 Moderate and 1,334 Social Democrat terms appear in more than two documents and are thus candidates for inclusion in the class explanations (this limit can be adjusted by the user).
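The aggregation and document-frequency filter can be sketched as follows (the interface is illustrative; `per_doc_top_features` holds each document's top-10 LIME terms):

```python
from collections import Counter

def candidate_terms(per_doc_top_features, min_docs=3):
    """Aggregate per-document LIME top features into class-level
    candidates, keeping only terms appearing in at least `min_docs`
    documents (i.e. in more than two, by default)."""
    df = Counter(t for feats in per_doc_top_features for t in set(feats))
    return {t for t, c in df.items() if c >= min_docs}
```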
${}^{5}$ https://github.com/anonymous-supplementary-materials/NoDaLiDa2023_Appendix

${}^{6}$ With hyperparameters: lr = 5e-6, batch size = 48, steps = 6,000.

https://huggingface.co/KB/bert-base-swedish-cased
PCA ordering:

| rank | term |
|---|---|
| 1 | utgiftsområde (expenditure area) |
| 2 | budgetpropositionen (the budget bill) |
| 3 | jobbskatteavdrag (employment tax credit) |
| 4 | arbetslöshetsförsäkringen (unemployment insurance) |
| 5 | skattehöjningar (tax increases) |
| ... | ... |
| 1454 | högkvalitativa (high quality) |
| 1455 | vackra (beautiful) |
| 1456 | klassiska (classic) |

Normalised LIME score:

| rank | term |
|---|---|
| 1 | vänsterregering (left-wing government) |
| 2 | fattigdomsbekämpning (poverty alleviation) |
| 3 | bidragsberoende (benefits dependency) |
| 4 | fridens (of peace) |
| 5 | arbetsföra (able to work) |
| ... | ... |
| 1454 | som (as) |
| 1455 | ett (one) |
| 1456 | en (one) |

Table 2: Results for the Moderates.
§ 5.3 VALIDATION

Tables 2-3 show the results of both LIME and PCA for both M and S. In both cases, the methods separate informative terms from generic ones. This is especially the case for the LIME scores, where the lowest-scoring words are all stop words. As for the highest-scoring words, we find that they are all related to taxes and employment. This is understandable, as this is also what makes up the main political left/right dimension in Sweden (Franzmann and Kaiser, 2006; Jolly et al., 2022; Ezrow et al., 2011). Besides, we can identify several references to (groups of) parties and ministers, which we would expect in debates.

While these findings are encouraging on their own, to be useful for social scientists we need to do more to ensure that our results are valid. In other words, we want to ensure that our method measures what we intend to measure (Carmines and Zeller, 1979). In our case, this is whether a speech is representative of S or M.
Looking at how appropriate the terms are, as we
PCA ordering:

| rank | term |
|---|---|
| 1 | budgetpropositionen (the budget bill) |
| 2 | arbetsmarknadspolitik (labour market policy) |
| 3 | samlingspartiet [refers to the Moderates] |
| 4 | ungdomsarbetslösheten (youth unemployment) |
| 5 | skattesänkningar (tax cuts) |
| ... | ... |
| 1332 | tillsammans (together) |
| 1333 | u (u) |
| 1334 | dam (lady) |

Normalised LIME score:

| rank | term |
|---|---|
| 1 | överläggningen (the deliberation) |
| 2 | moderatledda (Moderate-led) |
| 3 | kd (abbrev. for the Christian Democrat party) |
| 4 | skattesänkningarna (the tax cuts) |
| 5 | borgarna (the bourgeois [parties to the right]) |
| ... | ... |
| 1332 | har (have) |
| 1333 | av (of) |
| 1334 | för (for) |

Table 3: Results for the Social Democrats.
did above, is a first step. This is also known as face validity, as we check whether our method "appears to measure" what we want it to measure (Anastasi, 1976, pp. 139-140). Yet, face validity depends on many implicit decisions that vary between contexts and researchers. As such, we should look further if we wish to provide a more satisfactory validation.

One good candidate for this is construct validity (Shadish et al., 2002; Carmines and Zeller, 1979). This refers to the degree to which we can use our results to say something about what we aim to measure. One way to assess this here is to look at the wider context in which the terms used by the algorithm appear. For example, if a term used by the algorithm to assign a speech to S occurs in a context that defines S, this strengthens our case for construct validity. To see this, we can use keyword-in-context (KWIC), which looks at the $n$ (here we choose 20) words before and after the term that interests us. In Table 4 we show this for one of the terms from the PCA analysis for S - arbetsmarknadspolitik (labour market policy). Here, we see that the context of the word indeed refers to policies close to S. In both cases, the term is used to call for more and new measures to regulate the labour market - something indicative of S. Similar examples for the words in Tables 2-3 are given in the online appendix. As we have implemented KWIC in our algorithm, scholars can easily assess whether the same holds for any of the other terms and in this way better assess the validity.
"... enda åtgärd lösa detta, det behövs många åtgärder. Det handlar om ett gott företagarklimat, om en ny arbetsmarknadspolitik, om ytterligare utbildningssatsningar, om att bygga om - osv. med de förslag till åtgärder som vi ..."

"... single measure solve this, many measures are needed. It's about a good business climate, about a new labour market policy, about further training efforts, about rebuilding - etc. with the proposed measures that we ..."

"... i arbete det finns individer som kommer att behöva särskilt stöd, och då behöver vi ha en bra arbetsmarknadspolitik. Men det är förstås inget egenvärde i att ungdomar som kan få jobb ändå ska vara i en ..."

"... in work there are individuals who will need special support, and then we need to have a good labour market policy. But of course there is no intrinsic value in young people who can get a job still being in a ..."

Table 4: Keywords-in-context for the class-explanation feature labour market policy for the Social Democrats.
§ 5.4 EXPLANATIONS AND PREDICTIVE ACCURACY

Returning to individual instance explanations, we also investigate whether the kind of words (domain-specific terms or statistical distributions) occurring in an explanation has any relationship with the model's certainty on those data points. We found domain-specific words (here related to politics) along the positive PCA spectrum, while more common, general words had embeddings placing them towards the negative end. We find that data points where the explanation words are predominantly positioned within the positive PCA spectrum (the sum of the PCA coordinates of the top-ten explanation features is positive) are cases where the model is more accurate. Compared to data points where explanations lie in the negative PCA space, there is an accuracy gain of roughly 10 percent (Table 5). Interestingly, this suggests that explanations containing domain-specific, rarer words are correlated with the model's correctness, although the number of data points with domain-specific explanations is quite small.
| | Correct | Incorrect | Acc |
|---|---|---|---|
| Pos PCA sum | 186 | 25 | 88.15 |
| Neg PCA sum | 1,413 | 376 | 78.98 |

Table 5: Classifier performance on the validation set, split based on the sum of the PCA coordinates of the explanation provided by LIME.
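The split underlying this analysis can be sketched as follows (assuming per-example top explanation features and a per-term PCA score; names are illustrative):

```python
def accuracy_by_pca_sign(examples, pca_score):
    """Split validation points by the sign of the summed PCA scores of
    their top explanation features, and compute accuracy per group.
    `examples` is a list of (top_features, prediction_correct)."""
    stats = {"pos": [0, 0], "neg": [0, 0]}  # [n_correct, n_total]
    for feats, correct in examples:
        group = "pos" if sum(pca_score.get(f, 0.0) for f in feats) > 0 else "neg"
        stats[group][0] += int(correct)
        stats[group][1] += 1
    return {g: 100.0 * c / t for g, (c, t) in stats.items() if t}
```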
§ 6 COMPARISON TO SP-LIME
Our method is comparable with SP-LIME, which aggregates individual LIME explanations. SP-LIME consists of three similar steps: post-hoc extraction of instance explanations, sorting, and example extraction. In contrast to our proposed scoring functions, SP-LIME calculates the score for feature $j$ as $I_j = \sqrt{\sum_{i=1}^{N} W_{ij}}$, where $N$ is the number of data points in the explained dataset and $W$ is the explanation matrix containing the local importance of the features. Based on this scoring, SP-LIME performs a greedy search to extract the top-scoring data examples that also have the greatest coverage of distinct features. The model explanation therefore takes the form of a set number of text examples with their corresponding instance explanations, where the number of examples provided is defined by the user. Since the method performs a greedy search, the results are ordered by how much they contribute to explaining the model and how many unique features they cover.
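The selection step can be sketched as below. Note this is an illustrative reimplementation, not SP-LIME itself: we take absolute values inside the sum so that negative local weights do not cancel, and we use simple binary feature coverage.

```python
import numpy as np

def sp_lime_pick(W, budget):
    """Greedy SP-LIME-style example selection: compute a global feature
    importance I_j = sqrt(sum_i |W_ij|), then repeatedly pick the
    example that adds the most importance-weighted coverage of
    not-yet-covered features."""
    I = np.sqrt(np.abs(W).sum(axis=0))
    chosen, covered = [], np.zeros(W.shape[1], dtype=bool)
    for _ in range(min(budget, W.shape[0])):
        best, best_gain = None, -1.0
        for i in range(W.shape[0]):
            if i in chosen:
                continue
            gain = I[(np.abs(W[i]) > 0) & ~covered].sum()
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        covered |= np.abs(W[best]) > 0
    return chosen
```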
We apply SP-LIME to the BERT classifier and extract the top 20 text examples that the explainability approach considers most representative. These contain 9 S examples and 11 M examples. A selected set of instance explanations can be seen in Table 6, and the full list is available in our online appendix. We can see the overemphasis of stop words, especially in the top examples. Only a couple of the surfaced terms carry political significance, and even those lack context and have questionable generalisability. Some of the examples provided by SP-LIME (see Top 12 and Top 16 in
Rank 1 SP-LIME example (true label S): är (is), det (it/the), som (as), den (the), vi (we), Natomedlemskap (NATO membership), att (to), du (you), samlingsregeringen (the coalition government), Vi (We)

Rank 2 SP-LIME example (true label M): frågorna (the questions), protektionistiska (protectionist), önskar (wish), Det (It/The), och (and), Herr (Mr), oerhört (incredibly), handelsminister (Minister of Trade), tackar (thanks), de (the/they)

...

Rank 12 SP-LIME example (true label M): medelinkomsttagare (middle-income earner), avregleringar (deregulations), vänster (left), tvivelaktiga (questionable), skattesänkningar (tax cuts), Då (Then), och (and), Man (one/third person singular), bostadsmarknaden (the housing market), stöd (support)

...

Rank 16 SP-LIME example (true label S): borgarna (the bourgeois), oss (us), långtidsarbetslösa (long-term unemployed), klyftorna (the cleavages), det (it/the), sjuka (sick), rödgröna (red-green), Vi (We), Låt (Let), är (is)

Table 6: Explanations provided by SP-LIME. Bold features indicate words contributing towards an M classification, while italic features do the same for S. Full results are in the online appendix.
Table 6) are instances where human intuition is easier to align with. However, SP-LIME in general does not provide a way to distinguish between the two types of contributing features that the current work targets. Finally, SP-LIME also differs from our method in the way it presents texts containing explanatory features. SP-LIME tries to find texts which contain as many features as possible in one and the same text, while we choose to present many alternative contexts in which explaining feature words appear, motivated by social science use cases.
§ 7 CONCLUSION AND DISCUSSION

We have developed a new algorithm for extracting class explanations which takes the distinction between function and content words into account. It thereby provides an alternative to prior methods like SP-LIME, which mix explanations based on e.g. stop word frequency with the presence of certain domain-specific terms. Our motivation comes from the idea of human-grounded explainability: a useful explanation for a human will focus on content rather than stop words, while still being true to the model. In our case study, we demonstrated this on speeches from the Swedish parliament, with the task of explaining a binary classifier associating speeches with either of the two main parties. This is a difficult task; our human annotation experiment showed humans performing only slightly better than random, potentially because they primarily looked for clues about policy. The machine learning models performed better, as they likely also managed to identify statistical speech patterns of speakers, which we saw in explanations where e.g. stop words inevitably appear. Our algorithm can not only identify these, but also separate them from explanations containing domain-specific words hinting at policy, motivated by the needs of social scientists. Additionally, we find indications that domain-specific explanations correlate with model performance. Patterns related to policy in our experiment may be more robust than learned speech patterns of stop words, which risk being influenced by single frequent individuals in the dataset rather than capturing patterns common to a political party.
Future work will focus on systematic and extensive testing of the proposed methodology in order to evaluate it along the twelve properties proposed by Nauta et al. (2022). The focus should be on measuring the faithfulness to the underlying black-box model and correctness, as well as a larger-scale domain expert evaluation to measure how relevant and valid the explanations are (the context and coherence properties). The generalisability will also be tested by studying other domains and classification tasks.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/YTVwaoG0Mi/Initial_manuscript_md/Initial_manuscript.md
# Detection and attribution of quotes in Finnish news media: BERT vs. rule-based approach
## Abstract

We approach the problem of recognition and attribution of quotes in Finnish news media. Solving this task would create possibilities for large-scale analysis of media with respect to the presence and styles of presentation of different voices and opinions. We describe the annotation of a corpus of media texts, numbering around 1,500 articles, with quote attribution and coreference information. Further, we compare two methods for automatic quote recognition: a rule-based one operating on dependency trees, and a machine learning one built on top of the BERT language model. We conclude that BERT provides more promising results even with little training data, achieving a 95% F-score on direct quote recognition and 84% on indirect quotes. Finally, we discuss open problems and further associated tasks, especially the necessity of resolving speaker mentions to entity references.
## 1 Introduction

The recognition of quotes and reported speech is an important step towards the computational analysis of news media articles. It allows us to measure on a large scale who is given voice and how much, how opposing or competing views are presented alongside each other, as well as how the language of the quoted sources differs from the language of the journalistic reporting. In the case of the Finnish news media, such analyses have recently been attempted by (Koivunen et al., 2021; Seuri et al., 2021). On the other hand, Suomen Kuvalehti et al. (2021) have studied politicians' visibility in the media based on the mentions of their names.

In the present paper, we focus on the technical task of recognizing direct and indirect quotes in Finnish news media texts. The task can be illustrated with the following example:${}^{1}$

> Sipilän mukaan lakiehdotuksia ollaan tuomassa eduskuntaan helmikuussa.
>
> According to Sipilä, bill proposals will be brought to the parliament in February.
Such relations consist of three elements: the cue 'mukaan' ('according to') indicates an indirect quote, in which the source (Juha Sipilä, the Finnish prime minister 2015-2019) says the text referred to as the proposition, or quotation span. A complete approach for quote detection and attribution would solve the following tasks:
1. Detecting quotation spans.

2. Attributing quotation spans to the source mention in the text (which might also span multiple tokens).

3. Linking source mentions to entity identifiers (including coreference resolution and lemmatization).

We will present methods for solving tasks 1 and 2, while discussing task 3 as a subject for further work.
Most existing work on this task deals with English, while occasionally other Germanic or Romance languages have been considered. Compared to those, Finnish presents challenges due to its rich morphology and free word order. These can largely be dealt with by the advanced NLP tools that we use (either a dependency parser pipeline or BERT), but they rule out the usage of simpler pattern-based methods and remain a possible source of errors even for state-of-the-art NLP.
---

${}^{1}$ We follow Pareti (2015)'s convention of marking the quotation span in italics, the source in bold, and underlining the cue.

---
We describe the process of collecting and annotating a gold standard corpus in sec. 3. Further, in sec. 4, we describe two different automatic approaches: a rule-based one, amounting to matching certain grammatical structures in dependency-parsed text, as well as a machine learning one, which utilizes the state-of-the-art neural language model BERT. We will release the annotated corpus and both methods publicly.
Our initial intuition was that dependency parsing provides enough information to recognize quotes with simple pattern matching. Another reason to implement this approach was that it does not need training data, which was at first unavailable to us. However, the final comparison revealed that the BERT-based model outperformed the rule-based one even with little training data. The results of this experiment are described in sec. 5.
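As a toy illustration of the pattern-matching idea (not the paper's actual implementation), an 'X:n mukaan' indirect-quote cue can be matched over dependency-parsed tokens; the tuple format and the single rule below are illustrative assumptions:

```python
def mukaan_quotes(sent):
    """Toy dependency rule for 'X:n mukaan ...' indirect quotes.
    `sent` is a list of (form, lemma, head, deprel) tuples with
    1-indexed heads, as in CoNLL-U; the word that 'mukaan' attaches
    to as a case marker is taken as the quote's source."""
    quotes = []
    for form, lemma, head, deprel in sent:
        if lemma == "mukaan" and deprel == "case" and head > 0:
            quotes.append({"cue": form, "source": sent[head - 1][0]})
    return quotes
```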
## 2 Related Work
To our knowledge, the work most similar to ours has been done by Silvia Pareti and colleagues (Pareti et al., 2013; Pareti, 2015, 2016), who annotated a corpus of attribution relations for English and experimented with machine learning models for recognizing such relations. For the latter they applied classification algorithms - CRF, k-NN, logistic regression - working on data enriched with linguistic features, which was state-of-the-art in NLP at the time. However, Scheible et al. (2016) have criticized the choice of CRFs for quote detection because of the Markov assumption they make. More recently, Papay and Padó (2019) presented a neural LSTM-based model for recognizing quotations, but without attribution. Brunner et al. (2020) compare different embedding-based models (including BERT) on the task of recognizing types of speech, which include direct and indirect quotes.
|
| 104 |
+
|
| 105 |
+
As for Nordic languages, a rule-based approach for Norwegian has been presented by Salway et al. (2017). It utilizes a dependency parser and a list of speech verbs. For other languages, Quintão (2014) used a machine learning method on Portuguese news corpora, while Pouliquen et al. (2007) used a rule-based approach for multiple European languages.
Muzny et al. (2017) present a method for quote attribution. They thus start with quotation spans already recognized and perform two tasks: 1) attributing a quote to a speaker mention in the text, 2) linking the speaker mentions into entities. They use a rule-based strategy on top of tools performing dependency parsing and coreference resolution. They also released a corpus of quote attributions consisting of three novels in English.
Although not dealing exactly with quote detection, Padó et al. (2019) provide a prominent example of computational analysis of political discourse using modern NLP methods. They use various neural models (including BERT) to detect claims and attribute them to actors, with the goal of modeling the discourse as a network of relations between actors and claims. Automatic quote detection could be a useful element of such a larger system as well.
## 3 Dataset and Annotation
The annotation process consisted of two parallel tasks: marking quotations and linking together chains of co-referencing expressions denoting people, institutions and other human-like actors present in the documents. Both annotation tasks were conducted using the WebAnno platform (Eckart de Castilho et al., 2016), by which each annotator was assigned their documents and by which the annotation itself was done. The annotation guidelines were written beforehand and further developed after a test run.
The quotation detection annotation consisted of 1) marking the span in the text containing the content of the quote, 2) marking the speech act verb (if present), 3) marking the source of the quotation (if present), and 4) noting whether the quote was direct or indirect. The task was relatively straightforward, as all annotators were students with at least a minor degree in linguistics.
The project employed 10 annotators. Four of them were recruited in an earlier phase and annotated a test data set of 40 articles. After the test run, the guidelines were improved based on both inter-annotator agreement scores and feedback from the annotators, in accordance with standard linguistic annotation methodology (Artstein, 2017). The inter-annotator agreement scores (Fleiss' $\kappa$) were between 0.77 and 0.8, which we deemed sufficient to consider the annotations consistent. The workload was balanced so that the 6 other annotators who were recruited at the later stage annotated more articles to compensate for the test run. The annotators worked independently on the WebAnno platform.
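The agreement measure used for the test run can be sketched in a few lines; the category counts in the example below are invented for illustration, not taken from the annotation project.

```python
# A minimal sketch of Fleiss' kappa. table[i][j] holds the number of
# annotators who assigned item i to category j; every item must be
# rated by the same number of annotators.

def fleiss_kappa(table):
    n_items = len(table)
    n_raters = sum(table[0])
    n_cats = len(table[0])
    # proportion of all assignments falling into each category
    totals = [sum(row[j] for row in table) for j in range(n_cats)]
    p_j = [t / (n_items * n_raters) for t in totals]
    # observed per-item agreement
    P_i = [(sum(c * c for c in row) - n_raters)
           / (n_raters * (n_raters - 1)) for row in table]
    P_bar = sum(P_i) / n_items
    # chance agreement
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Four annotators, two categories; perfect agreement yields kappa = 1.
assert abs(fleiss_kappa([[4, 0], [0, 4], [4, 0]]) - 1.0) < 1e-9
```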
---
${}^{2}$ (links to repositories removed for anonymization, will be added in the published version)
The articles were sampled from a database containing the metadata for the online media sources, and the sampled lists of articles were then scraped using a web crawler (Mäkelä and Toivanen, 2021) and automatically pre-processed to CoNLL format containing lemmatization, part-of-speech and dependency annotations produced by the Turku Neural Parser (Kanerva et al., 2018). We used four sources for the articles: YLE (the Finnish national broadcasting company), Helsingin Sanomat (the most popular daily newspaper), Iltalehti (an evening tabloid) and STT (the Finnish news agency), covering different kinds of media texts with respect to length and style. The total number of articles annotated was 1,500, of which 1,460 were annotated by only one annotator at the second stage.
## 4 Methods
### 4.1 Rule-based approach
The input to the rule-based quote detection engine is text with linguistic annotations obtained from the Turku Neural Parser (Kanerva et al., 2018). The parser performs the following tasks: tokenization, lemmatization, part-of-speech and morphological tagging, and dependency parsing.
The first stage of quote recognition is recognizing syntactic structures that typically introduce a quote (Table 1). Rules 1-2 describe very common structures like 'X says that Y' and 'Y, says X', respectively. Rules 3-4 describe structures of the type 'according to X, Y' and 'in X's opinion, Y'. In such structures, the source and cue can be positioned differently relative to the proposition: before, after, or even inside it (see the example for rule 4). In the latter case, we allow annotating the cue and source as part of the proposition to avoid discontinuous propositions. Finally, rule 5 is characteristic of Finnish: it captures the construction 'says + active participle', e.g. sanoo olevansa 'says that he is', or sanoo tehneensä 'says that he did'. This construction does not use the word että 'that'.
In the rules where the cue is a verb (1, 2 and 5), the verb sanoa 'to say' can be substituted by any other speech act verb, e.g. kertoa 'to tell', korostaa 'to emphasize', kuitata 'to sum up', etc. We initially prepared a list of speech act verbs manually, then used a word2vec model to expand it with automatically generated synonyms, which were again filtered manually. The final list consisted of 73 verbs.
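The synonym-expansion step can be illustrated with a small self-contained sketch; the toy 2-d vectors and the `expand` helper below are our own illustration and stand in for a real word2vec model trained on Finnish text.

```python
# Illustrative sketch of expanding a seed list of speech act verbs with
# embedding neighbours. Toy 2-d vectors stand in for real word2vec vectors.
import math

toy_vectors = {
    "sanoa":    (0.9, 0.1),    # 'to say'
    "kertoa":   (0.85, 0.15),  # 'to tell'
    "korostaa": (0.8, 0.2),    # 'to emphasize'
    "juosta":   (0.1, 0.9),    # 'to run' (not a speech verb)
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand(seeds, vectors, threshold=0.95):
    """Propose words whose similarity to any seed exceeds the threshold."""
    candidates = set()
    for word, vec in vectors.items():
        if word in seeds:
            continue
        if any(cosine(vec, vectors[s]) >= threshold
               for s in seeds if s in vectors):
            candidates.add(word)
    return candidates
```

On the toy vectors, `expand({"sanoa"}, toy_vectors)` proposes *kertoa* and *korostaa* but not *juosta*; in the actual workflow such automatically generated candidates were then filtered manually.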
Once the source-cue-proposition triplets are recognized, the proposition texts can typically be extracted by taking the dependency subtree under the token marked as proposition. However, further post-processing is needed for quotes consisting of multiple sentences. For example, in Table 1, the example for rule 2 is clearly the last sentence of a multi-sentence quote. In order to expand the matches to multi-sentence quotes, we use two rules:
1. If the paragraph containing the match starts with a hyphen, extend the quote to the beginning of the paragraph. This is because long direct quotes are typically formatted as separate paragraphs.
2. If there is a quotation mark between the cue and the proposition head, extend the quote backwards to the matching quotation mark.
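The two expansion rules can be sketched over plain character offsets as below; the function and its arguments are a simplified illustration, not the released post-processing code (which, among other things, handles the Finnish quotation marks actually used in news text).

```python
# Sketch of the two multi-sentence expansion rules. The matched
# proposition covers paragraph[start:end]; cue_pos is the character
# offset of the cue verb. Returns (start, end, is_direct).

def expand_quote(paragraph, start, end, cue_pos):
    # Rule 1: a paragraph opening with a hyphen is a long direct quote;
    # extend the span to the beginning of the paragraph.
    if paragraph.lstrip().startswith("-"):
        return 0, end, True
    # Rule 2: a quotation mark between the proposition and the cue means
    # the quote began earlier; extend back to the opening mark.
    if '"' in paragraph[end:cue_pos]:
        opening = paragraph.find('"')
        if opening != -1:
            return opening, end, True
    # Matches fulfilling neither condition are classified as indirect.
    return start, end, False
```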
In both these cases, the quote is classified as direct, as it is marked with quotation markers. Matches that do not fulfill the above conditions are classified as indirect.
Finally, we use an additional rule to detect 'freestanding' direct quotes encompassing entire paragraphs. These do not necessarily contain a source attribution (like ', says X') because the source might already be clear from context. Thus, we classify the remaining paragraphs that either start with a hyphen or are enclosed in quotation marks as direct quotes. For the attribution we currently use a naïve strategy of attributing them to the same source as the previous quote in the text (if present). This works in many cases because the quotes usually follow a structure in which a whole-paragraph direct quote is introduced by an indirect one, like:
According to Lindberg, approximately every third pet is overweight.

- We do have a lot of work on that.
The rules from Table 1 are implemented using the spaCy library class DependencyMatcher${}^{3}$, which offers a declarative language to express the rules as well as good performance. The post-processing code is implemented in Python.
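As an illustration, a DependencyMatcher pattern in the spirit of rule 1 ('X says that Y') could be declared as below; the lemma list, dependency labels and rule name are our sketch, not the released rules.

```python
# A DependencyMatcher pattern in the spirit of rule 1: a speech act verb
# (the cue) governing a nominal subject (the source) and a clausal
# complement (the proposition). Attribute values are illustrative.
rule1 = [
    {"RIGHT_ID": "cue",
     "RIGHT_ATTRS": {"LEMMA": {"IN": ["sanoa", "kertoa", "korostaa"]}}},
    {"LEFT_ID": "cue", "REL_OP": ">",
     "RIGHT_ID": "source", "RIGHT_ATTRS": {"DEP": "nsubj"}},
    {"LEFT_ID": "cue", "REL_OP": ">",
     "RIGHT_ID": "prop", "RIGHT_ATTRS": {"DEP": "ccomp"}},
]

# Registration would look like (requires spaCy and a Finnish pipeline `nlp`):
#   from spacy.matcher import DependencyMatcher
#   matcher = DependencyMatcher(nlp.vocab)
#   matcher.add("SAYS_THAT", [rule1])
```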
${}^{3}$ https://spacy.io/api/dependencymatcher
<table><tr><td>No.</td><td>schema</td><td>example</td></tr><tr><td>1</td><td> <img src="https://cdn.noedgeai.com/019640ee-ca3a-7a47-b4fa-4aebda73092f_3.jpg?x=330&y=234&w=220&h=66&r=0"/> source cue prop</td><td>Malinen sanoo, että hän ei tule esittämään liiton hallitukselle yhdenkään sopimuksen hyväksymistä. Malinen says that he will not propose accepting even a single motion of agreement to the union's board.</td></tr><tr><td>2</td><td> <img src="https://cdn.noedgeai.com/019640ee-ca3a-7a47-b4fa-4aebda73092f_3.jpg?x=332&y=405&w=208&h=66&r=0"/> </td><td>Siksi mekin lähdimme näihin neuvotteluihin mukaan, Mäkynen sanoo. This is why we also joined these negotiations, Mäkynen says.</td></tr><tr><td>3</td><td> <img src="https://cdn.noedgeai.com/019640ee-ca3a-7a47-b4fa-4aebda73092f_3.jpg?x=314&y=570&w=397&h=160&r=0"/> </td><td>Sipilän mukaan lakiehdotuksia ollaan tuomassa eduskuntaan helmikuussa. According to Sipilä, bill proposals will be brought to the parliament in February.</td></tr><tr><td>4</td><td> <img src="https://cdn.noedgeai.com/019640ee-ca3a-7a47-b4fa-4aebda73092f_3.jpg?x=304&y=781&w=388&h=123&r=0"/> CASE: Ela</td><td>Suomen vaikeista ongelmista talous on presidentin mielestä helpompi. Of Finland's difficult problems, the economy is in the president's opinion the easier one.</td></tr><tr><td>5</td><td> <img src="https://cdn.noedgeai.com/019640ee-ca3a-7a47-b4fa-4aebda73092f_3.jpg?x=326&y=987&w=221&h=68&r=0"/> </td><td>Orpo sanoo olevansa valmis poikkeuksellisiin keinoihin ja jopa lainmuutoksiin [...]. Orpo says that he is ready for exceptional measures and even legislative changes [...].</td></tr></table>
Table 1: The manually constructed rules for detecting quote-like syntactic structures.
### 4.2 BERT model
The machine learning model is realized as two token classification heads on top of BERT, a neural language model based on the transformer architecture (Devlin et al., 2019). We use the model pre-trained on Finnish data by Virtanen et al. (2019).
The first classification head recognizes and classifies spans of quoted text (propositions). The labeling follows the IOB schema and the class label encodes whether the quote is direct or indirect, as well as the relative position of the speaker mention to the quoted text. The latter is expressed as one of the symbols +, - or =, and a number 1-4. The symbol describes whether the speaker is mentioned after (+), before (-) or inside (=) the proposition, while the number signifies which recognized entity is the speaker. For example, the class label B-DIRECT+2 denotes the beginning (B-) of a direct quote, the source of which is the second recognized entity after the quote. A special label 00 signifies that the source of the quote is not marked.
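The composition of these class labels can be sketched as follows; the helper names are ours, not from the released code.

```python
# Compose quote-layer labels of the scheme described above,
# e.g. 'B-DIRECT+2': IOB prefix, direct/indirect, speaker position
# ('+', '-' or '=') and entity index.

def quote_label(prefix, direct, position, index):
    kind = "DIRECT" if direct else "INDIRECT"
    return f"{prefix}-{kind}{position}{index}"

def span_labels(n_tokens, direct, position, index):
    """IOB labels for an n-token proposition span."""
    return ([quote_label("B", direct, position, index)]
            + [quote_label("I", direct, position, index)] * (n_tokens - 1))
```

For the nine-token proposition in Table 2, `span_labels(9, False, "+", 1)` yields B-INDIRECT+1 followed by eight I-INDIRECT+1 labels.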
The second classification head recognizes the entities, i.e. elements of coreference chains. It has just one class encoded in the IOB schema and does not perform the linking of entities into chains.
An example of sequence annotation is shown in Table 2. It shows the following sentence:
Kansainvälinen rikostuomioistuin aikoo määrätä Sudanin presidentin Omar al-Bashirin pidätettäväksi, kertoo sanomalehti New York Times.

The International Criminal Court is intending to issue an arrest warrant on Sudan's president Omar al-Bashir, the newspaper New York Times reports.
There are three entities in the sentence: 'The International Criminal Court', 'Sudan's president Omar al-Bashir' and 'the newspaper New York Times'; their annotations on the token level are encoded on the 'entity' layer. The 'quote' layer encodes an indirect quote, which is attributed to the first entity following the quote (hence, +1).
<table><tr><td>word</td><td>quote</td><td>entity</td></tr><tr><td>Kansainvälinen</td><td>B-INDIRECT+1</td><td>B</td></tr><tr><td>rikostuomioistuin</td><td>I-INDIRECT+1</td><td>I</td></tr><tr><td>aikoo</td><td>I-INDIRECT+1</td><td>O</td></tr><tr><td>määrätä</td><td>I-INDIRECT+1</td><td>O</td></tr><tr><td>Sudanin</td><td>I-INDIRECT+1</td><td>B</td></tr><tr><td>presidentin</td><td>I-INDIRECT+1</td><td>I</td></tr><tr><td>Omar</td><td>I-INDIRECT+1</td><td>I</td></tr><tr><td>al-Bashirin</td><td>I-INDIRECT+1</td><td>I</td></tr><tr><td>pidätettäväksi</td><td>I-INDIRECT+1</td><td>O</td></tr><tr><td>,</td><td>O</td><td>O</td></tr><tr><td>kertoo</td><td>O</td><td>O</td></tr><tr><td>sanomalehti</td><td>O</td><td>B</td></tr><tr><td>New</td><td>O</td><td>I</td></tr><tr><td>York</td><td>O</td><td>I</td></tr><tr><td>Times</td><td>O</td><td>I</td></tr><tr><td>.</td><td>O</td><td>O</td></tr></table>
Table 2: An example of sequence annotation for the BERT model.
<table><tr><td/><td>training</td><td>evaluation</td></tr><tr><td>articles</td><td>1,172</td><td>287</td></tr><tr><td>sentences</td><td>22,949</td><td>5,097</td></tr><tr><td>tokens</td><td>252,006</td><td>59,076</td></tr><tr><td>quotes</td><td>3,854</td><td>984</td></tr></table>
Table 3: The sizes of datasets used in experiments.
## 5 Evaluation
For the evaluation experiments we use a roughly 80-20 split of the data, taking the data provided by 2 annotators as the evaluation set and that of the remaining 8 annotators as the training set. The dataset sizes are summarized in Table 3. We compare both methods on the task of quote recognition (with and without direct/indirect classification) and attribution.
Quote detection. The results of quote span detection without taking into account the direct-indirect distinction are shown in Table 4. The direct-indirect breakdown is shown in Table 5, where misclassifications (identifying a direct quote as an indirect one or vice versa) were counted as both a false positive and a false negative. We exclude punctuation tokens from the evaluation, as especially the commas and periods on the boundaries of quotes might have been inconsistently annotated, and their inclusion in the quote is irrelevant.
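Token-level precision, recall and F1 over sets of gold and predicted quote tokens can be computed as below; this is a sketch reflecting the described setup (punctuation already excluded), not the exact evaluation script.

```python
# Precision/recall/F1 over sets of token identifiers, e.g.
# (document id, token index) pairs marked as belonging to a quote.

def prf(gold, predicted):
    tp = len(gold & predicted)  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```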
Both settings show a clear advantage of the BERT model.

<table><tr><td>method</td><td>$\mathbf{{Pr}}$</td><td>$\mathbf{{Re}}$</td><td>$\mathbf{{F1}}$</td></tr><tr><td>rule-based</td><td>.85</td><td>.78</td><td>.82</td></tr><tr><td>BERT</td><td>.92</td><td>.90</td><td>.91</td></tr></table>

Table 4: Results of quotation span detection without classification.

<table><tr><td rowspan="2">method</td><td colspan="3">indirect</td><td colspan="3">direct</td></tr><tr><td>$\mathbf{{Pr}}$</td><td>$\mathbf{{Re}}$</td><td>$\mathbf{{F1}}$</td><td>$\mathbf{{Pr}}$</td><td>$\mathbf{{Re}}$</td><td>$\mathbf{{F1}}$</td></tr><tr><td>rule-based</td><td>.75</td><td>.66</td><td>.70</td><td>.93</td><td>.86</td><td>.89</td></tr><tr><td>BERT</td><td>.84</td><td>.84</td><td>.84</td><td>.96</td><td>.94</td><td>.95</td></tr></table>

Table 5: Results of quotation span detection and direct/indirect classification.

In the case of direct quotes, the rules for recognizing them are quite rigid. Furthermore, they can suffer from paragraph segmentation errors and misplaced or incidental quotation marks (e.g. 'scare quotes'). This explains the lower recall of the rule-based method.
Indirect quotes have proven more challenging for the rule-based method as well. This can be due to a variety of reasons: missing speech act verbs, incorrectly identified quote spans based on syntactic criteria (also affected by parser, tagger and sentence segmentation errors), or uncommon structures not covered by the rules. Moreover, rule 3 ('according to') has a tendency to produce false positives, e.g. something being described 'according to the plan'.
In general, the BERT model has proven more flexible with respect to the often unpredictable nature of text data, and does not suffer from error propagation through the NLP pipeline.
Attribution. The evaluation of attribution is problematic because our dataset was not annotated with the BERT model in mind. Thus, we present it as our best attempt given the current possibilities, but recognize the need for further work in this regard.
The annotated data assigns each quote to a single token representing the mention of the quote's source in the text. If the source is represented by a longer phrase, the syntactic head (with respect to dependency parsing) of this phrase should be selected according to the annotation guidelines. On the other hand, mentions of quote sources are typically entities annotated as parts of coreference chains, and thus the entire span is marked for the purpose of coreference annotation. Thus, by combining the quote and coreference annotations, we are able to obtain a span-to-span attribution relation for most cases. The exceptions are cases in which the quoted entity is mentioned only once in the article, and thus not annotated as a coreference chain.
While the BERT model outputs sources as entity spans, the rule-based model points to a single token, the syntactic head, similarly to the gold standard annotation. In order to make the results comparable, we reduced the output of the BERT model to the first token of the span, and then evaluated a source annotation as correct if it either points to exactly the same token as the gold standard, or if it points to a token within the same coreference span. Thus, the model's ability to correctly identify the entire span is currently not evaluated, as span identification is not implemented in the rule-based method.
Table 6 presents the results of the attribution evaluation in terms of the number of gold-standard quote tokens with a correctly and incorrectly recognized source, as well as an unrecognized source. The latter case occurs if either the token is not recognized as a quote at all, or it is recognized but without identifying the source. We report the accuracy as the ratio of correctly identified tokens to all tokens.
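The accuracy figure is simply the ratio of correctly attributed tokens to all gold-standard quote tokens; the counts in the check below are those reported in Table 6.

```python
# Accuracy as reported in Table 6:
# correct / (correct + incorrect + unrecognized).

def attribution_accuracy(correct, incorrect, unrecognized):
    return correct / (correct + incorrect + unrecognized)

assert round(attribution_accuracy(7889, 774, 4996), 2) == 0.58  # rule-based
assert round(attribution_accuracy(7554, 767, 5338), 2) == 0.55  # BERT
```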
The results indicate a small advantage of the rule-based model. In both cases, the main source of errors are unrecognized annotations rather than incorrect ones. For the rule-based model this is typically due to quotes not being recognized at all (see the low recall in Table 4), while for the BERT model there is a large number of correctly identified quotes for which the source could not be found. Of the 1,990 recognized quotes, 646 (32%) are reported without a source, compared to 13% (218/1,633) for the rule-based model. The BERT model's ability to identify the source depends on the entity detection, for which the training data is incomplete (derived from coreference annotations only). Further, the model processes the text paragraph by paragraph and thus does not find a source mention that is outside of the paragraph containing the quote. These problems offer room for improvement in further work, and thus it can be expected that the BERT model will eventually outperform the rule-based one also in attribution.
## 6 Discussion and Further Work
<table><tr><td>method</td><td>cor</td><td>inc</td><td>unrec</td><td>accuracy</td></tr><tr><td>rule-based</td><td>7889</td><td>774</td><td>4996</td><td>.58</td></tr><tr><td>BERT</td><td>7554</td><td>767</td><td>5338</td><td>.55</td></tr></table>

Table 6: Results of attribution.

Although we regard the work presented in the previous sections as a complete solution to a well-delimited problem, we see some potential both for incremental improvements and for work on further related tasks, which will be addressed in the future.
Entity annotation and detection. While designing our annotation project, we did not anticipate that a machine learning quote detection model would need to also detect entities that the quotes can be attributed to. We intended the coreference annotation to be used only in the further step (entity resolution). As a result, entities that are mentioned only once were not annotated. The corpus could be improved by ensuring that at least tokens assigned as the source of a quote are also annotated as an entity. This is expected to improve the BERT model's performance on entity detection, and thus quote attribution.
Entity resolution. While some works treat the problem of quote attribution to a speaker mention in the text and entity resolution jointly (e.g. Muzny et al. 2017), in our opinion entity resolution is a complex task that is best treated separately. In addition to coreference resolution within one document, matching the entities across documents could also be considered there.
Coreference resolution can be done with BERT with state-of-the-art accuracy (Joshi et al., 2019). However, the setup is complicated, as coreferences are typically long-range relations, so a sliding window approach needs to be used to mitigate BERT's input length limitation. Furthermore, modeling relations with a neural model is not straightforward.
A related problem is that nested entities are possible and might be relevant, e.g.:
[[[Viron] metallityöväen liiton] puheenjohtaja Endel Soon]

[[[Estonia]'s metal workers' union]'s chairman Endel Soon]
In such cases, coreferences and other quotes might also refer to the inner entities 'Estonia' or 'Estonia's metal workers' union'. For the present work, we disregarded nested entities, as locally the outermost entity is typically the source of the quote it stands next to.
## 7 Conclusion
We have presented two methods for the recognition of quotes in Finnish news media, along with an annotated corpus for training and evaluation. To our knowledge, our solution is the first one proposed for Finnish. We hope that the progress achieved on this task will facilitate more detailed large-scale quantitative analysis of voices in the Finnish news media.
## References
Ron Artstein. 2017. Handbook of Linguistic Annotation, chapter Inter-annotator agreement.
Annelen Brunner, Ngoc Duyen Tanja Tu, Lukas Weimer, and Fotis Jannidis. 2020. To BERT or not to BERT - comparing contextual embeddings in a deep learning architecture for the automatic recognition of four types of speech, thought and writing representation. In SwissText/KONVENS.
Richard Eckart de Castilho, Éva Mújdricza-Maydt, Seid Muhie Yimam, Silvana Hartmann, Iryna Gurevych, Anette Frank, and Chris Biemann. 2016. A Web-based Tool for the Integrated Annotation of Semantic and Syntactic Structures. In Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH), pages 76-84, Osaka, Japan.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Mandar Joshi, Omer Levy, Daniel S. Weld, and Luke Zettlemoyer. 2019. BERT for coreference resolution: Baselines and analysis. In EMNLP 2019.
Jenna Kanerva, Filip Ginter, Niko Miekka, Akseli Leino, and Tapio Salakoski. 2018. Turku neural parser pipeline: An end-to-end system for the CoNLL 2018 shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics.
Anu Koivunen, Antti Kanner, Maciej Janicki, Auli Harju, Julius Hokkanen, and Eetu Mäkelä. 2021. Emotive, evaluative, epistemic: a linguistic analysis of affectivity in news journalism. Journalism, 22(5):1190-1206.
Grace Muzny, Michael Fang, Angel X. Chang, and Dan Jurafsky. 2017. A two-stage sieve approach for quote attribution.
Eetu Mäkelä and Pihla Toivanen. 2021. Finnish media scrapers. Journal of Open Source Software, 6(68):3504.
Sebastian Padó, André Blessing, Nico Blokker, Erenay Dayanik, Sebastian Haunss, and Jonas Kuhn. 2019. Who sides with whom? Towards computational construction of discourse networks for political debates. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2841-2847.
Sean Papay and Sebastian Padó. 2019. Quotation detection and classification with a corpus-agnostic model. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 888-894, Varna, Bulgaria. INCOMA Ltd.
Silvia Pareti. 2015. Attribution: A Computational Approach. Ph.D. thesis, University of Edinburgh.
Silvia Pareti. 2016. PARC 3.0: A corpus of attribution relations. In LREC.
Silvia Pareti, Tim O'Keefe, Ioannis Konstas, James R. Curran, and Irena Koprinska. 2013. Automatically detecting and attributing indirect quotations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 989-999.
Bruno Pouliquen, Ralf Steinberger, and Clive Best. 2007. Automatic detection of quotations in multilingual news. In Proceedings of Recent Advances in Natural Language Processing, pages 487-492, Borovets, Bulgaria.
Marta Quintão. 2014. Quotation attribution for Portuguese news corpora.
Andrew Salway, Paul Meurer, Knut Hofland, and Øystein Reigem. 2017. Quote extraction and attribution from Norwegian newspapers. In Proceedings of the 21st Nordic Conference of Computational Linguistics, pages 293-297, Gothenburg, Sweden.
Christian Scheible, Roman Klinger, and Sebastian Padó. 2016. Model architectures for quotation detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1736-1745.
Olli Seuri, Riikka Era, Anu Koivunen, Maciej Janicki, Pihla Toivanen, Julius Hokkanen, and Eetu Mäkelä. 2021. Uutisvuon hallitsija: Uutismedia kiky-kamppailussa 2015-2016. Politiikka: Valtiotieteellisen yhdistyksen julkaisu, 63(3):233-259.
Suomen Kuvalehti, Eetu Mäkelä, and Pihla Toivanen. 2021. Vuosi valokeilassa: Kuka sai medialta huomiota? Kuka jäi varjoon? Suomen Kuvalehti selvitti tutkijoiden kanssa, miten kansanedustajat näkyivät neljässä suuressa uutismediassa vuonna 2020.
Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish.
|
| 496 |
+
|
| 497 |
+
703
|
| 498 |
+
|
| 499 |
+
704
|
| 500 |
+
|
| 501 |
+
705
|
| 502 |
+
|
| 503 |
+
706
|
| 504 |
+
|
| 505 |
+
708
|
| 506 |
+
|
| 507 |
+
710
|
| 508 |
+
|
| 509 |
+
713
|
| 510 |
+
|
| 511 |
+
715
|
| 512 |
+
|
| 513 |
+
718
|
| 514 |
+
|
| 515 |
+
720
|
| 516 |
+
|
| 517 |
+
740
|
| 518 |
+
|
| 519 |
+
745
|
| 520 |
+
|
| 521 |
+
750
|
| 522 |
+
|
| 523 |
+
755
|
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/YTVwaoG0Mi/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,563 @@
§ DETECTION AND ATTRIBUTION OF QUOTES IN FINNISH NEWS MEDIA: BERT VS. RULE-BASED APPROACH
§ ABSTRACT
We approach the problem of recognition and attribution of quotes in Finnish news media. Solving this task would create possibilities for large-scale analysis of media wrt. the presence and styles of presentation of different voices and opinions. We describe the annotation of a corpus of media texts, numbering around 1500 articles, with quote attribution and coreference information. Further, we compare two methods for automatic quote recognition: a rule-based one operating on dependency trees and a machine learning one built on top of the BERT language model. We conclude that BERT provides more promising results even with little training data, achieving 95% F-score on direct quote recognition and 84% for indirect quotes. Finally, we discuss open problems and further associated tasks, especially the necessity of resolving speaker mentions to entity references.
§ 1 INTRODUCTION
The recognition of quotes and reported speech is an important step towards the computational analysis of news media articles. It allows us to measure, on a large scale, who is given voice and how much, how opposing or competing views are presented alongside each other, as well as how the language of the quoted sources differs from the language of the journalistic reporting. In the case of the Finnish news media, such analyses have recently been attempted by Koivunen et al. (2021) and Seuri et al. (2021). On the other hand, Suomen Kuvalehti et al. (2021) have studied politicians' visibility in the media based on the mentions of their names.
In the present paper, we focus on the technical task of recognizing direct and indirect quotes in Finnish news media texts. The task can be illustrated with the following example:

> Sipilän mukaan lakiehdotuksia ollaan tuomassa eduskuntaan helmikuussa.
> 'According to Sipilä, bill proposals will be brought to the parliament in February.'
Such a relation consists of three elements: the cue 'mukaan' ('according to') indicates an indirect quote, in which the source (Juha Sipilä, the Finnish prime minister 2015-2019) says the text referred to as the proposition, or quotation span. A complete approach to quote detection and attribution would solve the following tasks:

1. Detecting quotation spans.

2. Attributing quotation spans to the source mention in the text (which might also span multiple tokens).

3. Linking source mentions to entity identifiers (including coreference resolution and lemmatization).

We will present methods for solving tasks 1 and 2, while discussing task 3 as a subject for further work.
Most existing work on this task deals with English, while occasionally other Germanic or Romance languages have been considered. Compared to those, Finnish presents challenges due to its rich morphology and free word order. These can largely be dealt with by the advanced NLP tools that we use (either a dependency parser pipeline or BERT), but they rule out the usage of simpler pattern-based methods and remain a possible source of errors even for state-of-the-art NLP.
¹ We follow Pareti (2015)'s convention of marking the quotation span in cursive, the source in bold, and underlining the cue.
We describe the process of collecting and annotating a gold standard corpus in sec. 3. Further, in sec. 4, we describe two different automatic approaches: a rule-based one, amounting to matching certain grammatical structures in dependency-parsed text, as well as a machine learning one, which utilizes the state-of-the-art neural language model BERT. We will release the annotated corpus and both methods publicly.
Our initial intuition was that dependency parsing provides enough information to recognize quotes with simple pattern matching. Another reason to implement this approach was that it did not need training data, which was at first unavailable to us. However, the final comparison revealed that the BERT-based model outperformed the rule-based one even with little training data. The results of this experiment are described in sec. 5.
§ 2 RELATED WORK
To our knowledge, the most similar work to ours has been done by Silvia Pareti and colleagues (Pareti et al., 2013; Pareti, 2015, 2016), who annotated a corpus of attribution relations for English and experimented with machine learning models for recognizing such relations. For the latter they applied classification algorithms (CRF, k-NN, logistic regression) working on data enriched with linguistic features, which was state-of-the-art in NLP at the time. However, Scheible et al. (2016) have criticized the choice of CRFs for quote detection because of the Markov assumption they make. More recently, Papay and Padó (2019) presented a neural LSTM-based model for recognizing quotations, but without attribution. Brunner et al. (2020) compare different embedding-based models (including BERT) on the task of recognizing types of speech, which include direct and indirect quotes.
As to Nordic languages, a rule-based approach for Norwegian has been presented by Salway et al. (2017). It utilizes a dependency parser and a list of speech verbs. For other languages, Quintão (2014) used a machine learning method on Portuguese news corpora, while Pouliquen et al. (2007) used a rule-based approach for multiple European languages.
Muzny et al. (2017) present a method for quote attribution. They thus start with quotation spans already recognized and perform two tasks: 1) attributing a quote to a speaker mention in the text, 2) linking the speaker mentions into entities. They use a rule-based strategy on top of tools performing dependency parsing and coreference resolution. They also released a corpus of quote attributions consisting of three novels in English.
Although not dealing exactly with quote detection, Padó et al. (2019) provide a prominent example of computational analysis of political discourse using modern NLP methods. They use various neural models (including BERT) to detect claims and attribute them to actors, with the goal of modeling the discourse as a network of relations between actors and claims. Automatic quote detection could be a useful element of such a larger system as well.
§ 3 DATASET AND ANNOTATION
The annotation process consisted of two parallel tasks: marking quotations and linking together chains of co-referencing expressions denoting people, institutions and other human-like actors present in the documents. Both annotation tasks were conducted using the WebAnno platform (Eckart de Castilho et al., 2016), by which each annotator was assigned their documents and by which the annotation itself was done. The annotation guidelines were written beforehand and further developed after a test run.
The quotation detection annotation consisted of 1) marking the span in the text containing the content of the quote, 2) marking the speech act verb (if present), 3) marking the source of the quotation (if present), and 4) noting whether the quote was direct or indirect. The task was relatively straightforward, as all annotators were students with at least a minor degree in linguistics.
The project employed 10 annotators. Four of them were recruited in an earlier phase and annotated a test data set of 40 articles. After the test run, the guidelines were improved based on both inter-annotator agreement scores and feedback from the annotators, in accordance with the standard linguistic annotation methodology (Artstein, 2017). The inter-annotator agreement scores (Fleiss' $\kappa$) were between 0.77-0.8, which we deemed sufficient to consider the annotations consistent. The workload was balanced so that the 6 other annotators who were recruited at the later stage annotated more articles to compensate for the test run. The annotators worked independently on the WebAnno platform.
² (links to repositories removed for anonymization, will be added in the published version)
The articles were sampled from a database containing the metadata for the online media sources, and the sampled lists of articles were then scraped using a web crawler (Mäkelä and Toivanen, 2021) and automatically pre-processed to CoNLL format containing lemmatization, part-of-speech and dependency taggings using the Turku Neural Parser (Kanerva et al., 2018). We used four sources for the articles: YLE (the Finnish national broadcasting company), Helsingin Sanomat (the most popular daily newspaper), Iltalehti (an evening tabloid) and STT (the Finnish news agency), covering different kinds of media texts wrt. length and style. The total number of articles annotated was 1500, of which 1460 were annotated by only one annotator at the second stage.
§ 4 METHODS
§ 4.1 RULE-BASED APPROACH
The input to the rule-based quote detection engine is text with linguistic annotations obtained from the Turku Neural Parser (Kanerva et al., 2018). The parser performs the following tasks: tokenization, lemmatization, part-of-speech and morphological tagging, and dependency parsing.
The first stage of quote recognition is recognizing syntactic structures that typically introduce a quote (Table 1). Rules 1-2 describe the very common structures 'X says that Y' and 'Y, says X', respectively. Rules 3-4 describe structures of the type 'according to X, Y' and 'in X's opinion, Y'. In such structures, the source and cue can be positioned differently relative to the proposition: before, after, or even inside it (see the example for rule 4). In the latter case, we allow annotating the cue and source as part of the proposition to avoid discontinuous propositions. Finally, rule 5 is characteristic for Finnish: it captures the construction 'says + active participle', e.g. sanoo olevansa 'says that he is', or sanoo tehneensä 'says that he did'. This construction does not use the word että 'that'.
In the rules where the cue is a verb (1, 2 and 5), the verb sanoa 'to say' can be substituted by any other speech act verb, e.g. kertoa 'to tell', korostaa 'to emphasize', kuitata 'to sum up' etc. We initially prepared a list of speech act verbs manually, then used a word2vec model to expand it with automatically generated synonyms, which were again filtered manually. The final list consisted of 73 verbs.
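To illustrate how such a rule operates over a dependency parse, the following stdlib-only sketch implements the shape of rule 1 ('X says that Y'): a speech-verb cue with an nsubj child (source) and a ccomp child (proposition head). The token representation, the three-verb subset and the hand-made parse are our own simplifications for illustration, not the released implementation.

```python
# Rule 1 sketch: find (source, cue, proposition) triplets in a parsed
# sentence. Field names follow CoNLL-U conventions (id, lemma, head, deprel).
SPEECH_VERB_LEMMAS = {"sanoa", "kertoa", "korostaa"}  # subset of the 73 verbs

def match_rule1(tokens):
    """tokens: list of dicts with 'id', 'lemma', 'head', 'deprel'.
    Returns (source_id, cue_id, proposition_head_id) triplets."""
    triplets = []
    for cue in tokens:
        if cue["lemma"] not in SPEECH_VERB_LEMMAS:
            continue
        children = [t for t in tokens if t["head"] == cue["id"]]
        sources = [t for t in children if t["deprel"] == "nsubj"]
        props = [t for t in children if t["deprel"] == "ccomp"]
        for src in sources:
            for prop in props:
                triplets.append((src["id"], cue["id"], prop["id"]))
    return triplets

# Hand-made, simplified parse of "Malinen sanoo, että hän ei tule esittämään ..."
parse = [
    {"id": 1, "lemma": "Malinen", "head": 2, "deprel": "nsubj"},
    {"id": 2, "lemma": "sanoa", "head": 0, "deprel": "root"},
    {"id": 3, "lemma": "että", "head": 4, "deprel": "mark"},
    {"id": 4, "lemma": "esittää", "head": 2, "deprel": "ccomp"},
]
print(match_rule1(parse))  # [(1, 2, 4)]
```

The proposition text would then be extracted as the dependency subtree under token 4, as described below.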
Once the source-cue-proposition triplets are recognized, the proposition texts can typically be extracted by taking the dependency subtree under the token marked as the proposition. However, further post-processing is needed for quotes consisting of multiple sentences. For example, in Table 1 the example for rule 2 is clearly the last sentence of a multi-sentence quote. In order to expand the matches to multi-sentence quotes, we use two rules:

1. If the paragraph containing the match starts with a hyphen, extend the quote to the beginning of the paragraph. This is because long direct quotes are typically formatted as separate paragraphs.

2. If there is a quotation mark between the cue and the proposition head, extend the quote backwards to the matching quotation mark.
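The second expansion rule can be sketched as follows, over a flat token list; this representation and the example tokens are hypothetical, not the released post-processing code.

```python
# Post-processing rule 2 sketch: if a quotation mark sits between the
# proposition head and the cue, extend the quote span backwards to the
# matching (opening) quotation mark.
def extend_quote(tokens, prop_head, cue, quote_char='"'):
    lo, hi = sorted((prop_head, cue))
    closers = [i for i in range(lo, hi) if tokens[i] == quote_char]
    if not closers:
        return prop_head  # nothing to extend, keep original start
    # scan backwards from the closing mark for its opening counterpart
    for i in range(closers[0] - 1, -1, -1):
        if tokens[i] == quote_char:
            return i
    return prop_head

# '"Hyvä idea. Teemme sen", sanoo X' -- the match on the second sentence
# is extended back to the opening quotation mark at index 0.
tokens = ['"', 'Hyvä', 'idea', '.', 'Teemme', 'sen', '"', ',', 'sanoo', 'X']
print(extend_quote(tokens, 5, 8))  # 0
```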
In both these cases, the quote is classified as direct, as it is marked with quotation marks. Matches that do not fulfill the above conditions are classified as indirect.
Finally, we use an additional rule to detect 'freestanding' direct quotes encompassing entire paragraphs. These do not necessarily contain a source attribution (like ', says X') because the source might already be clear from the context. Thus, we detect the remaining paragraphs that either start with a hyphen or are enclosed in quotation marks as direct quotes. For the attribution, we currently use a naïve strategy of attributing them to the same source as the previous quote in the text (if present). This works in many cases because the quotes usually follow a structure in which a whole-paragraph direct quote is introduced by an indirect one, like:

> According to Lindberg, approximately every third pet is overweight.
> - We do have a lot of work on that.
The rules from Table 1 are implemented using the spaCy library class DependencyMatcher³, which offers a declarative language to express the rules and good performance. The post-processing code is implemented in Python.
³ https://spacy.io/api/dependencymatcher
| No. | example |
|---|---|
| 1 | Malinen sanoo, että hän ei tule esittämään liiton hallitukselle yhdenkään sopimuksen hyväksymistä. 'Malinen says that he will not propose accepting even a single motion of agreement to the union's board.' |
| 2 | Siksi mekin lähdimme näihin neuvotteluihin mukaan, Mäkynen sanoo. 'This is why we also joined these negotiations, Mäkynen says.' |
| 3 | Sipilän mukaan lakiehdotuksia ollaan tuomassa eduskuntaan helmikuussa. 'According to Sipilä, bill proposals will be brought to the parliament in February.' |
| 4 | Suomen vaikeista ongelmista talous on presidentin mielestä helpompi. 'Of Finland's difficult problems, the economy is, in the president's opinion, the easier one.' |
| 5 | Orpo sanoo olevansa valmis poikkeuksellisiin keinoihin ja jopa lainmuutoksiin [...]. 'Orpo says that he is ready for exceptional measures and even legislative changes [...].' |

Table 1: The manually constructed rules for detecting quote-like syntactic structures (the dependency-tree schemas shown as graphics in the original are omitted here).
§ 4.2 BERT MODEL
The machine learning model is realized as two token classification heads on top of BERT, a neural language model based on the transformer architecture (Devlin et al., 2019). We use the model pre-trained on Finnish data by Virtanen et al. (2019).

The first classification head recognizes and classifies spans of quoted text (propositions). The labeling follows the IOB schema, and the class label encodes whether the quote is direct or indirect, as well as the relative position of the speaker mention to the quoted text. The latter is expressed as one of the symbols +, - or = and a number 1-4. The symbol describes whether the speaker is mentioned after (+), before (-) or inside (=) the proposition, while the number signifies which recognized entity is the speaker. For example, the class label B-DIRECT+2 denotes the beginning (B-) of a direct quote, the source of which is the second recognized entity after the quote. A special label 0 signifies that the source of the quote is not marked.
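The composite class labels can be unpacked mechanically; the following sketch (an assumed helper, not the paper's code) parses them into their components, under the assumption that the no-source case is encoded as a 0 suffix.

```python
# Parse labels like "B-DIRECT+2": IOB prefix, quote type, speaker-position
# symbol (+ / - / =) and entity index 1-4, or a 0 suffix for "no source".
import re

LABEL_RE = re.compile(r"^([BI])-(DIRECT|INDIRECT)(?:([+=-])([1-4])|0)$")

def parse_label(label):
    """Return (iob, quote_type, position_symbol, entity_index), or None for O."""
    if label == "O":
        return None
    m = LABEL_RE.match(label)
    if not m:
        raise ValueError(f"unexpected label: {label}")
    iob, qtype, symbol, index = m.groups()
    return (iob, qtype, symbol, int(index) if index else None)

print(parse_label("B-DIRECT+2"))   # ('B', 'DIRECT', '+', 2)
print(parse_label("I-INDIRECT-1"))  # ('I', 'INDIRECT', '-', 1)
```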
The second classification head recognizes the entities, i.e. elements of coreference chains. It has just one class encoded in the IOB schema and does not perform the linking of entities into chains.
An example of sequence annotation is shown in Table 2. It shows the following sentence:

> Kansainvälinen rikostuomioistuin aikoo määrätä Sudanin presidentin Omar al-Bashirin pidätettäväksi, kertoo sanomalehti New York Times.
> 'The International Criminal Court is intending to issue an arrest warrant on Sudan's president Omar al-Bashir, the newspaper New York Times reports.'

There are three entities in the sentence: 'The International Criminal Court', 'Sudan's president Omar al-Bashir' and 'the newspaper New York Times'; their annotations on the token level are encoded on the 'entity' layer. The 'quote' layer encodes an indirect quote, which is attributed to the first entity following the quote (hence, +1).
| word | quote | entity |
|---|---|---|
| Kansainvälinen | B-INDIRECT+1 | B |
| rikostuomioistuin | I-INDIRECT+1 | I |
| aikoo | I-INDIRECT+1 | O |
| määrätä | I-INDIRECT+1 | O |
| Sudanin | I-INDIRECT+1 | B |
| presidentin | I-INDIRECT+1 | I |
| Omar | I-INDIRECT+1 | I |
| al-Bashirin | I-INDIRECT+1 | I |
| pidätettäväksi | I-INDIRECT+1 | O |
| , | O | O |
| kertoo | O | O |
| sanomalehti | O | B |
| New | O | I |
| York | O | I |
| Times | O | I |
| . | O | O |

Table 2: An example of sequence annotation for the BERT model.
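Resolving a relative label such as +1 against the entity layer amounts to counting entity beginnings after the quote. The sketch below (our illustration, not the paper's code) shows this decoding on the token-level tags of the sentence above.

```python
# Resolve "the k-th entity after the quote" from IOB entity tags.
def resolve_source(entity_tags, quote_end, offset):
    """Return the index of the token starting the `offset`-th entity
    after token index `quote_end`, or None if there is no such entity."""
    count = 0
    for i in range(quote_end + 1, len(entity_tags)):
        if entity_tags[i] == "B":
            count += 1
            if count == offset:
                return i
    return None

# Entity tags of the Table 2 sentence; the quote ends at token 8
# ('pidätettäväksi'), and the +1 label resolves to token 11 ('sanomalehti').
tags = ["B", "I", "O", "O", "B", "I", "I", "I",
        "O", "O", "O", "B", "I", "I", "I", "O"]
print(resolve_source(tags, 8, 1))  # 11
```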
| | training | evaluation |
|---|---|---|
| articles | 1,172 | 287 |
| sentences | 22,949 | 5,097 |
| tokens | 252,006 | 59,076 |
| quotes | 3,854 | 984 |

Table 3: The sizes of datasets used in experiments.
§ 5 EVALUATION
For the evaluation experiments we use a roughly 80-20 split of the data, taking the data provided by 2 annotators as the evaluation set and that of the remaining 8 annotators as the training set. The dataset sizes are summarized in Table 3. We compare both methods on the task of quote recognition (with and without direct/indirect classification) and attribution.
Quote detection. The results of quote span detection without taking into account the direct-indirect distinction are shown in Table 4. On the other hand, the direct-indirect breakdown is shown in Table 5, where misclassifications (identifying a direct quote as an indirect one or vice versa) were counted as both a false positive and a false negative. We exclude punctuation tokens from the evaluation as especially the commas and periods on the boundaries of quotes might have been inconsistently annotated, and their inclusion in the quote is irrelevant.
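The scoring scheme described above can be sketched in a few lines of token-level code; the data and label names below are hypothetical, and the function is our illustration of the counting rules, not the evaluation script used in the paper.

```python
# Token-level precision/recall/F1 where a direct/indirect misclassification
# counts as both a false positive and a false negative.
def prf(gold, pred):
    tp = sum(1 for g, p in zip(gold, pred) if g != "O" and g == p)
    fp = sum(1 for g, p in zip(gold, pred) if p != "O" and g != p)
    fn = sum(1 for g, p in zip(gold, pred) if g != "O" and g != p)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = ["DIR", "DIR", "O", "IND", "IND"]
pred = ["DIR", "IND", "O", "IND", "O"]
# token 2 is misclassified (DIR as IND): counted as both FP and FN
print([round(x, 2) for x in prf(gold, pred)])  # [0.67, 0.5, 0.57]
```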
| method | Pr | Re | F1 |
|---|---|---|---|
| rule-based | .85 | .78 | .82 |
| BERT | .92 | .90 | .91 |

Table 4: Results of quotation span detection without classification.

| method | Pr (ind.) | Re (ind.) | F1 (ind.) | Pr (dir.) | Re (dir.) | F1 (dir.) |
|---|---|---|---|---|---|---|
| rule-based | .75 | .66 | .70 | .93 | .86 | .89 |
| BERT | .84 | .84 | .84 | .96 | .94 | .95 |

Table 5: Results of quotation span detection and direct/indirect classification.

Both settings show a clear advantage of the BERT model. In the case of direct quotes, the rules for recognizing them are quite rigid. Furthermore, they can suffer from paragraph segmentation errors and misplaced or incidental quotation marks (e.g. 'scare quotes'). This explains the lower recall of the rule-based method.
Indirect quotes have proven more challenging for the rule-based method as well. This can be due to a variety of reasons: missing speech act verbs, incorrectly identified quote spans based on syntactic criteria (also affected by parser, tagger and sentence segmentation errors), or uncommon structures not covered by the rules. Moreover, rule 3 ('according to') has a tendency to produce false positives, e.g. something being described 'according to the plan'.
In general, the BERT model has proven more flexible wrt. the often unpredictable nature of text data, and does not suffer from error propagation through the NLP pipeline.
Attribution. The evaluation of attribution is problematic because our dataset was not annotated with the BERT model in mind. Thus, we present it as our best attempt given the current possibilities, but recognize the need for further work in this regard.
The annotated data assigns each quote to a single token representing the mention of the quote's source in the text. If the source is represented by a longer phrase, the syntactic head (wrt. dependency parsing) of this phrase should be selected according to the annotation guidelines. On the other hand, mentions of quote sources are typically entities annotated as parts of coreference chains, and thus the entire span is marked for the purpose of coreference annotation. Thus, by combining the quote and coreference annotations, we are able to obtain a span-to-span attribution relation for most cases. The exceptions are cases in which the quoted entity is mentioned only once in the article, and thus not annotated as a coreference chain.
Although the BERT model outputs sources as entity spans, the rule-based model points to a single token, the syntactic head, similarly to the gold standard annotation. In order to make the results comparable, we reduced the output of the BERT model to the first token of the span, and then evaluated a source annotation as correct if it either points to exactly the same token as the gold standard, or if it points to a token within the same coreference span. Thus, the model's ability to correctly identify the entire span is currently not evaluated, as it is not implemented in the rule-based method.
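The matching criterion can be stated compactly; the span representation below (inclusive token indices) is our assumption for illustration, not the paper's data format.

```python
# A predicted source counts as correct if it points at the gold head token
# or at any token inside the gold coreference span.
def source_correct(pred_first_token, gold_head, gold_coref_span):
    lo, hi = gold_coref_span  # inclusive token indices (assumed encoding)
    return pred_first_token == gold_head or lo <= pred_first_token <= hi

print(source_correct(12, 14, (12, 15)))  # True: inside the coreference span
print(source_correct(7, 14, (12, 15)))   # False: outside span, wrong head
```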
Table 6 presents the results of the attribution evaluation in terms of the number of gold-standard quote tokens with a correctly or incorrectly recognized source, as well as an unrecognized source. The latter case occurs if either the token is not recognized as a quote at all, or it is recognized but without identifying the source. We report the accuracy as the ratio of correctly identified tokens to all tokens.
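The accuracy figures in Table 6 follow directly from this definition, as the short check below shows (our illustration of the arithmetic, using the counts from the table).

```python
# Accuracy = correct / (correct + incorrect + unrecognized) quote tokens.
def attribution_accuracy(correct, incorrect, unrecognized):
    return correct / (correct + incorrect + unrecognized)

print(round(attribution_accuracy(7889, 774, 4996), 2))  # rule-based: 0.58
print(round(attribution_accuracy(7554, 767, 5338), 2))  # BERT: 0.55
```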
The results indicate a small advantage of the rule-based model. In both cases, the main source of errors is unrecognized annotations rather than incorrect ones. For the rule-based model this is typically due to quotes not being recognized at all (see the low recall in Table 4), while for the BERT model there is a large number of correctly identified quotes for which the source could not be found. Of the 1990 recognized quotes, 646 (32%) are reported without a source, compared to 13% (218/1633) for the rule-based model. The BERT model's ability to identify the source depends on entity detection, for which the training data is incomplete (derived from coreference annotations only). Further, the model processes the text paragraph by paragraph and thus does not find a source mention that is outside of the paragraph containing the quote. These problems offer room for improvement in further work, and thus it can be expected that the BERT model will eventually outperform the rule-based one also in attribution.
§ 6 DISCUSSION AND FURTHER WORK
| method | cor | inc | unrec | accuracy |
|---|---|---|---|---|
| rule-based | 7889 | 774 | 4996 | .58 |
| BERT | 7554 | 767 | 5338 | .55 |

Table 6: Results of attribution.

Although we regard the work presented in the previous sections as a complete solution to a well-delimited problem, we see some potential both for incremental improvements and for work on further related tasks, which will be addressed in the future.
Entity annotation and detection. While designing our annotation project, we did not anticipate that a machine learning quote detection model would also need to detect the entities that the quotes can be attributed to. We intended the coreference annotation to be used only in the further step (entity resolution). As a result, entities that are mentioned only once were not annotated. The corpus could be improved by ensuring that at least tokens assigned as the source of a quote are also annotated as an entity. This is expected to improve the BERT model's performance on entity detection, and thus quote attribution.
Entity resolution. While some works treat the problem of quote attribution to a speaker mention in the text and entity resolution jointly (e.g. Muzny et al. 2017), in our opinion entity resolution is a complex task that is best treated separately. In addition to coreference resolution within one document, matching the entities across documents could also be considered there.
|
| 540 |
+
|
| 541 |
+
Coreference resolution can be done with BERT 627 with state-of-the-art accuracy (Joshi et al., 2019). However, the setup is complicated as coreferences
|
| 542 |
+
|
| 543 |
+
are typically long-range relations, so a sliding win- 630 dow approach needs to be used to mitigate BERT's
|
| 544 |
+
|
| 545 |
+
limitation in text size. Furthermore, modeling re- 632 lations with a neural model is not straightforward.
|
| 546 |
+
|
| 547 |
+
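The sliding-window workaround mentioned here can be sketched as follows; the helper name, window size and stride are illustrative assumptions, not taken from any cited implementation.

```python
def sliding_windows(tokens, window=512, stride=256):
    """Split a long token sequence into overlapping windows so that each
    window fits a BERT-style model's input limit; the overlap lets a
    long-range coreference link fall inside at least one window."""
    if len(tokens) <= window:
        return [tokens]
    out = []
    start = 0
    while start < len(tokens):
        out.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break  # last window already reaches the end of the sequence
        start += stride
    return out

chunks = sliding_windows(list(range(1000)), window=512, stride=256)
print([len(c) for c in chunks])  # [512, 512, 488]
```

Predictions from overlapping windows then have to be merged, e.g. by preferring the window in which both mentions are farthest from the window edges.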
A related problem is that nested entities are possible and might be relevant, e.g.:

[[[Viron] metallityöväen liiton] puheenjohtaja Endel Soon]

[[[Estonia]'s metal workers' union]'s chairman Endel Soon]

In such cases, coreferences and other quotes might also refer to the inner entities 'Estonia' or 'Estonia's metal workers' union'. For the present work, we disregarded nested entities, as locally the outermost entity is typically the source of the quote it stands next to.
§ 7 CONCLUSION

We have presented two methods for recognition of quotes in Finnish news media, along with an annotated corpus for training and evaluation. To our knowledge, our solution is the first one proposed for Finnish. We hope that the progress achieved on this task will facilitate more detailed large-scale quantitative analysis of voices in the Finnish news media.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_md/Initial_manuscript.md
# Probing structural constraints of negation in Pretrained Language Models

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract

Contradictory results about the encoding of the semantic impact of negation in pretrained language models (PLMs) have been drawn recently (e.g. Kassner and Schütze (2020); Gubelmann and Handschuh (2022)). In this paper we focus rather on the way PLMs encode negation and its formal impact, through the phenomenon of Negative Polarity Item (NPI) licensing in English. More precisely, we use probes to identify which contextual representations best encode 1) the presence of negation in a sentence, and 2) the polarity of a neighboring masked polarity item. We find that contextual representations of tokens inside the negation scope do allow for (i) a better prediction of the presence of not compared to those outside the scope and (ii) a better prediction of the right polarity of a masked polarity item licensed by not, although the magnitude of the difference varies from PLM to PLM. Importantly, in both cases the trend holds even when controlling for distance to not. We thus confirm that the embeddings of these models do reflect the notion of negation scope, and do encode the impact of negation on NPI licensing. The subtle difference between licensing scope and negation scope, however, does not seem to be captured.
## 1 Introduction

Negation has recently been the focus of various works aiming at determining the abilities of Pretrained Language Models (PLMs) to capture linguistic knowledge.

Some works investigate the 'semantic impact' of negation, namely its impact in terms of truth values, by interpreting how the presence of negation impacts the probability distribution at a masked position. The rationale is that negating a verb reverses the truth value of its clause, which should be reflected in the probability distribution at certain positions. Ettinger (2020) and Kassner and Schütze (2020) use factual statements such as (1), and report that models output similar distributions for the positive and negative variants of (1), and conclude that models largely ignore negation.
(1) A robin is (not) a [MASK]
Gubelmann and Handschuh (2022) chose to avoid factual statements and focus rather on multi-sentence self-contained examples, such that, given the context provided by the first sentence, one particular word is either likely (in positive items) or ruled out (in negative items) at a masked position in the second sentence. Because this particular word is substantially less often the top-1 prediction in the negative items than in the positive items, the authors draw the opposite conclusion that PLMs do show sensitivity to negation.
A different line of work focused on finding out to what extent negation is encoded in PLM embeddings. Celikkanat et al. (2020) train classifiers taking as input the contextual embedding of a verb or its subject or direct object, and predicting whether the verb is negated or not. The resulting high accuracy allows them to conclude that these tokens' embeddings do contain "traces" of not. More generally, several authors have investigated whether the contextual representation of a token encodes information about surrounding tokens. To ease further reading, we will talk of a classifier taking as input an input embedding, namely the contextual representation of an input token, and predicting some target information about another token in the sentence. For instance, Klafka and Ettinger (2020) study how input embeddings encode animacy, gender, and number of surrounding words in a specific SVO context. Li et al. (2022) target the number feature of French participles in the context of object-past participle agreement. They show that the performance of the classifier depends on the syntactic position of the input token in the sentence. We will build on their idea to compare performance at predicting target information depending on the syntactic zone the input token belongs to.
In this paper, we focus on how the information about negation encoded in contextual embeddings is used. Our aim is to study PLMs' ability to capture and encode structural information concerning negation (namely negation scope), and also their ability to actually mobilize this encoding in order to capture phenomena that are direct consequences of the presence of negation. To do so, we focus on the licensing of Negative Polarity Items (NPIs) by a not modifying a verb. Polarity Items (PIs), either positive (e.g. some) or negative (e.g. any), are words or expressions that are constrained in their distribution (Homer, 2020). A NPI requires that a word or a construction, called the licensor, be in the vicinity. And the licensor itself grammatically defines a zone of the sentence, called the licensing scope, in which the NPI can appear. The adverb not modifying a verb is one such licensor. While any is licensed by negation in (2-a) vs. (2-b), it is not licensed in (2-c), even though the verb is negated, arguably because it is not in the licensing scope${}^{1}$.

(2) a. Sam didn't find any books.
    b. *Sam found any books.
Jumelet and Hupkes (2018) have shown that LSTM embeddings do encode the notion of licensing scope (given an input embedding, a classifier can predict the structural zone the input token belongs to), a finding later confirmed for transformer-based PLMs (Warstadt et al., 2019). Focusing on cases where the licensor is a verb-modifying not, we rather investigate whether this demonstrated encoding of the zones goes as far as enabling a better prediction of a PI's polarity from inside the licensing scope compared to outside the scope. So instead of the question "Is this input embedding the embedding of a token that is within, before or after the licensing scope?", we rather ask the question "Given a masked PI position, and an input embedding of a neighboring token, what is the polarity of the PI?", and we study whether this question is better answered when the input embedding is inside or outside the licensing or negation scopes.
Note that our methodology differs from that of Jumelet and Hupkes (2018), who, given an input token, predict the zone this token belongs to. We instead predict the polarity of a neighboring masked polarity item and then compare accuracies depending on the input token's zone. Our motivation is that the polarity, being lexical information, requires less linguistic preconception, and hence our probing method is a more direct translation of the NPI licensing phenomenon: we study whether and where the information of "which PIs are licit where?" is encoded, in the context of sentence negation. This method also allows us to better control the confounding factor of distance between the input embedding and the licensor not.
In the following, we start in section 2 by defining the linguistic notions of negation scope and NPI licensing scope, and by showing how we actually identified them in English sentences. In section 3, we define our probing experiments and discuss their results, both for the encoding of not (section 3.1) and the encoding of NPI licensing (section 3.2). We conclude in section 4.
## 2 Defining and identifying scopes
### 2.1 Negation scope
From a linguistic point of view, the scope of a negation cue is the area of the sentence whose propositional content's truth value is reversed by the presence of the cue. While in many cases it is sufficient to use the syntactic structure to recover the scope, in some cases semantics or even pragmatics come into play. ${}^{2}$ Nevertheless, annotation guidelines usually offer syntactic approximations of negation scope.
To identify the negation scope for a not${}^{3}$ modifying a verb, we followed the syntactic constraints that emerge from the guidelines of Morante and Blanco (2012). Note though that these guidelines restrict the annotation to factual eventualities, leaving aside e.g. negated future verbs. We did not retain such a restriction, hence our identification of the negation scope is independent of verb tense or modality.

---

${}^{1}$ We leave aside the uses of any and the like having free-choice interpretations, as for instance in "Pick any card".

${}^{2}$ For instance in Kim did not go to the party because Bob was there., negation may scope only over the matrix clause or include the causal subordinate clause.

${}^{3}$ Throughout this article, not stands for either not or n't.

---

Table 1: The "neg-patterns": patterns adapted from Jumelet and Hupkes (2018), which we used to identify some cases of not licensing a NPI and to build the not+NPI test set. Col1: pattern id in Jumelet and Hupkes (2018). Col2: syntactic pattern (defined as a phrase-structure subtree, using the Penn Treebank's annotation scheme), with the licensing scope appearing in blue. Col3: examples with colors for the four zones: pink for tokens in the PRE zone (before both scopes), purple for PRE-IN (to the left of the licensing scope, but within the negation scope), blue for IN (within both scopes) and green for POST (after both scopes). The NPI licensor is not, and appears in yellow.
### 2.2 NPI licensing scope
Polarity items are a notoriously complex phenomenon. To identify the NPI licensing scope, we focus on specific syntactic patterns defined by Jumelet and Hupkes (2018), retaining only those involving not as licensor.${}^{4}$ Table 1 shows an example for each retained pattern (hereafter the neg-patterns), with the NPI licensing scope in blue.

Importantly, in the neg-patterns, the licensing scope is strictly included in the negation scope: within the clause of the negated verb, the tokens to its left belong to the negation scope but not to the licensing scope. E.g. in (3), anyone is not licit as a subject of going, whether the location argument is itself a plain PP, a NPI or a PPI (3-b).

(3) a. I'm not going anywhere.
    b. *Anyone is not going to the party/somewhere/anywhere.

We thus defined 4 zones for the not+NPI sentences, exemplified in Table 1: PRE (tokens before both scopes), PRE-IN (to the left of the licensing scope, but within the negation scope), IN (in both scopes), and POST (after both scopes).
We note though that the restriction exemplified in (3-b) only holds for non-embedded NPIs (de Swart, 1998), so examples like (4), with an embedded NPI in the subject of the negated verb (hence belonging to our PRE-IN zone), are theoretically possible.

(4) Examples with any relevance to that issue didn't come up in the discussion.

Yet in practice, we found that they are extremely rare: using the Corpus of Contemporary American English (COCA, Davies 2015)${}^{5}$, we extracted sentences matching one of the neg-patterns, and among these, sentences having any or anybody/one/thing/time/where in the IN zone, the PRE-IN zone or both. As shown in Table 2, any* in the PRE-IN zone is far rarer than in the classical licensing scope (IN zone)${}^{6}$. Hence we stuck to the usual notion of direct NPI licensing scope, as illustrated in Table 1.
<table><tr><td>Total</td><td>IN</td><td>PRE-IN</td><td>both</td></tr><tr><td>45,157</td><td>35,938</td><td>711</td><td>58</td></tr></table>

Table 2: Number of sentences from the COCA corpus matching the neg-patterns of Table 1. Col1: total number; Col2-4: number having an any* in the IN zone, the PRE-IN zone, and in both zones respectively.
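The rarity claim can be checked directly from Table 2's counts (figures from the paper):

```python
# Share of matched sentences with an any* in each zone (counts from Table 2).
total, in_zone, pre_in = 45_157, 35_938, 711

share_in = in_zone / total      # any* in the classical licensing scope
share_pre_in = pre_in / total   # any* in the PRE-IN zone
print(f"IN: {share_in:.1%}, PRE-IN: {share_pre_in:.1%}")  # ~79.6% vs ~1.6%
```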
---

${}^{4}$ We ignored pattern 4 (never instead of not as licensor) and pattern 6 (too few occurrences in our data). We merged patterns 1 and 2, and corrected an obvious minor error in pattern 5.

${}^{5}$ We used a version with texts from 1990 to 2012. COCA is distributed with some tokens in some sentences voluntarily masked, varying across distributions. We ignored such sentences.

${}^{6}$ More precisely, the figures in Table 2 correspond to an upper bound, because of (i) potential syntactic parsing errors impacting the identification of the zones, (ii) cases in which the NPI licensor is different from the not targeted by the patterns, and (iii) cases in which the any* is a free choice item and not a NPI (as in "Pick any one"). We inspected 250 examples of any* in the PRE-IN zone, and 250 examples in the IN zone. In the former, we found that almost all cases fall under (i), (ii) or (iii), with less than 3% corresponding to examples such as (4). In contrast, in the IN zone the proportion of NPIs actually licensed by the target not is 92%.

---
### 2.3 Building the not+NPI test set
Having defined these structural zones, we can use them to probe the traces they carry and compare the magnitude of these traces across the four zones. To do so, we built a test set of COCA sentences containing a not licensing a NPI (hereafter the not+NPI test set), matching one of the neg-patterns of Table 1, and having at least one any, anybody, anyone, anything, anytime or anywhere within the licensing scope.

The scope of negation has been implemented through an approximation using dependency parses (from the Stanza parser (Qi et al., 2020)), which proved more convenient than phrase-structure parses: we took the subtree of the negated verb, excluding not itself, and excluding dependents corresponding to sentential or verbal conjuncts and to sentential parentheticals.

More precisely, we identified the token having not as dependent (which, given our patterns, can be either the negated verb or a predicative adjective in case of a negated copula). Then, we retrieved the children of this head, except those attached to it with a "conj", "parataxis", "mark" or "discourse" dependency. In the complete subtrees of the selected dependents, all tokens were annotated as being inside the negation scope.
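The procedure above can be sketched as follows, over a CoNLL-U-style dependency parse (id/form/head/deprel, as produced e.g. by Stanza). The toy sentence, and the choice to count the negated head itself as in scope, are illustrative assumptions, not taken verbatim from the paper's code.

```python
EXCLUDED_DEPRELS = {"conj", "parataxis", "mark", "discourse"}

def negation_scope(tokens):
    """Return the set of token ids annotated as inside the negation scope."""
    by_head = {}
    for t in tokens:
        by_head.setdefault(t["head"], []).append(t)
    # 1. the head of `not` is the negated verb (or a predicative adjective)
    neg = next(t for t in tokens if t["form"].lower() in ("not", "n't"))
    head_id = neg["head"]
    scope = {head_id}  # assumption: the negated head itself is in scope
    # 2. keep the head's children, except `not` and the excluded relations
    stack = [c for c in by_head.get(head_id, [])
             if c["id"] != neg["id"] and c["deprel"] not in EXCLUDED_DEPRELS]
    # 3. every token in the kept dependents' complete subtrees is in scope
    while stack:
        t = stack.pop()
        scope.add(t["id"])
        stack.extend(by_head.get(t["id"], []))
    return scope

# "Sam did not find the books" (toy parse, 1-based ids, head 0 = root)
sent = [
    {"id": 1, "form": "Sam",   "head": 4, "deprel": "nsubj"},
    {"id": 2, "form": "did",   "head": 4, "deprel": "aux"},
    {"id": 3, "form": "not",   "head": 4, "deprel": "advmod"},
    {"id": 4, "form": "find",  "head": 0, "deprel": "root"},
    {"id": 5, "form": "the",   "head": 6, "deprel": "det"},
    {"id": 6, "form": "books", "head": 4, "deprel": "obj"},
]
print(sorted(negation_scope(sent)))  # every token id except `not` itself
```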
<table><tr><td>Genre</td><td>Mag</td><td>Acad</td><td>Fict</td><td>News</td><td>Total</td></tr><tr><td>#with not</td><td>537</td><td>383</td><td>830</td><td>536</td><td>2285</td></tr><tr><td>#and a NPI</td><td>31</td><td>21</td><td>58</td><td>34</td><td>143</td></tr></table>

Table 3: Thousands of sentences in COCA. Line 1: containing a not. Line 2: containing a not and at least one NPI (among any-∅/body/one/where/time/thing), anywhere in the sentence.

For the licensing scope, we parsed the corpus using the PTB-style parser "Supar Parser"${}^{7}$ of Zhang et al. (2020), and further retained only the sentences (i) matching the neg-patterns of Table 1 and (ii) having a NPI within the licensing scope (IN zone, shown in blue in Table 1).

We finally obtained a not+NPI test set, whose statistics are provided in Table 4.
## 3 Probing for the scopes
Our objective is to study how a transformer-based PLM (i) encodes the presence of a negation (the "traces" of negation) and (ii) models lexico-syntactic constraints imposed by negation, such as the modeling of a NPI licensing scope. Using the terminology introduced in section 1, we will probe whether input embeddings encode as target information (i) the presence of not elsewhere in the sentence, and (ii) the polarity of a masked PI. The former focuses on a plain encoding of negation, whereas the latter focuses on whether the encoding of negation can be mobilized to reflect a property (NPI licensing) that is directly imposed by negation. To investigate whether such an encoding matches linguistic notions of scopes, we will contrast results depending on the zone the input token belongs to (among the four zones defined for a not licensing a NPI, namely PRE, PRE-IN, IN, POST) and its distance to not.

<table><tr><td>Pattern</td><td>Mag</td><td>Acad</td><td>Fict</td><td>News</td><td>Total</td></tr><tr><td>1/2</td><td>6.56</td><td>1.69</td><td>16.49</td><td>6.16</td><td>30.90</td></tr><tr><td>3</td><td>0.57</td><td>0.14</td><td>1.33</td><td>0.49</td><td>2.53</td></tr><tr><td>5*</td><td>0.22</td><td>0.08</td><td>0.58</td><td>0.15</td><td>1.02</td></tr></table>

Table 4: Statistics of the not+NPI test set: thousands of COCA sentences matching the neg-patterns (cf. Table 1), and having at least one any* in the IN zone (licensing scope), broken down by corpus genre.

We study four PLMs: BERT-base-cased and BERT-large-cased (Devlin et al., 2019), and ROBERTA-base and ROBERTA-large (Liu et al., 2019). All our experiments were done with each of these models, and for a given model, each experiment was repeated three times. All the sentences we used for training, tuning and testing were extracted from the COCA corpus.
### 3.1 Probing for the negation scope
In preliminary experiments, we extend Celikkanat et al. (2020)'s study by investigating the traces of not in the contextual embeddings of all the tokens of a sentence containing not (instead of just the verb, subject and object).
#### 3.1.1 Training neg-classifiers
We train binary classifiers (hereafter the m-neg-classifiers, with m the name of the studied PLM) taking an input contextual embedding, and predicting the presence or absence of at least one not in the sentence. We train 3 classifiers for each of the 4 tested PLMs. To train and evaluate these classifiers, we randomly extracted 40,000 sentences containing exactly one not, and 40,000 sentences not containing any not. We BERT- and ROBERTA-tokenized these sentences and, for each model, we randomly selected one PLM token in each sentence to serve as input token. For these input tokens, we ignored any token not, plus all PLM tokens associated with a contracted negation: for instance don't is BERT-tokenized into don + ' + t, and ROBERTA-tokenized into don' + t. We ignored all these tokens, as they are too obvious a clue for the presence of a verbal negation. Furthermore, in order to homogenize the handling of negation whether contracted or not, we also set aside any modal or auxiliary that can form a negated contracted form. Hence, in She did leave, She did not leave or She didn't leave, the only candidate input tokens are those for She and leave${}^{8}$. We used 64k sentences for training (the neg-train-sets), and the remaining 16k for testing (the neg-test-set).

---

${}^{7}$ https://parser.yzhang.site/en/latest/index.html

---
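The probing protocol can be sketched as a linear (logistic-regression) classifier reading one contextual embedding per sentence and predicting whether the sentence contains not. The real experiments feed BERT/ROBERTA embeddings; here synthetic vectors with a small mean shift stand in for them (an assumption, purely to keep the sketch self-contained), mimicking the "traces" of negation.

```python
import numpy as np

def train_probe(X, y, lr=0.1, epochs=200):
    """Train a logistic-regression probe; X: (n, d) embeddings, y: (n,) 0/1."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y                            # dL/dlogit for cross-entropy
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0).astype(int) == y).mean())

# Synthetic stand-ins for contextual embeddings of one token per sentence.
rng = np.random.default_rng(1)
d = 32
X_with = rng.normal(0.3, 1.0, size=(400, d))      # sentences with `not`
X_without = rng.normal(-0.3, 1.0, size=(400, d))  # sentences without
X = np.vstack([X_with, X_without])
y = np.array([1] * 400 + [0] * 400)
idx = rng.permutation(800)                        # shuffle before splitting
X, y = X[idx], y[idx]
w, b = train_probe(X[:600], y[:600])
print(f"held-out probe accuracy: {accuracy(w, b, X[600:], y[600:]):.2f}")
```

A linear probe is the usual choice here: a high accuracy then indicates the information is linearly recoverable from the embedding, not merely computable by a powerful classifier.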
We provide the obtained accuracies on this neg-test-set in Table 5, which shows that performance is significantly above chance.
<table><tr><td>Model</td><td>BERT-base</td><td>BERT-large</td><td>ROBERTA-base</td><td>ROBERTA-large</td></tr><tr><td>Accur.</td><td>74.3</td><td>73.1</td><td>72.1</td><td>76.6</td></tr></table>

Table 5: Accuracies of the neg-classifiers on the neg-test-set for each PLM (averaged over 3 runs).
#### 3.1.2 Studying results on the not+NPI test set
To probe the negation scope, we then use the not+NPI test set (cf. section 2), and compare accuracies in PRE-IN versus PRE, and in IN versus POST.

Note though that distance to not is also likely to impact the classifiers' accuracy. Indeed, by definition the structural zones correlate with distance to not: for instance, a token at distance 3 to the right of not is more likely to be in the licensing scope than a token at distance 20. Hence, to study the impact of the input token's zone, we need to control for distance to the negation cue.
We thus break down our classifiers' accuracy on the not+NPI test set, not only according to the input token's zone, but also according to its relative position to the negation cue. Table 6 shows an example of a not+NPI sentence, and the zone and relative position to not of each token. The target not has position 0, and so do all the PLMs' subword tokens involved in the negation complex, and all preceding modals or auxiliaries, to homogenize across PLMs and across contracted/plain negation. By construction, the PRE and PRE-IN zones correspond to negative positions, whereas IN and POST correspond to positive ones.
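The position scheme just described can be sketched as follows; the helper `relative_positions` and the convention of measuring left-of-complex distances from the complex's leftmost token are illustrative assumptions, not the paper's exact code.

```python
def relative_positions(tokens, neg_complex):
    """tokens: list of token strings; neg_complex: set of indices forming the
    negation complex (`not`/`n't` plus preceding auxiliaries or modals),
    which all share position 0."""
    left, right = min(neg_complex), max(neg_complex)
    pos = []
    for i, _ in enumerate(tokens):
        if i in neg_complex:
            pos.append(0)
        elif i < left:
            pos.append(i - left)    # negative positions: PRE / PRE-IN zones
        else:
            pos.append(i - right)   # positive positions: IN / POST zones
    return pos

toks = ["She", "did", "not", "leave", "early"]
print(relative_positions(toks, {1, 2}))  # [-1, 0, 0, 1, 2]
```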
The break-down by position for ROBERTA-large is shown in Figure 1 (results for other models are in Appendix C). Two effects can be observed for all 4 PLMs. Firstly, there is a general decrease of the accuracy when moving away from not, in all four zones. This contrasts with the findings of Klafka and Ettinger (2020), who did not observe a distance effect in their experiments when probing whether the contextual representation of e.g. a direct object encodes e.g. the animacy of the subject. The decrease is more rapid before not than after it, which remains to be explained; it might come from the negation scope being shorter before not than after it.

Secondly, when looking at fixed relative distances, there is a slight but almost systematic effect that when the input token is in the negation scope (either PRE-IN or IN), the accuracy is higher than when it is outside (PRE or POST) (the differences are statistically significant at p < 0.001, cf. Appendix B). This tendency is more marked for the PRE vs. PRE-IN distinction than for the POST vs. IN distinction.
This observation can be summarized by com-
|
| 318 |
+
|
| 319 |
+
puting the average accuracy gap, namely the ac- 524 curacy differences averaged across positions (the average of the purple minus pink bars, and of blue minus green bars in Figure 3), which provide an average difference when a token is within or outside the negation scope. The average accuracy gaps for the four tested models are given in Table 7. It confirms that input embeddings of tokens inside the negation scope do allow for a slightly better prediction of the presence of not than those outside the scope. Note that the average difference is stable across models, whose size does not seem to matter. It shows that the strength of the encoding of not in contextual representations matches
|
| 320 |
+
|
| 321 |
+
the linguistic notion of negation scope. 539
|
| 322 |
+
|
| 323 |
+
---
|
| 324 |
+
|
| 325 |
+
${}^{8}$ COCA sentences are tokenized and tagged. We detok-enized them before BERT/ROBERTA tokenization, in order to get closer to a standard input.
|
| 326 |
+
|
| 327 |
+
---
|
| 328 |
+
|
| 329 |
+
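As an illustration of the average-accuracy-gap computation described above, the following toy sketch subtracts the out-of-scope accuracy from the in-scope accuracy at each relative position and averages over positions; the zone names follow the paper, but every accuracy value is invented for the example.

```python
# Toy sketch of the "average accuracy gap": per relative position, take the
# in-scope accuracy minus the out-of-scope accuracy, then average.
# All numbers below are made up purely for illustration.
acc = {
    "PRE":    {-4: 0.91, -3: 0.92, -2: 0.93, -1: 0.94},
    "PRE-IN": {-4: 0.93, -3: 0.95, -2: 0.96, -1: 0.98},
    "IN":     {3: 0.97, 4: 0.96, 5: 0.95, 6: 0.94},
    "POST":   {3: 0.95, 4: 0.94, 5: 0.93, 6: 0.92},
}

def avg_gap(in_scope, out_scope):
    """Mean of (in-scope minus out-of-scope) accuracy over shared positions."""
    positions = sorted(set(in_scope) & set(out_scope))
    return sum(in_scope[p] - out_scope[p] for p in positions) / len(positions)

# One gap on each side of "not"; the overall gap averages both comparisons.
gap = (avg_gap(acc["PRE-IN"], acc["PRE"]) + avg_gap(acc["IN"], acc["POST"])) / 2
```

A positive `gap` then means that, on average, in-scope tokens let the probe predict the presence of not better than out-of-scope tokens at the same distance.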

|
| 332 |
+
|
| 333 |
+
Table 6: Example sentence from the not+NPI test set: structural zones and relative positions to not. Any auxiliary or modal preceding the target not has position 0 too, to homogenize contracted and plain negation, and BERT versus ROBERTA's tokenization.
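The position-assignment convention described in Table 6 can be sketched as follows; the function name and the toy negation-complex indices are our own illustration, not the authors' code.

```python
def relative_positions(tokens, neg_complex):
    """Signed distance of each token to "not", with all tokens of the negation
    complex (the subwords of "not"/"n't" plus any preceding modal or auxiliary)
    collapsed to position 0, as in the convention described for Table 6."""
    first, last = min(neg_complex), max(neg_complex)
    positions = []
    for i in range(len(tokens)):
        if i in neg_complex:
            positions.append(0)          # whole negation complex gets position 0
        elif i < first:
            positions.append(i - first)  # negative positions: PRE / PRE-IN zones
        else:
            positions.append(i - last)   # positive positions: IN / POST zones
    return positions

tokens = "He did n't see anything there".split()
# "did" (auxiliary) and "n't" both belong to the negation complex.
print(relative_positions(tokens, {1, 2}))  # → [-1, 0, 0, 1, 2, 3]
```

Collapsing the whole complex to position 0 is what makes contracted ("didn't") and plain ("did not") negation, and BERT versus ROBERTA tokenizations, comparable.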

Figure 1: Accuracy of the ROBERTA-large-neg-classifier (averaged over 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains fewer than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < 0.001$ (cf. Appendix B). Figures for the other 3 models are provided in Appendix C.

<table><tr><td>${\mathrm{{BERT}}}_{b}$</td><td>${\mathrm{{BERT}}}_{l}$</td><td>${\mathrm{{ROB}}}_{b}$</td><td>${\mathrm{{ROB}}}_{l}$</td></tr><tr><td>3.0 (0.6)</td><td>3.5 (0.2)</td><td>2.6 (0.2)</td><td>2.6 (1.3)</td></tr></table>

Table 7: Accuracy gaps for the neg-classifiers on the not+NPI test set, for each tested PLM, averaged over 14 relative positions and 3 runs (standard deviation in brackets).

We also observe that the biggest difference occurs at position -1. This corresponds mostly to a contrast between a finite vs. non-finite negated verb (neg-patterns $1/2/3$ vs. neg-pattern 5 in Table 1), which seems well reflected in the PLMs' embeddings.
### 3.2 Probing for the licensing scope
We then focused on whether this encoding of not can actually be mobilized to capture the licensing of an NPI. We built classifiers (hereafter the $m$-pol-classifiers, with $m$ the name of the studied PLM), taking an input contextual embedding and predicting as target information the polarity of a masked position, originally filled with a positive or negative PI. Importantly, the input embedding in the training set is randomly chosen within the sentence, and can correspond to a position that is or isn't linguistically related to the polarity of the PI (cf. Figure 2). This avoids relying on linguistic preconceptions when building the classifiers.

We train on sentences originally containing either a PPI or an NPI, which we mask before running each

Figure 2: Illustration of the training of the pol-classifiers.

studied PLM. More precisely, in each COCA subcorpus (each genre), and for each of the 6 NPI/PPI pairs listed by Jumelet and Hupkes (2018)${}^{9}$, we randomly took at most 2,000 sentences containing the NPI, and the same number of sentences containing the corresponding PPI${}^{10}$. In each of these, we masked the PI, randomly selected one token per sentence to serve as the input token (excluding the masked position), and split these into 63,529 examples for training (pol-train-set) and 15,883 for testing (pol-test-set).
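A minimal sketch of how one such training example could be assembled (the function and variable names are our own, not the authors' code):

```python
import random

def make_example(tokens, pi_index, polarity, rng=random):
    """Mask the polarity item and draw one other token at random to serve as
    the probe's input position; the gold label is the PI's polarity."""
    masked = list(tokens)
    masked[pi_index] = "[MASK]"
    candidates = [i for i in range(len(tokens)) if i != pi_index]
    return masked, rng.choice(candidates), polarity

sent = "He did not see anything on the table".split()
masked, input_idx, label = make_example(sent, sent.index("anything"), "NPI")
# masked == ['He', 'did', 'not', 'see', '[MASK]', 'on', 'the', 'table']
```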
<table><tr><td>Model</td><td>${\mathrm{{BERT}}}_{b}$</td><td>${\mathrm{{BERT}}}_{l}$</td><td>${\mathrm{{ROB}}}_{b}$</td><td>${\mathrm{{ROB}}}_{l}$</td></tr><tr><td>Accur.</td><td>64.2</td><td>63.7</td><td>56.6</td><td>68.6</td></tr></table>

Table 8: Accuracies of the pol-classifiers on the pol-test-set for each PLM (averaged over 3 runs).

Accuracies on the pol-test-set for each PLM are shown in Table 8. While still above chance, they do not exceed 69%, which is markedly lower than the accuracies of the neg-classifiers (Table 5). This is not surprising, since the task is more difficult. First, as stressed above, some of the training input tokens are, from the linguistic point of view, independent of the PI's polarity. Second, the cues for predicting the polarity are diverse. And third, in numerous contexts, both polarities are indeed possible, even though not equally likely. We did not control the training for this, on purpose, so as not to introduce any additional bias in the data. We can thus interpret the pol-classifier's scores as how likely a given polarity is.

Next, we applied these classifiers to the not+NPI test set. The objective is to compare the classifiers' accuracy depending on the structural zone the input token belongs to. If PLMs have a notion of licensing scope, then the polarity prediction should be more accurate when using an input token from the IN zone.
#### 3.2.1 Results
Once more, we control for the distance of the input embedding to not. The break-down by position and structural zone for ROBERTA-large is provided in Figure 3 (results for the other models are in Appendix C).

Again, we observe a general decrease in accuracy when moving away from not, and this decrease is faster than in the previous experiment. We also note that the decrease is more rapid in the PRE-IN zone than in the IN zone (for instance, at distance -4 in PRE-IN the accuracy is below 70%, whereas it is still above 70% at distance 8 in the IN zone). This tends to indicate that the traces of not are more robust in the licensing scope.

Secondly, as in the previous experiment, for each relative position the accuracy is higher when the input token is in the negation scope (either PRE-IN or IN) than when it is outside (PRE or POST). Even though we cannot exclude that the relatively high overall accuracies may be explained by the classifier catching some regularities of the sentences containing an NPI rather than a PPI (independently of the presence of not), it remains that for the not+NPI sentences, accuracy is higher when the input token is in the negation scope than outside it. Moreover, this trend is much more marked than in the previous experiment.

Thirdly, the amplitude of this effect depends on the model. We provide the accuracy gaps for each PLM in Table 9. The trend is marked for ROBERTA-large and BERT-base (gaps of 8.7 and 7.4 accuracy points, actually much higher than the accuracy gaps for predicting the presence of not), but lower for ROBERTA-base and BERT-large.
<table><tr><td>${\mathrm{{BERT}}}_{b}$</td><td>${\mathrm{{BERT}}}_{l}$</td><td>${\mathrm{{ROB}}}_{b}$</td><td>${\mathrm{{ROB}}}_{l}$</td></tr><tr><td>7.4 (0.5)</td><td>3.1 (0.4)</td><td>1.4 (0.2)</td><td>8.7 (0.6)</td></tr></table>

Table 9: Accuracy gaps for the pol-classifiers on the not+NPI test set, averaged over 14 relative positions and 3 runs (standard deviation in brackets).

This leads us to conclude that (i) PLMs do encode structural constraints imposed by not (NPI licensing), but to varying degrees across the PLMs

---
${}^{9}$ (any/some)($\varnothing$/where/one/body/thing/time)

${}^{10}$ For any/some($\varnothing$/one/thing), we took $2 \times 2000$ occurrences. For any/some(body/time/where), fewer occurrences were available in some of the subcorpora. We took as many as possible, while keeping a strict balance between NPI and PPI sentences (between $2 \times 169$ and $2 \times 958$ depending on the corpus genre and on the NPI/PPI pair).

---

Figure 3: Accuracy of the ROBERTA-large-pol-classifier (averaged over 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains fewer than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < 0.001$ (cf. Appendix B).

we tested, and (ii) that this encoding is stronger in the negation scope than outside it, independently of the distance to not. This only partially matches the linguistic expectation that the strongest zone should be the licensing scope rather than the entire negation scope.
## 4 Conclusion
In this paper, we studied the way negation and its scope are encoded in contextual representations of PLMs, and to what extent this encoding is used to model NPI licensing.

Classifiers were trained to predict the presence of negation in a sentence from the contextual representation of a random token. We also trained classifiers to predict the polarity of a masked polar item from the contextual representation of a random token. A test set of sentences was designed with not licensing an NPI, inside which we identified the negation scope (roughly the clause) and the licensing scope (roughly the VP).

For these sentences, we found that the contextual embeddings of tokens within the scope of a negation allow a better prediction of the presence of not. These embeddings also allow a better prediction of the (negative) polarity of a masked PI. These results hold even when controlling for the distance to not.

We conclude that the PLMs which were tested indeed encode a notion of negation scope in their contextual representations. However, we could not find a consistent encoding of the narrower (and probably more difficult to define) notion of negative polarity licensing scope. Moreover, variation across PLMs remains to be explained through further studies.
## References
Hande Celikkanat, Sami Virpioja, Jörg Tiedemann, and Marianna Apidianaki. 2020. Controlling the Imprint of Passivization and Negation in Contextualized Representations. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 136-148, Online. Association for Computational Linguistics.

Mark Davies. 2015. Corpus of Contemporary American English (COCA).

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Australia. Association for Computational Linguistics.

Allyson Ettinger. 2020. What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models. Transactions of the Association for Computational Linguistics, 8:34-48.

Reto Gubelmann and Siegfried Handschuh. 2022. Context matters: A pragmatic study of PLMs' negation understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4602-4621, Dublin, Ireland. Association for Computational Linguistics.

Vincent Homer. 2020. Negative Polarity, pages 1-39. John Wiley & Sons, Ltd.

Jaap Jumelet and Dieuwke Hupkes. 2018. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222-231, Brussels, Belgium. Association for Computational Linguistics.

Nora Kassner and Hinrich Schütze. 2020. Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811-7818, Online. Association for Computational Linguistics.

Josef Klafka and Allyson Ettinger. 2020. Spying on Your Neighbors: Fine-grained Probing of Contextual Embeddings for Information about Surrounding Words. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4801-4811, Online. Association for Computational Linguistics.

Bingzhi Li, Guillaume Wisniewski, and Benoit Crabbé. 2022. How distributed are distributed representations? An observation on the locality of syntactic information in verb agreement tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 501-507, Dublin, Ireland. Association for Computational Linguistics.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv.

Roser Morante and Eduardo Blanco. 2012. *SEM 2012 Shared Task: Resolving the Scope and Focus of Negation. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 265-274, Montréal, Canada. Association for Computational Linguistics.

Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.

Henriëtte de Swart. 1998. Licensing of negative polarity items under inverse scope. Lingua, 105(3-4):175-200.

Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2877-2887, Hong Kong, China. Association for Computational Linguistics.

Yu Zhang, Houquan Zhou, and Zhenghua Li. 2020. Fast and Accurate Neural CRF Constituency Parsing. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pages 4046-4053, Yokohama, Japan. International Joint Conferences on Artificial Intelligence Organization.
## A Hyperparameter tuning for the neg-classifiers and the pol-classifiers
The PLMs' contextual representations were obtained using a GeForce RTX 2080 Ti GPU. The neg-classifiers and the pol-classifiers were trained on a CPU, each training taking about 15 minutes; testing them on the not+NPI test set takes about 5 minutes.

To tune these classifiers, we performed a grid search over the number of hidden layers in $[1, 2]$, the number of units in each layer in $[20, 50, 100, 450, 1000]$, and the learning rate in $[1, 0.1, 0.01, 0.001]$.

We selected a learning rate of 0.001 and 2 hidden layers of size 450 each, based on the accuracies on the neg-test-set and the pol-test-set. Except when the learning rate equaled 1, all hyperparameter combinations resulted in similar performance (within less than 1 point of accuracy in the results of Figure 3).
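The grid search above can be sketched as follows; `grid_search` and the toy scoring function are hypothetical stand-ins (a real run would train a probe for each configuration and return its dev-set accuracy), not the authors' code.

```python
from itertools import product

# Grid from the text: hidden layers, units per layer, learning rate.
GRID = {
    "n_layers": [1, 2],
    "hidden_size": [20, 50, 100, 450, 1000],
    "lr": [1, 0.1, 0.01, 0.001],
}

def grid_search(train_and_eval):
    """Return the configuration with the best score over the full grid."""
    keys = list(GRID)
    best_score, best_cfg = float("-inf"), None
    for combo in product(*GRID.values()):
        cfg = dict(zip(keys, combo))
        score = train_and_eval(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg

# Toy scorer standing in for real probe training, just to exercise the search:
# it penalizes lr == 1 (which underperformed in the paper) and favors 450 units.
toy = lambda cfg: -abs(cfg["hidden_size"] - 450) - (1000 if cfg["lr"] == 1 else 0)
best = grid_search(toy)
```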
The code and methodology were developed first using the BERT-base model, and then applied to the other models. Including code and methodology development, we estimate that the experiments reported in this paper correspond to a total of 160 hours of GPU computing.
## B Statistical significance test
In this section we detail the test performed to assess the statistical significance of the accuracy differences illustrated in Figures 3 and 5.

For each of the four tested PLMs, and for each of 3 runs of classifier training:

- for each position from -8 to -1 relative to the not:
  - we compare the accuracy of the pol-classifier in the PRE-IN zone versus in the PRE zone (i.e. the difference between the purple bar and the pink one);
  - namely, we test the statistical significance of the following positive difference: accuracy for tokens in the PRE-IN zone minus accuracy for tokens in the PRE zone;
- for each position from 3 to 8:
  - we test the statistical significance of the following positive difference: accuracy for tokens in the IN zone minus accuracy for tokens in the POST zone (i.e. the difference between the blue bar and the green one).

Each test is an approximate Fisher-Pitman permutation test (with 5000 random permutations, performed using the script of Dror et al. (2018), https://github.com/rtmdrr/testSignificanceNLP.git), and all the differences listed above are statistically significant at $p < 0.001$.
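The tests themselves were run with the Dror et al. (2018) script; purely for illustration, a minimal self-contained version of a one-sided Fisher-Pitman permutation test could look like this (all names and sample values are ours):

```python
import random

def permutation_test(a, b, n_perm=5000, seed=0):
    """Approximate one-sided Fisher-Pitman test: estimate the probability of a
    mean difference at least as large as mean(a) - mean(b) when the group
    labels are randomly reshuffled over the pooled observations."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_a = pooled[:len(a)]
        diff = sum(perm_a) / len(a) - (sum(pooled) - sum(perm_a)) / len(b)
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Two clearly separated accuracy samples should yield a very small p-value.
p = permutation_test([0.95, 0.96, 0.94, 0.97] * 10, [0.70, 0.71, 0.69, 0.72] * 10)
```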
## C Accuracies of the classifiers on the not+NPI test set
The break-downs by position for the three models not presented in the main text (BERT-base, BERT-large and ROBERTA-base) are provided in Figures 4 (neg-classifiers) and 5 (pol-classifiers).

Figure 4: Accuracy (averaged over 3 runs) of the other neg-classifiers (BERT-base, BERT-large and ROBERTA-base) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains fewer than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < 0.001$ (cf. Appendix B).

Figure 5: Accuracy (averaged over 3 runs) of the other pol-classifiers (BERT-base, BERT-large and ROBERTA-base) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains fewer than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < 0.001$ (cf. Appendix B).

NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_tex/Initial_manuscript.tex
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
000 054
|
| 2 |
+
|
| 3 |
+
§ PROBING STRUCTURAL CONSTRAINTS OF NEGATION IN PRETRAINED LANGUAGE MODELS
|

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

§ ABSTRACT

Contradictory results about the encoding of the semantic impact of negation in pretrained language models (PLMs) have been reported recently (e.g. Kassner and Schütze (2020); Gubelmann and Handschuh (2022)). In this paper we focus rather on the way PLMs encode negation and its formal impact, through the phenomenon of Negative Polarity Item (NPI) licensing in English. More precisely, we use probes to identify which contextual representations best encode 1) the presence of negation in a sentence, and 2) the polarity of a neighboring masked polarity item. We find that contextual representations of tokens inside the negation scope do allow for (i) a better prediction of the presence of not compared to those outside the scope, and (ii) a better prediction of the right polarity of a masked polarity item licensed by not, although the magnitude of the difference varies from PLM to PLM. Importantly, in both cases the trend holds even when controlling for distance to not. We thus confirm that the embeddings of these models do reflect the notion of negation scope, and do encode the impact of negation on NPI licensing. The subtle difference between licensing scope and negation scope, however, does not seem to be captured.

§ 1 INTRODUCTION

Negation has recently been the focus of various works aiming at determining the abilities of Pretrained Language Models (PLMs) to capture linguistic knowledge.

Some works investigate the 'semantic impact' of negation, namely its impact in terms of truth values, by interpreting how the presence of negation impacts the probability distribution at a masked position. The rationale is that negating a verb reverses the truth value of its clause, which should be reflected in the probability distribution at certain positions. Ettinger (2020) and Kassner and Schütze (2020) use factual statements such as (1), and report that models output similar distributions for the positive and negative variants of (1); they conclude that models largely ignore negation.

(1) A robin is (not) a [MASK].

Gubelmann and Handschuh (2022) chose to avoid factual statements and focus rather on multi-sentence self-contained examples, such that, given the context provided by the first sentence, one particular word is either likely (in positive items) or ruled out (in negative items) at a masked position in the second sentence. Because this particular word is substantially less often the top-1 prediction in the negative items than in the positive items, the authors draw the opposite conclusion, that PLMs do show sensitivity to negation.

A different line of work focused on finding out to what extent negation is encoded in PLM embeddings. Celikkanat et al. (2020) train classifiers taking as input the contextual embedding of a verb or of its subject or direct object, and predicting whether the verb is negated or not. The resulting high accuracy allows them to conclude that these tokens' embeddings do contain "traces" of not. More generally, several authors have investigated whether the contextual representation of a token encodes information about surrounding tokens. To ease further reading, we will talk of a classifier taking as input an input embedding, namely the contextual representation of an input token, and predicting some target information about another token in the sentence. For instance, Klafka and Ettinger (2020) study how input embeddings encode animacy, gender, and number of surrounding words in a specific SVO context. Li et al. (2022) target the number feature of French participles in the context of object-past participle agreement. They show that the performance of the classifier depends on the syntactic position of the input token in the sentence. We build on their idea to compare performance at predicting target information depending on the syntactic zone the input token belongs to.

In this paper, we focus on how the information about negation encoded in contextual embeddings is used. Our aim is to study PLMs' ability to capture and encode structural information concerning negation (namely negation scope), and also their ability to actually mobilize this encoding in order to capture phenomena that are direct consequences of the presence of negation. To do so, we focus on the licensing of Negative Polarity Items (NPIs) by a not modifying a verb. Polarity Items (PIs), either positive (e.g. some) or negative (e.g. any), are words or expressions that are constrained in their distribution (Homer, 2020). A NPI requires that a word or a construction, called the licensor, be in the vicinity. And the licensor itself grammatically defines a zone of the sentence, called the licensing scope, in which the NPI can appear. The adverb not modifying a verb is one such licensor. While any is licensed by negation in (2-a) vs. (2-b), it is not licensed in (2-c), even though the verb is negated, arguably because it is not in the licensing scope${}^{1}$.

(2) a. Sam didn't find any books.
    b. *Sam found any books.

Jumelet and Hupkes (2018) have shown that LSTM embeddings do encode the notion of licensing scope (given an input embedding, a classifier can predict the structural zone the input token belongs to), a finding later confirmed for transformer-based PLMs (Warstadt et al., 2019). Focusing on the case where the licensor is a verb-modifying not, we rather investigate whether this demonstrated encoding of the zones goes as far as enabling a better prediction of a PI's polarity from inside the licensing scope compared to outside the scope. So instead of the question "Is this input embedding the embedding of a token that is within, before or after the licensing scope?", we rather ask the question "Given a masked PI position, and an input embedding of a neighboring token, what is the polarity of the PI?", and we study whether this question is better answered when the input embedding is inside or outside the licensing or negation scopes.

Note that our methodology differs from that of Jumelet and Hupkes (2018), who, given an input token, predict the zone this token belongs to. We instead predict the polarity of a neighboring masked polarity item and then compare accuracies depending on the input token's zone. Our motivation is that the polarity, being lexical information, requires less linguistic preconception, and hence our probing method is a more direct translation of the NPI licensing phenomenon: we study whether and where the information of "which PIs are licit where?" is encoded, in the context of sentence negation. This method also allows us to better control the confounding factor of distance between the input embedding and the licensor not.

In the following, we start in section 2 by defining the linguistic notions of negation scope and NPI licensing scope, and by showing how we actually identified them in English sentences. In section 3, we define our probing experiments and discuss their results, both for the encoding of not (section 3.1) and for the encoding of NPI licensing (section 3.2). We conclude in section 4.

§ 2 DEFINING AND IDENTIFYING SCOPES
§ 2.1 NEGATION SCOPE

From a linguistic point of view, the scope of a negation cue is the area of the sentence whose propositional content's truth value is reversed by the presence of the cue. While in many cases it is sufficient to use the syntactic structure to recover the scope, in some cases semantics or even pragmatics come into play.${}^{2}$ Nevertheless, annotation guidelines usually offer syntactic approximations of negation scope.

To identify the negation scope for a not${}^{3}$ modifying a verb, we followed the syntactic constraints that emerge from the guidelines of Morante and Blanco (2012). Note though that these guidelines restrict the annotation to factual eventualities, leaving aside e.g. negated future verbs. We did not retain such a restriction, hence our identification of the negation scope is independent from verb tense or modality.

${}^{1}$ We leave aside the uses of any and the like having free choice interpretations, as for instance in "Pick any card".

${}^{2}$ For instance in Kim did not go to the party because Bob was there., negation may scope only over the matrix clause or include the causal subordinate clause.

${}^{3}$ In all this article, not stands for either not or n't.

Table 1: The "neg-patterns": patterns adapted from Jumelet and Hupkes (2018), which we used to identify some cases of not licensing a NPI and to build the not+NPI test set. Col1: pattern id in Jumelet and Hupkes (2018). Col2: syntactic pattern (defined as a phrase-structure subtree, using the Penn Treebank's annotation scheme), with the licensing scope appearing in blue. Col3: examples with colors for the four zones: pink for tokens in the PRE zone (before both scopes), purple for PRE-IN (to the left of the licensing scope, but within the negation scope), blue for IN (within both scopes) and green for POST (after both scopes). The NPI licensor is not, and appears in yellow.

§ 2.2 NPI LICENSING SCOPE

Polarity items are a notoriously complex phenomenon. To identify the NPI licensing scope, we focus on specific syntactic patterns defined by Jumelet and Hupkes (2018), retaining only those involving not as licensor.${}^{4}$ Table 1 shows an example for each retained pattern (hereafter the neg-patterns), with the NPI licensing scope in blue.

Importantly, in the neg-patterns, the licensing scope is strictly included in the negation scope: within the clause of the negated verb, the tokens to its left belong to the negation scope but not to the licensing scope. E.g. in (3), anyone is not licit as a subject of going, whether the location argument is itself a plain PP, a NPI or a PPI (3-b).

(3) a. I'm not going anywhere.
    b. *Anyone is not going to the party/somewhere/anywhere.

We thus defined 4 zones for the not+NPI sentences, exemplified in Table 1: PRE (tokens before both scopes), PRE-IN (to the left of the licensing scope, but within the negation scope), IN (in both scopes), and POST (after both scopes).

We note though that the restriction exemplified in (3-b) only holds for non-embedded NPIs (de Swart, 1998), so examples like (4), with an embedded NPI in the subject of the negated verb (hence belonging to our PRE-IN zone), are theoretically possible.

(4) Examples with any relevance to that issue didn't come up in the discussion.

Yet in practice, we found that they are extremely rare: using the Corpus of Contemporary American English (COCA, Davies 2015)${}^{5}$, we extracted sentences matching one of the neg-patterns, and among these, sentences having any or anybody/one/thing/time/where in the IN zone, the PRE-IN zone, or both. As shown in Table 2, any* in the PRE-IN zone is way rarer than in the classical licensing scope (IN zone)${}^{6}$. Hence we stuck to the usual notion of direct NPI licensing scope, as illustrated in Table 1.

| Total | IN | PRE-IN | both |
|-------:|-------:|-------:|-----:|
| 45,157 | 35,938 | 711 | 58 |

Table 2: Number of sentences from the COCA corpus matching the neg-patterns of Table 1. Col1: total number; Col2-4: number having an any* in the IN zone, the PRE-IN zone, and in both zones respectively.

${}^{4}$ We ignored pattern 4 (never instead of not as licensor) and 6 (too few occurrences in our data). We merged patterns 1 and 2, and corrected an obvious minor error in pattern 5.

${}^{5}$ We used a version with texts from 1990 to 2012. COCA is distributed with some tokens in some sentences voluntarily masked, varying across distributions. We ignored such sentences.

${}^{6}$ More precisely, the figures in Table 2 correspond to an upper bound, because of (i) potential syntactic parsing errors impacting the identification of the zones, (ii) cases in which the NPI licensor is different from the not targeted by the patterns, and (iii) cases in which the any* is a free choice item and not a NPI (as in "Pick any one"). We inspected 250 examples of any* in the PRE-IN zone, and 250 examples in the IN zone. In the former, we found that almost all cases fall under (i), (ii) or (iii), less than 3% corresponding to examples such as (4). In contrast, in the IN zone the proportion of NPIs actually licensed by the target not is 92%.

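As an illustration, the four zones can be assigned from span boundaries roughly as follows. This is our own toy sketch, not the paper's code: it assumes both scopes are given as inclusive token-index spans, that the licensing scope is contained in the negation scope, and it keeps the licensor itself out of the four zones.

```python
# Toy sketch (our own simplification): assign each token one of the four
# zones, given the negation scope and the licensing scope as inclusive
# (start, end) index spans, with the licensing scope inside the negation scope.
def zone(i, not_idx, neg_scope, lic_scope):
    ns, _ = neg_scope
    ls, le = lic_scope
    if i == not_idx:
        return "NOT"        # the licensor itself belongs to none of the four zones
    if i < ns:
        return "PRE"        # before both scopes
    if i < ls:
        return "PRE-IN"     # within the negation scope, left of the licensing scope
    if i <= le:
        return "IN"         # within both scopes
    return "POST"           # after both scopes

tokens = ["Maybe", "Sam", "did", "n't", "find", "any", "books"]
# negation scope: "Sam ... books" (excluding n't); licensing scope: "find any books"
print([zone(i, 3, (1, 6), (4, 6)) for i in range(len(tokens))])
# → ['PRE', 'PRE-IN', 'PRE-IN', 'NOT', 'IN', 'IN', 'IN']
```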
§ 2.3 BUILDING THE NOT+NPI TEST SET

Having defined these structural zones, we can use them to probe the traces they carry and compare the magnitude of these traces across the four zones. To do so, we built a test set of COCA sentences containing a not licensing a NPI (hereafter the not+NPI test set): sentences matching one of the neg-patterns of Table 1, and having at least one any, anybody, anyone, anything, anytime or anywhere within the licensing scope.

The scope of negation has been implemented through an approximation using dependency parses (from the Stanza parser (Qi et al., 2020)), which proved more convenient than phrase-structure parses: we took the subtree of the negated verb, excluding not itself, and excluding dependents corresponding to sentential or verbal conjuncts and to sentential parentheticals.

More precisely, we identified the token having not as dependent (which, given our patterns, can be either the negated verb or a predicative adjective in case of a negated copula). Then, we retrieved the children of this head, except those attached to it with a "conj", "parataxis", "mark" or "discourse" dependency. In the complete subtrees of the selected dependents, all tokens were annotated as being inside the negation scope.
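The extraction just described can be sketched as follows. This is a minimal, self-contained approximation of ours: the parse is a toy list of 1-based head indices and dependency labels rather than an actual Stanza parse, and the function name is our own.

```python
# Toy sketch of the negation-scope extraction described above, operating on a
# toy dependency parse: tokens are 1-indexed, each with a head index
# (0 = root) and a dependency relation label.

EXCLUDED_DEPRELS = {"conj", "parataxis", "mark", "discourse"}

def negation_scope(tokens, heads, deprels, not_idx):
    """Return the set of 1-based token indices inside the negation scope."""
    head = heads[not_idx - 1]                 # token governing `not`
    # Children of the head, excluding `not` itself and excluded relations.
    children = [i + 1 for i, h in enumerate(heads)
                if h == head and i + 1 != not_idx
                and deprels[i] not in EXCLUDED_DEPRELS]
    scope = set()
    stack = list(children)
    while stack:                              # collect the complete subtrees
        node = stack.pop()
        scope.add(node)
        stack.extend(i + 1 for i, h in enumerate(heads) if h == node)
    scope.add(head)                           # the negated head is in scope too
    return scope

# "Sam did not find any books"
tokens  = ["Sam", "did", "not", "find", "any", "books"]
heads   = [4, 4, 4, 0, 6, 4]                  # `find` is the root
deprels = ["nsubj", "aux", "advmod", "root", "det", "obj"]
print(sorted(negation_scope(tokens, heads, deprels, not_idx=3)))
# → [1, 2, 4, 5, 6]  (everything but `not`)
```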

| Genre | Mag | Acad | Fict | News | Total |
|------------|----:|-----:|-----:|-----:|------:|
| #with not | 537 | 383 | 830 | 536 | 2285 |
| #and a NPI | 31 | 21 | 58 | 34 | 143 |

Table 3: Thousands of sentences in COCA. Line 1: containing a not. Line 2: containing a not and at least one NPI (among any-$\varnothing$/body/one/where/time/thing), anywhere in the sentence.

For the licensing scope, we parsed the corpus using the PTB-style parser "Supar Parser"${}^{7}$ of Zhang et al. (2020), and further retained only the sentences (i) matching the neg-patterns of Table 1 and (ii) having a NPI within the licensing scope (IN zone, shown in blue in Table 1).

We finally obtained the not+NPI test set, whose statistics are provided in Table 4.

§ 3 PROBING FOR THE SCOPES

Our objective is to study how a transformer-based PLM (i) encodes the presence of a negation (the "traces" of negation) and (ii) models lexico-syntactic constraints imposed by negation, such as the modeling of a NPI licensing scope. Using the terminology introduced in section 1, we will probe whether input embeddings encode as target information (i) the presence of not elsewhere in the sentence, and (ii) the polarity of a masked PI. The former focuses on a plain encoding of negation, whereas the latter focuses on whether the encoding of negation can be mobilized to reflect a property (NPI licensing) that is directly imposed by negation. To investigate whether such an encoding matches linguistic notions of scopes, we will contrast results depending on the zone the input token belongs to (among the four zones defined for a not licensing a NPI, namely PRE, PRE-IN, IN, POST) and on its distance to not.

| Pattern | Mag | Acad | Fict | News | Total |
|--------:|-----:|-----:|------:|-----:|------:|
| 1/2 | 6.56 | 1.69 | 16.49 | 6.16 | 30.90 |
| 3 | 0.57 | 0.14 | 1.33 | 0.49 | 2.53 |
| 5* | 0.22 | 0.08 | 0.58 | 0.15 | 1.02 |

Table 4: Statistics of the not+NPI test set: thousands of COCA sentences matching the neg-patterns (cf. Table 1), and having at least one any* in the IN zone (licensing scope), broken down by corpus genre.


We study four PLMs: BERT-base-cased, BERT-large-cased (Devlin et al., 2019), and ROBERTA-base and ROBERTA-large (Liu et al., 2019). All our experiments were done with each of these models, and for a given model, each experiment was repeated three times. All the sentences we used for training, tuning and testing were extracted from the COCA corpus.

§ 3.1 PROBING FOR THE NEGATION SCOPE

In preliminary experiments, we extend Celikkanat et al. (2020)'s study by investigating the traces of not in the contextual embeddings of all the tokens of a sentence containing not (instead of just the verb, subject and object).

§ 3.1.1 TRAINING NEG-CLASSIFIERS

We train binary classifiers (hereafter the m-neg-classifiers, with $m$ the name of the studied PLM) taking an input contextual embedding, and predicting the presence or absence of at least one not in the sentence. We train 3 classifiers for each of the 4 tested PLMs. To train and evaluate these classifiers, we randomly extract 40,000 sentences containing exactly one not, and 40,000 sentences not containing any not. We BERT- and ROBERTA-tokenized these sentences and, for each model, we randomly selected one PLM token in each sentence to serve as input token. For these input tokens, we ignored any token not, plus all PLM tokens associated to a contracted negation: for instance don't is BERT-tokenized into don + ' + t, and ROBERTA-tokenized into don' + t. We ignore all these tokens, as they are too obvious a clue for the presence of a verbal negation. Furthermore, in order to homogenize the handling of negation whether contracted or not, we also set aside any modal or auxiliary that can form a negated contracted form. Hence, in She did leave, She did not leave or She didn't leave, the only candidate input tokens are those for She and leave.${}^{8}$ We use 64k sentences for training (the neg-train-sets), and the remaining 16k for testing (the neg-test-set).

${}^{7}$ https://parser.yzhang.site/en/latest/index.html

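The probing setup can be sketched as follows. The paper does not specify the probe architecture, so this sketch (ours) trains a simple logistic-regression probe by gradient descent on synthetic stand-in vectors; in the real setup, the vectors would be frozen BERT/RoBERTA contextual embeddings of the selected input tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 32
# Synthetic stand-ins for contextual embeddings: sentences containing `not`
# get a small shift along one axis, mimicking a "trace" of negation.
X_no_not = rng.normal(size=(500, dim))
X_not = rng.normal(size=(500, dim))
X_not[:, 0] += 1.5
X = np.vstack([X_no_not, X_not])
y = np.array([0.0] * 500 + [1.0] * 500)
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]
X_tr, y_tr, X_te, y_te = X[:800], y[:800], X[800:], y[800:]

# A logistic-regression probe trained by plain gradient descent.
w, b = np.zeros(dim), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))
    w -= 0.5 * (X_tr.T @ (p - y_tr)) / len(y_tr)
    b -= 0.5 * np.mean(p - y_tr)

acc = np.mean(((X_te @ w + b) > 0) == (y_te == 1.0))
print(f"held-out accuracy: {acc:.2f}")
```

The probe is above chance exactly to the extent that the input vectors carry a recoverable trace of the class, which is the logic of the experiments below.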
We provide the obtained accuracies on this neg-test-set in Table 5, which shows that performance is significantly above chance.

| Model | $\mathrm{BERT}_b$ | $\mathrm{BERT}_l$ | $\mathrm{ROB}_b$ | $\mathrm{ROB}_l$ |
|-------|------:|------:|------:|------:|
| Accur. | 74.3 | 73.1 | 72.1 | 76.6 |

Table 5: Accuracies of the neg-classifiers on the neg-test-set for each PLM (averaged over 3 runs).

§ 3.1.2 STUDYING RESULTS ON THE NOT+NPI TEST SET

To probe the negation scope, we then use the not+NPI test set (cf. section 2), and compare accuracies in PRE-IN versus PRE, and in IN versus POST.

Note though that distance to not is also likely to impact the classifiers' accuracy. Indeed, by definition the structural zones obviously correlate with distance to not. For instance, a token at distance 3 to the right of not is more likely to be in the licensing scope than a token at distance 20. Hence, to study the impact of the input token's zone, we need to control for distance to the negation cue.

We thus break down our classifiers' accuracy on the not+NPI test set, not only according to the input token's zone, but also according to its relative position to the negation cue. Table 6 shows an example of a not+NPI sentence, and the zone and relative position to not of each token. The target not has position 0, and so do all the PLMs' subword tokens involved in the negation complex, and any preceding modal or auxiliary, to homogenize across PLMs and across contracted/plain negation. By construction, the PRE and PRE-IN zones correspond to negative positions, whereas IN and POST correspond to positive ones.

The break-down by position for ROBERTA-large is shown in Figure 1 (results for other models are in Appendix C). Two effects can be observed, for all 4 PLMs. Firstly, there is a general decrease of the accuracy as moving away from not, for the four zones. This contrasts with the findings of Klafka and Ettinger (2020), who did not observe a distance effect in their experiments, when probing whether the contextual representation of e.g. a direct object encodes e.g. the animacy of the subject. The decrease is more rapid before not than after it, which remains to be explained. It might come from the negation scope being shorter before not than after it.

Secondly, when looking at fixed relative distances, there is a slight but almost systematic effect that when the input token is in the negation scope (either PRE-IN or IN), the accuracy is higher than when it is outside (PRE and POST) (the differences are statistically significant at $p < 0.001$, cf. Appendix B). This tendency is more marked for the PRE vs. PRE-IN distinction than for the POST vs. IN distinction.

This observation can be summarized by computing the average accuracy gap, namely the accuracy differences averaged across positions (the average of the purple minus pink bars, and of the blue minus green bars in Figure 1), which provides an average difference when a token is within or outside the negation scope. The average accuracy gaps for the four tested models are given in Table 7. It confirms that input embeddings of tokens inside the negation scope do allow for a slightly better prediction of the presence of not than those outside the scope. Note that the average difference is stable across models, whose size does not seem to matter. It shows that the strength of the encoding of not in contextual representations matches the linguistic notion of negation scope.
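The average accuracy gap can be computed as sketched below; the per-position accuracies here are toy numbers, not the paper's figures.

```python
# Toy illustration of the average accuracy gap: per-position accuracies for
# in-scope vs. out-of-scope input tokens, differenced position by position
# and then averaged. The numbers are made up for the example.
acc_in_scope  = {1: 0.82, 2: 0.80, 3: 0.77, 4: 0.75}   # e.g. IN zone
acc_out_scope = {1: 0.78, 2: 0.77, 3: 0.75, 4: 0.73}   # e.g. POST zone

gap = sum(acc_in_scope[p] - acc_out_scope[p]
          for p in acc_in_scope) / len(acc_in_scope)
print(f"average accuracy gap: {100 * gap:.1f} points")
```

Averaging the per-position differences, rather than comparing pooled accuracies, is what keeps distance to not from confounding the zone comparison.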

${}^{8}$ COCA sentences are tokenized and tagged. We detokenized them before BERT/ROBERTA tokenization, in order to get closer to a standard input.


Table 6: Example sentence from the not+NPI test set: structural zones and relative positions to not. Any auxiliary or modal preceding the target not has position 0 too, to homogenize contracted and plain negation, and BERT versus ROBERTA tokenization.
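The position scheme of Table 6 can be sketched as follows. This is a word-level toy approximation of ours: the real positions are computed over BERT/RoBERTa subword tokens, and the auxiliary list here is illustrative.

```python
# Toy sketch of the relative-position scheme: the target `not` and any
# immediately preceding auxiliary or modal form a block at position 0;
# other tokens get their signed distance to that block.
AUXILIARIES = {"do", "does", "did", "is", "are", "was", "were", "have",
               "has", "had", "can", "could", "will", "would", "should"}

def relative_positions(tokens, not_idx):
    block = {not_idx}
    if not_idx > 0 and tokens[not_idx - 1].lower() in AUXILIARIES:
        block.add(not_idx - 1)   # homogenize contracted and plain negation
    left, right = min(block), max(block)
    return [0 if left <= i <= right
            else (i - right if i > right else i - left)
            for i in range(len(tokens))]

print(relative_positions(["Sam", "did", "not", "find", "any", "books"], not_idx=2))
# → [-1, 0, 0, 1, 2, 3]
```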
Figure 1: Accuracy of the ROBERTA-large neg-classifier (average over 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains fewer than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < 0.001$ (cf. Appendix B). Figures for the other 3 models are provided in Appendix C.

| $\mathrm{BERT}_b$ | $\mathrm{BERT}_l$ | $\mathrm{ROB}_b$ | $\mathrm{ROB}_l$ |
|------:|------:|------:|------:|
| 3.0 (0.6) | 3.5 (0.2) | 2.6 (0.2) | 2.6 (1.3) |

Table 7: Accuracy gaps for the neg-classifiers on the not+NPI test set, for each tested PLM, averaged over 14 relative positions and 3 runs (stdev within brackets).

We also observe that the biggest difference occurs at position -1. This corresponds mostly to a contrast between a finite vs. non-finite negated verb (neg-patterns 1/2/3 vs. neg-pattern 5 in Table 1), which seems well reflected in PLMs' embeddings.

§ 3.2 PROBING FOR THE LICENSING SCOPE

We then focused on whether this encoding of not can actually be mobilized to capture the licensing of a NPI. We built classifiers (hereafter the $m$-pol-classifiers, with $m$ the name of the studied PLM), taking an input contextual embedding, and predicting as target information the polarity of a masked position, originally filled with a positive or negative PI. Importantly, the input embedding in the training set is randomly chosen in the sentence, and can correspond to a position that is or isn't linguistically related to the polarity of the PI (cf. Figure 2). This avoids using linguistic preconceptions while building the classifiers.

Figure 2: Illustration of the training of the pol-classifiers.

We train on sentences originally having either a PPI or a NPI, which we mask before running each studied PLM. More precisely, in each COCA sub-corpus (each genre), and for each of the 6 NPI/PPI pairs listed by Jumelet and Hupkes (2018)${}^{9}$, we randomly took at most 2,000 sentences containing the NPI, and the same number of sentences containing the corresponding PPI${}^{10}$. In each of these, we masked the PI, randomly selected one token per sentence to serve as input token (excluding the masked position), and split these into 63,529 examples for training (the pol-train-set) and 15,883 for testing (the pol-test-set).
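Building one such training example can be sketched as follows. This is our own word-level toy version: the NPI list comes from section 2.3, but the helper function and masking convention are illustrative, not the paper's code.

```python
import random

# NPIs considered in the paper (section 2.3); their PPI counterparts are the
# pairs of Jumelet and Hupkes (2018).
NPIS = {"any", "anybody", "anyone", "anything", "anytime", "anywhere"}

def make_example(tokens, pi_index, seed=0):
    """Mask the polarity item, label its polarity, pick a random input token."""
    label = "negative" if tokens[pi_index].lower() in NPIS else "positive"
    masked = list(tokens)
    masked[pi_index] = "[MASK]"
    # The input token is chosen at random among all other positions: it may
    # or may not be linguistically related to the polarity of the PI.
    candidates = [i for i in range(len(tokens)) if i != pi_index]
    input_pos = random.Random(seed).choice(candidates)
    return masked, input_pos, label

masked, pos, label = make_example(["Sam", "did", "n't", "find", "any", "books"], 4)
print(masked, pos, label)
```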

| Model | $\mathrm{BERT}_b$ | $\mathrm{BERT}_l$ | $\mathrm{ROB}_b$ | $\mathrm{ROB}_l$ |
|-------|------:|------:|------:|------:|
| Accur. | 64.2 | 63.7 | 56.6 | 68.6 |

Table 8: Accuracies of the pol-classifiers on the pol-test-set for each PLM (averaged over 3 runs).

Accuracies on the pol-test-set for each PLM are shown in Table 8. While still above chance, accuracy does not exceed 69%, which is substantially lower than the accuracies of the neg-classifiers (Table 5). This is not surprising, since the task is more difficult. First, as stressed above, some of the training input tokens are, from the linguistic point of view, independent of the PI's polarity. Second, the cues for predicting the polarity are diverse. And third, in numerous contexts, both polarities are indeed possible, even though not equally likely. We did not control the training for this, on purpose, so as not to introduce any additional bias in the data. We can thus interpret the pol-classifiers' scores as reflecting how likely a given polarity is.

Next, we applied these classifiers to the not+NPI test set. The objective is to compare the classifiers' accuracy depending on the structural zone the input token belongs to. If PLMs have a notion of licensing scope, then polarity prediction should be more accurate when using an input token from the IN zone.

§ 3.2.1 RESULTS
|
| 491 |
+
|
| 492 |
+
Once more, we control for distance of the in- 708 put embedding to not. The break-down by position and structural zone for ROBERTA-large is provided in Figure 3 (results for other models are in Appendix C).
Again, we observe a general accuracy decrease as we move away from not, and this decrease is faster than in the previous experiment. We also note that the decrease is more rapid in the PRE-IN zone than in the IN zone (for instance, at distance -4 in PRE-IN the accuracy is below 70%, whereas it is still above 70% at distance 8 in the IN zone). This tends to indicate that the traces of not are more robust in the licensing scope.
Secondly, as in the previous experiment, for each relative position the accuracy is higher when the input token is in the negation scope (either PRE-IN or IN) than when it is outside (PRE and POST). Even though we cannot exclude that the relatively high overall accuracies are partly explained by the classifier catching regularities of the sentences containing an NPI rather than a PPI (independently of the presence of not), it remains that for the not+NPI sentences, accuracy is higher when the input token is in the negation scope than outside it. Moreover, this trend is much more marked than in the previous experiment.
Thirdly, the magnitude of this effect depends on the model. We provide the accuracy gaps for each PLM in Table 9. The trend is marked for ROBERTA-large and BERT-base (gaps of 8.7 and 7.4 accuracy points, actually much higher than the accuracy gaps for predicting the presence of not), but weaker for ROBERTA-base and BERT-large.
| ${\mathrm{{BERT}}}_{b}$ | ${\mathrm{{BERT}}}_{l}$ | ${\mathrm{{ROB}}}_{b}$ | ${\mathrm{{ROB}}}_{l}$ |
|---|---|---|---|
| 7.4 (0.5) | 3.1 (0.4) | 1.4 (0.2) | 8.7 (0.6) |

Table 9: Accuracy gaps for the pol-classifiers on the not+NPI test set, averaged over 14 relative positions and 3 runs (stdev within brackets).
This leads us to conclude that (i) PLMs do encode structural constraints imposed by not (NPI licensing), but to varying degrees across the PLMs
${}^{9}$ any/some(∅/where/one/body/thing/time)
${}^{10}$ For any/some(∅/one/thing), we took $2 \times 2000$ occurrences. For any/some(body/time/where), fewer occurrences were available in some of the subcorpora. We took as many as possible, while keeping a strict balance between NPI and PPI sentences (between $2 \times 169$ and $2 \times 958$ depending on the corpus genre and on the NPI/PPI pair).
Figure 3: Accuracy of the ROBERTA-large-pol-classifier (average over 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains fewer than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < {0.001}$ (cf. Appendix B).
we tested, and (ii) that this encoding is stronger in the negation scope than outside it, independently of the distance to not. This only partially matches the linguistic expectation that the strongest zone should be the licensing scope rather than the entire negation scope.
## 4 Conclusion
In this paper, we studied the way negation and its scope are encoded in contextual representations of PLMs and to what extent this encoding is used to model NPI licensing.
Classifiers were trained to predict the presence of negation in a sentence from the contextual representation of a random token. We also trained classifiers to predict the polarity of a masked polar item from the contextual representation of a random token. A test set of sentences was designed with not licensing an NPI, inside which we identified the negation scope (roughly the clause), and the licensing scope (roughly the VP).
For these sentences, we found that the contextual embeddings of tokens within the scope of a negation allow a better prediction of the presence of not. These embeddings also allow a better prediction of the (negative) polarity of a masked PI. These results hold even when controlling for the distance to not.
We conclude that the PLMs we tested indeed encode a notion of negation scope in their contextual representations. However, we could not find a consistent encoding of the narrower (and probably more difficult to define) notion of negative polarity licensing scope. Moreover, variation across PLMs remains to be explained through further studies.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_md/Initial_manuscript.md
# Length Dependence of Vocabulary Richness
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract
The relation between the length of a text and the number of unique words is investigated using several Swedish language corpora. We consider a number of existing measures of vocabulary richness, show that they are not length-independent, and try to improve on some of them based on statistical evidence. We also look at the spectrum of values over text lengths, and find that genres have characteristic shapes.
## 1 Introduction
Measures of lexical richness have several uses, including author identification, other forms of text classification, and estimating how difficult a text is. One of the simplest and most obvious measures of lexical richness is to compare the size of the vocabulary (that is, how many different words) to the size of the text (how many words in total). This can be done in several ways, most straightforwardly as the type-token ratio (henceforth TTR), $u/n$, where $u$ is the number of unique words (types) and $n$ is the total number of words (tokens). Thus, for the sentence "this example is this example", there are three types and five tokens, so TTR is $u/n = 3/5 = {0.6}$.
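As a minimal sketch (the function name is ours), TTR is a one-liner:

```python
def type_token_ratio(tokens):
    """Type-token ratio: unique words (types) divided by total words (tokens)."""
    return len(set(tokens)) / len(tokens)

# 3 types, 5 tokens:
print(type_token_ratio("this example is this example".split()))  # 0.6
```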
The obvious problem with TTR is that it changes with the length of the text. As we write a text, the more words we have already written, the more likely it is that the next word will be one that has already been used, so TTR goes down as the text grows longer. Many attempts have been made to transform this measure into something independent of the length of the text, but many of those attempts were made in an age before "big data", or even before computers, and were based on a priori reasoning rather than statistical analysis (Tweedie and Baayen, 1998).
We will start by looking at some of these measures, and test them on a set of corpora from Språkbanken to see how they hold up for a wide range of different $n$. After comparing some of the previous methods, we will briefly look into using the empirical data to come up with a better suggestion. The results give rise to another question: what if, instead of aiming for a length-independent measure, we consider how the values change with the length? Can that actually tell us new and interesting things?

We find that if we analyse the type count for different sample lengths, we see clear and consistent differences between different types of text. This may be useful for genre classification, or for a more detailed description of the complexity of the text.
Although these measures are usually applied to specific texts, we here apply them to entire corpora. We will discuss the effects of this after seeing the results.
## 2 Data
Språkbanken (the Swedish Language Bank) at the University of Gothenburg (spraakbanken.gu.se) has a large collection of text corpora, mainly in Swedish but including several other languages. In this study, we use Swedish texts, focusing on large and homogeneous corpora.
We extract the type count $u$ for several different lengths $n$. For each $n$, we divide the corpus into chunks of length $n$, dropping any overflow at the end, and take the mean value of $u$ over these chunks. (In some cases we remove the last value for being an outlier; presumably this is because it is the only value where a large part of the data is dropped due to overflow.) We use a pseudo-logarithmic scale for ease of reading, extracting values for $n = {10},{20},{50},{100},{200},{500},{1000}\ldots$ up to the maximum possible for each corpus; the largest go up to 500 million tokens.
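The chunking procedure can be sketched as follows (a simplified version, assuming the corpus holds at least one full chunk; the outlier handling for the last chunk is omitted):

```python
def mean_type_count(tokens, n):
    """Mean number of unique words (types) over consecutive chunks of
    length n, dropping the overflow at the end of the corpus."""
    chunks = [tokens[i:i + n] for i in range(0, len(tokens) - n + 1, n)]
    return sum(len(set(c)) for c in chunks) / len(chunks)
```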
## 3 Testing existing measures
First of all, we can test and verify that TTR does go down. Figure 1 shows TTR for 31 corpora.
Figure 1: Type-token ratio
It seems likely that, as we compare different-size corpora, effects of size changes might be best described in terms of multiplicative changes rather than additive, so we might try looking at the logarithms of $n$ and $u$ . We see in Figure 2 that the result looks fairly close to a straight line.
Figure 2: Type count
The first obvious method, then, is to assume that this is indeed a straight line, and use the slope of that line as our presumed length-independent measure of richness, that is, $\log u/\log n$. This was proposed by Herdan (1964). We see in Figure 3 that the measure decreases quite steadily for all the texts. The six corpora used here are chosen partly for being large, and partly for having large differences in type count; many other corpora are not nearly as well separated.
Figure 3: Herdan's measure
Let us pause for a moment and consider what this figure illustrates. The fact that the measure decreases is not in itself a problem; we may be aiming for a near-constant, but we should not expect it to be completely perfect. The amount of variation is also not relevant; we could change it by adding or multiplying by a constant, which would also change the axes of the graph, so a glance at the variation of a single curve in the graph does not tell us whether the measure is near-constant.
What actually matters is comparing the curves. If the measure is to reliably compare different texts, regardless of the (sample) size for each text, what we need is to have the lines separated insofar as possible. If the lowest point of curve A is higher than the highest point of curve B, then we have successfully determined that A has a higher richness. We should also keep in mind that the first few points of the curve are not as important - we are probably not very interested in measuring richness for very short texts, so although the graphs go all the way from 10, we can mostly ignore values below 1000 or so. We would be content if the measure can separate the lines from that point on.
As we see in Figure 3, this is not quite the case here. This measure works considerably better than TTR, but the curves are still close enough that their ranges overlap. We will compare with a few other measures.
Guiraud (in 1954, as cited by Hultman and Westman (1977)) proposed the measure $u/\sqrt{n}$, shown in Figure 4. This does not separate the curves particularly well, and does not seem to have any advantage over the previous method.
Figure 4: Guiraud's measure
Dugast (1979) built on Herdan by suggesting $\log u/\log \log n$, seen in Figure 5. We find no advantage in this method, only added conceptual complexity from the double logarithm.
Figure 5: Dugast's measure
Brunet (1978) proposed $n^{(u^{-a})}$, where usually $a = {0.172}$. This is shown in Figure 6. This too is a conceptually complicated method, and it shows no sign of improving the results.
Maas (1972) took another approach, with $\left( {\log n - \log u}\right) /{\left( \log n\right) }^{2}$; see Figure 7. This seems marginally more effective at separating the curves.
Figure 6: Brunet's measure

Figure 7: Maas's measure
Hultman and Westman (1977) defined the OVIX measure as

$$
\frac{\log n}{\log \left( 2 - \frac{\log u}{\log n} \right)}
$$

which is seen in Figure 8. This measure is commonly used in Sweden, including by Språkbanken. As we see, it also does a passable job, but there is a clear rising trend for most curves. This is confirmed by further testing on other corpora.
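For reference, the measures discussed in this section can be written out directly (a sketch with names of our own choosing; $u$ is the type count and $n$ the token count):

```python
import math

def herdan(u, n):            # Herdan (1964)
    return math.log(u) / math.log(n)

def guiraud(u, n):           # Guiraud (1954)
    return u / math.sqrt(n)

def dugast(u, n):            # Dugast (1979)
    return math.log(u) / math.log(math.log(n))

def brunet(u, n, a=0.172):   # Brunet (1978)
    return n ** (u ** -a)

def maas(u, n):              # Maas (1972)
    return (math.log(n) - math.log(u)) / math.log(n) ** 2

def ovix(u, n):              # Hultman and Westman (1977)
    return math.log(n) / math.log(2 - math.log(u) / math.log(n))
```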
## 4 Improving measures
By analysing the way these measures depend on $n$, we may be able to adjust and improve them. As noted, the fact that the curve of $\log u$ against $\log n$ is close to a line suggests that $\log u/\log n$ may be
Figure 8: OVIX
a constant, as per Herdan. But that assumes that the line passes through (0,0); if the line passes through (0, m) for some $m$, we should expect that $(\log u - m)/\log n$ is constant. We find that for a subset of the corpora, the best-fitting line gives $m = {0.4}$, and we see in Figure 9 that $(\log u - {0.4})/\log n$ does look a lot flatter. As before, we pay less attention to the values where $n < {1000}$.
Figure 9: Herdan with constant term
On the other hand, we know that a text with one word certainly also has one unique word, so logically the curve of $\log u$ against $\log n$ must pass through (0,0). Empiricism is all good and well, but if we want results that hold up for other data, perhaps we are better off not violating basic logic. What if, instead of a line, we fit the points to a polynomial curve with zero constant term? Trying second-, third- and fourth-order polynomials suggests that third is a good compromise. We find the best fit for six corpora, take the average of the quadratic and cubic terms, and get the adjusted measure
$$
\log u/\log n + {0.044}{\left( \log n\right) }^{2} - {0.0024}{\left( \log n\right) }^{3}
$$
You can see in Figure 10 that this separates the curves considerably better than the pure Herdan measure. Judging from the graph, this is probably the best option we have here, but we should note that the coefficients vary quite a bit between corpora (standard deviations are 0.015 and 0.0017), so it is not universal enough to adopt as some sort of standard measure.
Figure 10: Herdan with cubic fit
Figure 11: Adjusted Guiraud
We can also consider the Guiraud approach, and try to adjust it. We notice that while TTR (where we divide by $n$) goes steadily down, Guiraud (where we divide by ${n}^{0.5}$) goes up. Perhaps we can find a middle ground? Figure 11 shows the results for $u/{n}^{0.75}$, which looks much flatter overall and separates the curves better. This may not be a better result than the previous one, but it does have the advantage of not depending on experimentally determined coefficients.
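The two adjusted measures from this section can be sketched the same way (names ours; the cubic coefficients are the corpus-dependent averages fitted above, so they are not universal):

```python
import math

def herdan_cubic(u, n):
    """Herdan's measure with the cubic correction fitted in this section.
    Coefficients 0.044 and 0.0024 are averages over six corpora."""
    ln = math.log(n)
    return math.log(u) / ln + 0.044 * ln ** 2 - 0.0024 * ln ** 3

def guiraud_adjusted(u, n):
    """Middle ground between TTR (u / n) and Guiraud (u / n**0.5)."""
    return u / n ** 0.75
```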
Is there another option, using only the length and the type count? Yes, there is an option which is in principle completely independent of text length: Measure the type count (or equivalently TTR) for a fixed length. One option would be to measure only the first $n$ words of a text, but that could mean that a small part of the text has a large impact, so probably a better method is to cut the text into pieces of length $n$ and take the average, exactly as we have done above.
Figure 12: TTR at $n = {10000}$
Figure 12 shows the results for $n = {10000}$, on 39 corpora. We see that it separates several categories of text fairly well. The eight newspaper corpora are above all but one other corpus, with the three oldest getting the highest values, followed by the two from the late 1900s, then the two from printed newspapers in 2000 and 2014, and last the web-based news texts. The social media and blog texts are a little more scattered, but all below the mean, except Twitter, which in both cases is higher. The four corpora of novels are not all at quite the same level, but all higher than all of the ones in the "easy read" category. In that category, young adult literature is the highest and children's literature the lowest. Parliamentary data is all below the mean but above "easy read". Near the bottom we find, perhaps surprisingly, the Bible, along with Wikipedia, neither of which is primarily known as an easy read. Altogether, these results suggest that this is at least a meaningful measure.
That leaves the question of choosing an $n$. Very low values might give strange effects, and very high values would make the measure unusable for shorter texts. Other values were tested for comparison: $n = {10}$ gives little useful information, while $n = {100}$ ranks all the novels below most of the social media, and beyond that we get mostly unremarkable results from just looking at the ranking. Based on these limited results, $n = {10000}$ seems like a good choice if we are working with relatively long texts; otherwise we can settle for $n = {1000}$.
## 5 Spectrum comparison
Instead of considering type counts for only one $n$, what if we measure for many values of $n$ and look at the whole spectrum? This is essentially what we already did in all of section 3, and we could see that the curves for the different corpora certainly did have different shapes - some of them even crossed each other, which implies that any one number is not going to tell us the whole truth.
To compare corpora instead of methods, we need to pick one method, one way to transform $u$ based on $n$. Using plain TTR as seen in Figure 1 would make it difficult to tell the difference between shapes, and picking one of the tested methods seems like too arbitrary a choice. So for the purposes of this section, we will sidestep the problem. We normalise the type count (or equivalently TTR) for each $n$ by subtracting the mean and dividing by the standard deviation. That is, the values on the vertical axis are in terms of standard deviations above the mean, counted for each separate value on the horizontal axis. (For the very highest values, the mean and sd change erratically because corpora drop off. We adjust the normalisation to gradually change from the actual mean and sd to extrapolated values.)
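The per-$n$ normalisation is a plain z-score across corpora (a sketch; the extrapolation used at the highest values of $n$ is omitted):

```python
import statistics

def normalise_spectrum(values_at_n):
    """Z-scores: how many standard deviations each corpus's type count
    lies above the mean of all corpora at this particular length n."""
    mu = statistics.mean(values_at_n)
    sd = statistics.stdev(values_at_n)
    return [(v - mu) / sd for v in values_at_n]
```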
Figures 13-22 show the spectra for each category. Some curves are shorter because of limited data. Figures 13-15 show three different types of web-based texts: one set of blog texts and two different internet forums. We can see that each category is a little different, but all the curves share some characteristics - first a short rise, then a drop, then a flatter stretch, and finally a small rise. Most of them start slightly above the mean and end below the mean.
Figure 13: Spectrum for blog texts

Figure 14: Spectrum for the Familjeliv forum
Figure 16 shows the "easy read" category. Despite being unrelated, the curves share the same shape, which is clearly different from the web-based corpora: a drop, then a rise, peaking around 1000 without reaching the mean, then a drop.
Figure 15: Spectrum for the Flashback forum

Figure 16: Spectrum for easy-read texts
Figures 17-18 show news texts, with Figure 17 showing three newspapers from the early 1900s, and Figure 18 showing four more recent newspapers and one web-based news corpus. As with the blog/forum collection, we see that these two related categories have clear similarities: a slow rise up to between ten thousand and a hundred thousand, and then a sharp drop. But they are also visibly distinct, with the older newspapers having higher values and rising near the end. Aside from some more unpredictable behaviour for $n < {1000}$, the curves in each category are remarkably similar in both shape and level.
Figures 19-20 show literary texts, with Figure 19 showing regular novels and Figure 20 showing children's fiction and young adult fiction. They are all comparatively straight, dropping slightly.
Figure 19: Spectrum for novels
Figure 17: Spectrum for old newspapers

Figure 18: Spectrum for recent newspapers

Figure 20: Spectrum for youth novels
Children's literature is generally lower than young adult literature, and both drop faster than the curves for books aimed at adults.

Figure 21 shows religious texts. We see two translations of the Bible, with very similar curves - both dropping, rising, then levelling out, but unlike the easy-read category they level out at about the same level where they started. Also included is a book of church hymns, which happens to level out at a similar level, but starts with a large rise.

Figure 21: Spectrum for religious texts

Finally, in Figure 22, we see three uncategorised corpora - one from a 1700s songwriter, one from a popular science magazine, and one from Wikipedia. As expected, they show very different shapes and levels, and are clearly distinct from each other as well as from all the other curves.

## 6 Applicability

Is it reasonable to apply measures like these to an entire corpus instead of just separate texts? First, "separate texts" is not necessarily well defined. Is a newspaper one text, or each article? Books in a series? Multiple entries posted on the same web page? Second, for the lower values of $n$, running the entire corpus at once should not make a big difference. For example, if $n = {100}$ and the typical length of a text is 10000, only about $1\%$ of samples would contain two texts, and the rest only one. For the higher values of $n$, using only separate texts would leave us with no data at all - it would be difficult to find singular coherent texts spanning hundreds of millions of words. This means that allowing corpora of multiple authors and topics is our only option if we want results for large $n$.
Figure 22: Spectrum for some other texts
But we can also look at the results. Are the differences between the curves largely caused by differences in text length? If that were the case, we would expect that when a curve reaches the "critical $n$" where we go from a single text to multiple texts, the vocabulary richness should increase rapidly. The curve we would expect to see is one that starts out mostly flat (because hardly any texts are that short), then slowly decreases (as other corpora reach their critical $n$ and bring up the mean), then rapidly jumps up as it reaches its own critical $n$, and then slowly decreases again. This is not a pattern that we see anywhere, so we can conclude that text length is not the driving factor of the curve shapes.
## 7 Conclusion
It is clear that the task of finding a length-independent measure of vocabulary richness is difficult at best. We have seen that many traditionally used measures are not satisfactory, and offered some suggestions as to how they can be improved. Perhaps the most obvious approach is to use average TTR over a fixed sample length, with 10000 being a good sample length when possible.
The figures show that the curves have very different shapes, and often cross. This means that the ranking of corpora changes depending on the length of text we are looking at, so a perfect solution is not possible, or at least cannot be expressed as a single number.
Is this spectrum method useful for genre classification? It is perhaps rare that we need to analyse entire hundred-million-word corpora to see if they are made up of novels or newspapers, but we do see that there are some differences even for much shorter lengths. We have also gained insight into what makes it difficult to find a good measure of vocabulary richness. But most importantly, we have seen that there are notable and interesting differences between genres, and raised for future research the question of why.
## References
Etienne Brunet. 1978. Le vocabulaire de Jean Giraudoux: structure et évolution. Slatkine, Genève.

Daniel Dugast. 1979. Vocabulaire et stylistique, volume 8. Slatkine, Genève.

Gustav Herdan. 1964. Quantitative linguistics. Butterworth, London.

Tor G. Hultman and Margareta Westman. 1977. Gymnasistsvenska. Liber Läromedel, Lund.

Heinz-Dieter Maas. 1972. Über den Zusammenhang zwischen Wortschatzumfang und Länge eines Textes. Zeitschrift für Literaturwissenschaft und Linguistik, 2(8):73.

Fiona J. Tweedie and R. Harald Baayen. 1998. How variable may a constant be? Measures of lexical richness in perspective. Computers and the Humanities, 32:323-352.
841
|
| 820 |
+
|
| 821 |
+
843
|
| 822 |
+
|
| 823 |
+
846
|
| 824 |
+
|
| 825 |
+
847
|
| 826 |
+
|
| 827 |
+
848
|
| 828 |
+
|
| 829 |
+
849
|
| 830 |
+
|
| 831 |
+
850
|
| 832 |
+
|
| 833 |
+
851
|
| 834 |
+
|
| 835 |
+
853
|
| 836 |
+
|
| 837 |
+
858
|
| 838 |
+
|
| 839 |
+
859
|
| 840 |
+
|
| 841 |
+
860
|
| 842 |
+
|
| 843 |
+
861
|
| 844 |
+
|
| 845 |
+
862
|
| 846 |
+
|
| 847 |
+
863
|
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_tex/Initial_manuscript.tex
§ LENGTH DEPENDENCE OF VOCABULARY RICHNESS
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT

The relation between the length of a text and the number of unique words is investigated using several Swedish-language corpora. We consider a number of existing measures of vocabulary richness, show that they are not length-independent, and try to improve on some of them based on statistical evidence. We also look at the spectrum of values over text lengths, and find that genres have characteristic shapes.
§ 1 INTRODUCTION

Measures of lexical richness have several uses, including author identification, other forms of text classification, and estimating how difficult a text is. One of the simplest and most obvious measures of lexical richness is to compare the size of the vocabulary (that is, how many different words) to the size of the text (how many words in total). This can be done in several ways, most straightforwardly as the type-token ratio (henceforth TTR), $u/n$, where $u$ is the number of unique words (types) and $n$ is the total number of words (tokens). Thus, for the sentence "this example is this example", there are three types and five tokens, so TTR is $u/n = 3/5 = {0.6}$.
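The running example can be checked directly; a minimal sketch:

```python
def ttr(tokens):
    """Type-token ratio: number of types u over number of tokens n."""
    return len(set(tokens)) / len(tokens)

# "this example is this example": three types, five tokens.
print(ttr("this example is this example".split()))  # → 0.6
```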
The obvious problem with TTR is that it changes with the length of the text. As we write a text, the more words we have already written, the more likely it is that the next word will be one that has already been used, so TTR goes down as the text grows longer. Many attempts have been made to transform this measure into something independent of the length of the text, but many of those attempts were made in an age before "big data", or even before computers, and were based on a priori reasoning rather than statistical analysis (Tweedie and Baayen, 1998).
We will start by looking at some of these measures, and test them on a set of corpora from Språkbanken to see how they hold up for a wide range of different $n$. After comparing some of the previous methods, we will briefly look into using the empirical data to come up with a better suggestion. The results give rise to another question: what if, instead of aiming for a length-independent measure, we consider how the values change with the length? Can that actually tell us new and interesting things?
We find that if we analyse the type count for different sample lengths, we see clear and consistent differences between different types of text. This may be useful for genre classification, or for a more detailed description of the complexity of the text.
Although these measures are usually applied to specific texts, we here apply them to entire corpora. We will discuss the effects of this after seeing the results.
§ 2 DATA
Språkbanken (the Swedish Language Bank) at the University of Gothenburg (spraakbanken.gu.se) has a large collection of text corpora, mainly in Swedish but including several other languages. In this study, we use Swedish texts, focusing on large and homogeneous corpora.
We extract the type count $u$ for several different lengths $n$. For each $n$, we divide the corpus into chunks of length $n$, dropping any overflow at the end, and take the mean value of $u$ over these chunks. (In some cases we remove the last value for being an outlier; presumably this is because it is the only value where a large part of the data is dropped due to overflow.) We use a pseudo-logarithmic scale for ease of reading, extracting values for $n = {10},{20},{50},{100},{200},{500},{1000}\ldots$ up to the maximum possible for each corpus; the largest go up to 500 million tokens.
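The extraction just described can be sketched as follows; this is a simplified version that leaves out tokenisation, the outlier removal, and the per-corpus cap.

```python
def length_grid(max_n):
    """Pseudo-logarithmic sample lengths: 10, 20, 50, 100, 200, 500, ..."""
    ns, scale = [], 1
    while 10 * scale <= max_n:
        ns.extend(m * scale for m in (10, 20, 50) if m * scale <= max_n)
        scale *= 10
    return ns

def mean_type_count(tokens, n):
    """Mean number of types over consecutive chunks of length n,
    dropping any overflow at the end."""
    chunks = [tokens[i:i + n] for i in range(0, len(tokens) - n + 1, n)]
    return sum(len(set(c)) for c in chunks) / len(chunks)
```

Running `mean_type_count` over `length_grid(len(tokens))` yields one $(n, u)$ curve per corpus.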
§ 3 TESTING EXISTING MEASURES
First of all, we can test and verify that TTR does go down. Figure 1 shows TTR for 31 corpora.
<graphics>

Figure 1: Type-token ratio
It seems likely that, as we compare different-size corpora, effects of size changes might be best described in terms of multiplicative changes rather than additive, so we might try looking at the logarithms of $n$ and $u$ . We see in Figure 2 that the result looks fairly close to a straight line.
<graphics>

Figure 2: Type count
The first obvious method, then, is to assume that this is indeed a straight line, and use the slope of that line as our presumed length-independent measure of richness, that is, $\log u/\log n$. This was proposed by Herdan (1964). We see in Figure 3 that the measure is decreasing quite steadily for all the texts. The six corpora used here are chosen partly for being large, and partly for having large differences in type count; many other corpora are not nearly as well separated.
<graphics>

Figure 3: Herdan's measure
Let us pause for a moment and consider what this figure illustrates. The fact that the measure decreases is not in itself a problem; we may be aiming for a near-constant, but we should not expect it to be completely perfect. The amount of variation is also not relevant; we could change that by adding or multiplying by a constant. Regardless of how large the variation is, doing so would also change the axes of the graph, so a glance at the variation of a single curve does not tell us whether the measure is near-constant.
What actually matters is comparing the curves. If the measure is to reliably compare different texts, regardless of the (sample) size of each text, what we need is to have the lines separated insofar as possible. If the lowest point of curve A is higher than the highest point of curve B, then we have successfully determined that A has a higher richness. We should also keep in mind that the first few points of each curve are not as important - we are probably not very interested in measuring richness for very short texts, so although the graphs go all the way from 10, we can mostly ignore values below 1000 or so. We would be content if the measure can separate the lines from that point on.
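The separation criterion can be made concrete. The curves below are invented values of some measure, keyed by sample length:

```python
def separates(curve_a, curve_b, min_n=1000):
    """True if A's lowest point (for n >= min_n) stays above B's
    highest point, i.e. the measure ranks A over B regardless of
    which sample length we happen to look at."""
    a = [v for n, v in curve_a.items() if n >= min_n]
    b = [v for n, v in curve_b.items() if n >= min_n]
    return min(a) > max(b)

# Hypothetical measure values for two corpora.
novels = {1000: 0.52, 10000: 0.50, 100000: 0.49}
easy_read = {1000: 0.44, 10000: 0.43, 100000: 0.41}
print(separates(novels, easy_read))  # → True
```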
As we see in Figure 3, this is not quite the case here. This measure works considerably better than TTR, but the curves are still close enough that their ranges overlap. We will compare it with a few other measures.
Guiraud (in 1954, as cited by Hultman and Westman (1977)) proposed the measure $u/\sqrt{n}$, shown in Figure 4. This does not separate the curves particularly well, and does not seem to have any advantage over the previous method.
<graphics>

Figure 4: Guiraud's measure
Dugast (1979) built on Herdan by suggesting $\log u/\log \log n$, seen in Figure 5. We find no advantage with this method, which only adds conceptual complexity with the double logarithm.
<graphics>

Figure 5: Dugast's measure
Brunet (1978) proposed $n^{u^{-a}}$, where usually $a = 0.172$. This is shown in Figure 6. This too is a fairly conceptually complicated method which shows no sign of improving the results.

Maas (1972) found another approach, with $(\log n - \log u)/(\log n)^{2}$, see Figure 7. This seems marginally more effective at separating the curves.
<graphics>

Figure 6: Brunet's measure

<graphics>

Figure 7: Maas's measure
Hultman and Westman (1977) defined the OVIX measure as

$$
\frac{\log n}{\log \left( 2 - \frac{\log u}{\log n} \right)}
$$

which is seen in Figure 8. This measure is commonly used in Sweden, including by Språkbanken. As we see, it also does a passable job, but there is a clear rising trend for most curves. This is confirmed by further testing on other corpora.
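For reference, the measures discussed in this section can be computed as below. The formulas follow the text; natural logarithms are assumed throughout, which is an assumption on our part.

```python
import math

# Each function takes a sample of n tokens with u types.
def herdan(n, u):   return math.log(u) / math.log(n)
def guiraud(n, u):  return u / math.sqrt(n)
def dugast(n, u):   return math.log(u) / math.log(math.log(n))
def brunet(n, u, a=0.172):  return n ** (u ** -a)  # a = 0.172 is the usual value
def maas(n, u):     return (math.log(n) - math.log(u)) / math.log(n) ** 2
def ovix(n, u):
    return math.log(n) / math.log(2 - math.log(u) / math.log(n))
```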
§ 4 IMPROVING MEASURES
<graphics>

Figure 8: Ovix

By analysing the way these measures depend on $n$, we may be able to adjust and improve them. As noted, the fact that the curve of $\log u$ against $\log n$ is close to a line suggests that $\log u/\log n$ may be a constant, as per Herdan. But that assumes that the line passes through $(0,0)$; if the line passes through $(0, m)$ for some $m$, we should instead expect $(\log u - m)/\log n$ to be constant. We find that for a subset of the corpora, the best-fitting line gives $m = 0.4$, and we see in Figure 9 that $(\log u - 0.4)/\log n$ does look a lot flatter. As before, we pay less attention to the values where $n < {1000}$.
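Fitting the log-log line with a free intercept is a one-liner. The $(n, u)$ pairs below are invented, and base-10 logarithms are an arbitrary choice; this is a sketch of the idea, not the paper's exact computation.

```python
import numpy as np

# Invented (n, u) pairs for one corpus, roughly linear in log-log.
n = np.array([1e3, 1e4, 1e5, 1e6])
u = np.array([430.0, 2600.0, 14000.0, 70000.0])

# Fit log u = k * log n + m; Herdan's measure assumes m = 0.
k, m = np.polyfit(np.log10(n), np.log10(u), 1)

# Removing the fitted intercept before dividing flattens the curve.
adjusted = (np.log10(u) - m) / np.log10(n)
plain = np.log10(u) / np.log10(n)
```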
<graphics>

Figure 9: Herdan with constant term
On the other hand, we know that a text with one word certainly also has one unique word, so logically the curve of $\log u$ against $\log n$ must pass through $(0,0)$. Empiricism is all good and well, but if we want results that hold up for other data, perhaps we are better off not violating basic logic. What if instead of a line, we fit the points to a polynomial curve with zero constant term? Trying second, third and fourth order polynomials suggests that third is a good compromise. We find the best fit for six corpora, take the average of the quadratic and cubic terms, and get the adjusted measure

$$
\log u/\log n + 0.044(\log n)^{2} - 0.0024(\log n)^{3}
$$
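The constrained polynomial fit can be sketched with a design matrix that simply omits the constant column. Data and fitted coefficients here are invented for illustration, not the paper's.

```python
import numpy as np

x = np.log10(np.array([1e3, 1e4, 1e5, 1e6]))               # log n
y = np.log10(np.array([430.0, 2600.0, 14000.0, 70000.0]))  # log u (invented)

# Least squares for y = a*x + b*x^2 + c*x^3: with no intercept column,
# the fitted curve is forced through (0, 0).
X = np.column_stack([x, x**2, x**3])
(a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)

# Subtracting the fitted higher-order terms from log u / log n leaves
# approximately the constant a at every sample length.
adjusted = y / x - b * x - c * x**2
```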
You can see in Figure 10 that this separates the curves considerably better than the pure Herdan measure. From looking at the graph, this is probably the best option we have here, but we should note that the coefficients vary quite a bit between corpora (standard deviations are 0.015 and 0.0017), so it is not universal enough to adopt as some sort of standard measure.
<graphics>

Figure 10: Herdan with cubic fit

<graphics>

Figure 11: Adjusted Guiraud
We can also consider the Guiraud approach, and try to adjust it. We notice that while TTR (where we divide by $n$) goes steadily down, Guiraud (where we divide by $n^{0.5}$) goes up. Perhaps we can find a middle ground? Figure 11 shows the results for $u/n^{0.75}$, which looks much flatter overall and separates the curves better. This may not be a better result than the previous one, but it does have the advantage of not depending on experimentally determined coefficients.
Is there another option, using only the length and the type count? Yes, there is an option which is in principle completely independent of text length: Measure the type count (or equivalently TTR) for a fixed length. One option would be to measure only the first $n$ words of a text, but that could mean that a small part of the text has a large impact, so probably a better method is to cut the text into pieces of length $n$ and take the average, exactly as we have done above.
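The chunk-and-average procedure just described can be sketched as follows (a minimal sketch; `tokens` is assumed to be a pre-tokenised word list, and any trailing partial chunk is discarded):

```python
def mean_ttr_at(tokens, n):
    """Average TTR over consecutive, non-overlapping pieces of length n.
    Returns None if the text is shorter than n (no full piece exists)."""
    chunks = [tokens[i:i + n] for i in range(0, len(tokens) - n + 1, n)]
    if not chunks:
        return None
    # TTR of each full chunk, then the mean over chunks.
    return sum(len(set(c)) / n for c in chunks) / len(chunks)
```

For a corpus this would be called with, for example, `mean_ttr_at(tokens, 10000)` to reproduce the measure behind Figure 12.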
< g r a p h i c s >
Figure 12: TTR at $n = {10000}$
Figure 12 shows the results for $n = {10000}$ , on 39 corpora. We see that it fairly well separates several categories of text. The eight newspaper corpora are above all but one other, with the three oldest getting the highest value, followed by the two from the late 1900s, then the two from printed newspapers in 2000 and 2014, and last the web-based news texts. The social media and blog texts are a little more scattered, but all below the mean, except Twitter, which in both cases is higher. The four corpora of novels are not quite at the same level, but all higher than all of the ones in the "easy read" category. In that category, young adult literature is the highest and children's literature the lowest. Parliamentary data is all below the mean but above "easy read". Near the bottom we find, perhaps surprisingly, the Bible, along with Wikipedia, neither of which is primarily known to be an easy read. Altogether, these results should tell us that this is at least a meaningful measure.
That leaves the question of choosing an $n$ . Very low values might give strange effects, very high values would make it unusable for shorter texts. Other values were tested for comparison: $n = {10}$ gives little useful information, while $n = {100}$ ranks all the novels below most of social media, and beyond that we get mostly unremarkable results from just looking at the ranking. Based on these limited results, $n = {10000}$ seems like a good choice, if we are working with relatively long texts, and otherwise we can settle for $n = {1000}$ .
§ 5 SPECTRUM COMPARISON
Instead of considering type counts for only one $n$ , what if we measure for many values of $n$ , and look at the whole spectrum? This is essentially what we already did in all of section 3, and we could see that the curves for the different corpora certainly did have different shapes - some of them even crossed each other, which implies that any one number is not going to tell us the whole truth.
To compare corpora instead of methods, we need to pick one method, one way to transform $u$ based on $n$ . Using plain TTR as seen in Figure 1 would make it difficult to tell the difference between shapes, and picking one of the tested methods seems like too arbitrary a choice. So for the purposes of this section, we will evade the problem. We normalise the type count (or equivalently TTR) for each $n$ by subtracting the mean and dividing by the standard deviation. That is, the values on the vertical axis are in terms of standard deviations above the mean, counted for each separate value on the horizontal axis. (For the very highest values, the mean and sd values change erratically because of corpora dropping off. We adjust the normalisation to gradually change from the actual mean and sd to extrapolated values.)
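The per-$n$ normalisation can be sketched as follows (a minimal illustration with toy numbers; the gradual switch to extrapolated mean and sd for the largest $n$ is left out):

```python
# Minimal sketch of the normalisation used for the spectra: for each sample
# length n (one position along the curve), express every corpus's type count
# as standard deviations above the mean across corpora. The paper's extra
# adjustment at the largest n, where corpora drop off, is omitted here.
def normalise_spectra(curves):
    """curves: list of equal-length lists, one per corpus, indexed by n."""
    n_corpora = len(curves)
    out = [[] for _ in curves]
    for col in zip(*curves):              # one column = one value of n
        mean = sum(col) / n_corpora
        sd = (sum((v - mean) ** 2 for v in col) / n_corpora) ** 0.5
        for row, v in zip(out, col):
            row.append((v - mean) / sd)   # sd above (or below) the mean
    return out
```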
Figures 13-22 show the spectra for each category. Some curves are shorter because of limited data. Figures 13-15 show three different types of web-based texts, one set of blog texts and two different internet forums. We can see that each category is a little different, but all the curves share some characteristics - first a short rise, then a drop, then flatter, and finally a small rise. Most of them start slightly above the mean, and end below the mean.
< g r a p h i c s >
Figure 13: Spectrum for blog texts
< g r a p h i c s >
Figure 14: Spectrum for the Familjeliv forum
Figure 16 shows the "easy read" category. Despite being unrelated, the curves share the same shape, which is clearly different from the web-based corpora - a drop, then a rise, peaking around 1000 without reaching the mean, then a drop.
< g r a p h i c s >
Figure 15: Spectrum for the Flashback forum
< g r a p h i c s >
Figure 16: Spectrum for easy-read texts
Figures 17-18 show news texts, with Figure 17 showing three newspapers from the early 1900s, and Figure 18 showing four more recent newspapers and one web-based news corpus. As with the blog/forum collection, we see that these two related categories have clear similarities: a slow rise up to between ten and a hundred thousand, and then a sharp drop. But they are also visibly distinct, with the older newspapers having higher values and rising near the end. Aside from some more unpredictable behaviour for $n < {1000}$ , the curves in each category are remarkably similar in both shape and level.

Figures 19-20 show literary texts, with Figure 19 showing regular novels and Figure 20 showing children's fiction and young adult fiction. They are all comparatively straight and dropping slightly.
< g r a p h i c s >
Figure 19: Spectrum for novels
< g r a p h i c s >
Figure 17: Spectrum for old newspapers
< g r a p h i c s >
Figure 18: Spectrum for recent newspapers
< g r a p h i c s >
Figure 20: Spectrum for youth novels
Children's literature is generally lower than young adult literature, and they both drop faster than the curves for books aimed at adults.

Figure 21 shows religious texts. We see two translations of the Bible, with very similar curves - both dropping, rising, levelling out, but unlike the easy read category they level out at about the same level where they started. Also included is a book of church hymns, which happens to level out at a similar level, but starts with a large rise.

< g r a p h i c s >

Figure 21: Spectrum for religious texts

Finally, in Figure 22, we see three uncategorised corpora - one from a 1700s songwriter, one from a popular science magazine, and one from Wikipedia. As expected, they show very different shapes and levels, and are clearly distinct from each other as well as all the other curves.

§ 6 APPLICABILITY

Is it reasonable to apply measures like these on an entire corpus instead of just separate texts? First, "separate texts" is not necessarily well defined. Is a newspaper one text, or each article? Books in a series? Multiple entries posted on the same web page? Second, for the lower values of $n$ , running the entire corpus at once should not make a big difference. For example, if $n = {100}$ and the typical length of a text is 10000, that would mean that only about $1\%$ of samples contain two texts, and the rest only one. For the higher values of $n$ , using only separate texts would leave us with no data at all - it would be difficult to find singular coherent texts spanning hundreds of millions of words. This means that allowing corpora of multiple authors and topics is our only option if we want results for large $n$ .
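The $1\%$ figure can be checked with a line of arithmetic (toy numbers from the example above):

```python
# Sanity check of the 1% figure: with chunk length n = 100 and texts of
# length 10000 laid end to end, each text spans 100 chunks, and only the
# chunk straddling a text boundary mixes two texts.
n, text_len = 100, 10000
chunks_per_text = text_len // n
boundary_share = 1 / chunks_per_text
print(f"{boundary_share:.0%} of chunks contain a text boundary")  # → 1% of chunks contain a text boundary
```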
< g r a p h i c s >
Figure 22: Spectrum for some other texts
But we can also look at the results. Are the differences between the curves largely caused by differences in text length? If that were the case, we would expect that when a curve reaches the "critical $n$ " where we go from a single text to multiple texts, the vocabulary richness should increase rapidly. The curve we would expect to see is one that starts out mostly flat (because hardly any texts are that short), then slowly decreases (as others reach their critical $n$ and bring up the mean), then rapidly jumps up as it reaches its critical $n$ , and then slowly decreases again. This is not a pattern that we see anywhere, so we can conclude that text length is not the driving factor of the curve shapes.
§ 7 CONCLUSION
It is clear that the task of finding a length-independent measure of vocabulary richness is difficult at best. We have seen that many traditionally used measures are not satisfactory, and we have offered some suggestions as to how they can be improved. Perhaps the most obvious approach is to use average TTR over a sample length, with 10000 being a good sample length when possible.
The figures show that the curves have very different shapes, and often cross. This means that the ranking of corpora changes depending on the length of text we are looking at, so a perfect solution is not possible, or at least cannot be expressed as a single number.
Is this spectrum method useful for genre classification? It is perhaps rare that we need to analyse entire hundred-million-word corpora to see if they are made up of novels or newspapers, but we do see that there are some differences even for much shorter lengths. We have also gained insight into what makes it difficult to find a good measure of vocabulary richness. But most importantly, we have seen that there are notable and interesting differences between genres, and raised for future research the question of why.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,733 @@
# Good Reads and Easy Novels: Readability and Literary Quality in a Corpus of US-published Fiction
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract
In this paper, we explore the extent to which readability contributes to the perception of literary quality as defined by two categories of variables: expert-based (e.g., Pulitzer Prize, National Book Award) and crowd-based (e.g., GoodReads, WorldCat). Based on a large corpus of modern and contemporary fiction in English, we examine the correlation of a text's readability with its perceived literary quality, also assessing readability measures against simpler stylometric features. Our results show that readability generally correlates with popularity as measured through open platforms such as GoodReads and WorldCat but has an inverse relation with three prestigious literary awards. This points to a distinction between crowd- and expert-based judgments of literary style, as well as to a discrimination between fame and appreciation in the reception of a book.
## 1 Introduction and Related Works
Is it overall better for a novel to strive for an easy prose, or is there a link between difficulty and literary quality? The concept of readability has been studied for decades and is defined as the ease with which a text can be read and understood (Dale and Chall, 1949). Several works have attempted to define an easy way to compute readability in order to make, for example, didactic books more accessible, reduce technical jargon in documents produced for the general public, and adjust text selections according to the intended audience (Dubay, 2004). The result has been a series of popular and amply tested measures, each with a slight difference in their model of readability. Dale and Chall (1949), for example, referred to readability as the combination of elements in a text that impact important aspects of a reader's experience - including whether the reader can understand the text, finds it interesting, and can read with optimal speed (Dale and Chall, 1949). Despite their shortcomings (Redish, 2000), readability measures have been broadly applied to a large number of different domains. Measures of readability vary according to what aspect of a text they take into account, but they typically combine features such as sentence length, word length, and the presence of complex words. While the actual ease of a text depends on reader characteristics (background, situation, ability), it is widely accepted that simple textual features such as sentence length, syllables per word and lexical diversity impact the reading experience (Dubay, 2004).
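For concreteness, two of the best-known formulas built from exactly these features are Flesch Reading Ease and the Flesch-Kincaid grade level (the standard published coefficients; the word, sentence, and syllable counts are assumed to be computed elsewhere):

```python
# Two classic readability formulas, combining average sentence length
# (words per sentence) with average word length (syllables per word).
def flesch_reading_ease(n_words, n_sentences, n_syllables):
    # Higher scores mean easier text (roughly on a 0-100 scale).
    return 206.835 - 1.015 * (n_words / n_sentences) - 84.6 * (n_syllables / n_words)

def flesch_kincaid_grade(n_words, n_sentences, n_syllables):
    # Expressed as a US school grade level; higher means harder.
    return 0.39 * (n_words / n_sentences) + 11.8 * (n_syllables / n_words) - 15.59

# A 100-word passage with 8 sentences and 140 syllables:
print(round(flesch_reading_ease(100, 8, 140), 1))   # → 75.7
print(round(flesch_kincaid_grade(100, 8, 140), 1))  # → 5.8
```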
The connection of readability to the quality of a text has often been implied when it comes to non-fiction, and early studies into readability attest to the educational and social importance of developing such measures to improve technical or expository documents (Chall, 1947), but its role in the quality of literary fiction is much more complex. An easy-to-read novel can be enjoyable to read, but may also appear poor or unoriginal. In literary studies, the idea that readability might be a precondition for literary success is debated, and literary texts have been assessed variously by readability measures and similar metrics. Sherman (1893) was one of the first scholars to propose certain values of average sentence-length and reading ease as properties of "better" literary style. Readability naturally varies across genre, but it is a widespread conception for readers and publishers alike that bestsellers (as defined by top book-sales) are easier to read (Martin, 1996). More recently, readability has gained traction in areas of (commercial) creative writing and publishing, especially where its measures are implemented in text-editing tools such as the Hemingway or Marlowe editors ${}^{1}$ . These applications tend to favour lower readability scores - that is, texts easier to read. Yet, on the large scale, few studies have included readability as a measure that could help predict literary quality. Studying a small corpus of bestsellers and more literary, canonical works, Martin (1996) found no significant difference in readability, using a modified Flesch reading score, while Garthwaite (2014) found differences in readability between bestsellers and commercially endorsed book-list titles. Relying on multiple measures of readability and one measure of literary quality (i.e., GoodReads' average ratings), Maharjan et al. (2017) found that readability was actually a weak measure for estimating popularity in comparison to, for example, character $n$-grams. Still, many studies of literary success, popularity, or perceived literary quality have sought to approximate text complexity and have studied textual properties upon which formulae of readability are directly or indirectly based, such as sentence-length, vocabulary richness, or text compressibility (Brottrager et al., 2022; van Cranenburgh and Bod, 2017; Crosbie et al., 2013).

< g r a p h i c s >

Figure 1: Correlations between stylometrics and flavours of readability (Spearman). All correlations between 0.09 and 0.99 are statistically significant.
The question of the role of readability in literary quality is complicated by the practical and conceptual problem of defining literary quality itself, and consequently of quantifying it for large scale studies. Studies that seek to predict perceived literary quality from textual features often rely on the provisional proxy of one single gold standard, such as book-ratings from large user-platforms like GoodReads (Maharjan et al., 2018), personally or institutionally compiled canons (Mohseni et al., 2022) or sales-numbers (Wang et al., 2019). However, it has been shown that readers may have different, distinct perceptions of quality that are not necessarily based on the same criteria or prompted by the same textual features (Koolen et al., 2020).
In this paper, we explore to what extent readability might contribute to the perception of literary quality - defined through several alternative measures - in a large fiction corpus of modern and contemporary novels in English, taking into account, instead of one gold standard, different contextual perspectives on literary quality, so as to cover both crowd-based and "expert"-based standards of judgment.
## 2 Data and Methods
The essence of our approach consists in examining whether readability, as measured through five different algorithms, and literary quality, as approximated through six different resources, show any correlation on a large corpus of English-language fiction. We use standard correlation measures (Pearson and Spearman product-moment correlation coefficients ${r}_{p}$ and ${r}_{s}$ , respectively). For inference on the correlation measures, simple Student's t-tests are used. For robustness checks, correlation coefficients were also modelled using a Bayesian ridge model of the standardized variables - although this is not reported due to limited space. ${}^{2}$
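For illustration, the two coefficients can be sketched in plain Python (a toy version without tie handling; an actual analysis would rely on standard statistical libraries such as `scipy.stats`):

```python
# Pure-Python sketch of the two correlation coefficients named above.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman's r_s is Pearson's r_p computed on the ranks
    # (no handling of tied values in this sketch).
    rank = lambda v: [sorted(v).index(a) for a in v]
    return pearson(rank(x), rank(y))
```

Because Spearman works on ranks, a monotone but non-linear relation (e.g. `y = x**2` on positive `x`) gets a Spearman coefficient of 1 while Pearson stays below 1; this is why both are worth reporting.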
### 2.1 Corpus
We use a corpus of modern and contemporary fiction in English, the so-called Chicago Corpus ${}^{3}$ . The Chicago Corpus is a collection of over 9000 novels from 1880 to 2000, representing works of fiction that are widespread in libraries, that is, the works of fiction that have a large number of library holdings as listed on WorldCat, a large-scale, international online library catalogue ${}^{4}$ . The number of holdings was used as a first filtering measure to include or exclude works in the dataset, yet there are still large differences in how many libraries hold each title, so we can use it as a metric to score different titles within the dataset as well. The corpus is unique, to our knowledge, for its diversity and extraordinary representation of famous popular- and genre-fiction, as well as seminal works from the whole period: key works of modernism and postmodernism as well as Nobel laureates and winners of major literary awards. Still, it should be noted that the Chicago corpus reflects a clear cultural and geographical tilt, with a strong over-representation of Anglophone authors, and features only works either written in or translated into English. This tilt should be taken into account especially since we correlate textual features in the corpus to readability measures that were developed - and are particularly successful - in the English language context (Antunes and Lopes, 2019).

< g r a p h i c s >

(b) Distributions of quality measures. Rating count is visualised with cutoff at 5000 for legibility.

Figure 2: Distributions of measures

---

${}^{1}$ https://hemingwayapp.com/help.html https://authors.ai/marlowe/

${}^{2}$ The code will be publicly available upon acceptance.

${}^{3}$ While we cannot directly provide access to the corpus, it is possible to contact the authors for requests.

${}^{4}$ https://www.worldcat.org/about

---
| | N. Titles | N. Authors |
|---|---|---|
| Whole corpus | 9089 | 7000 |
| Pulitzer | 53 | 46 |
| NBA | 104 | 79 |
| Hugo | 96 | 47 |
Table 1: Overall titles and authors in the corpus and number of long-listed titles for each award.
### 2.2 Measures of quality
We use six different measures of literary quality of two main types, heuristically setting up a qualitative distinction between more crowd-based and more expert-based measures. Expert-based measures may be supposed more institutionally prescribed, where titles are distinguished by appointing committees (as with literary prizes). Here, we chose to look at three prominent literary prizes in Anglophone literary culture: the Pulitzer Prize, the National Book Award, and the Hugo Awards, considering titles that were both long- and short-listed for these prizes. The selection of awards allows us to consider a mainstream vs. genre-literature divide in our expert measures, since the first two prizes are assigned mainly to works of literary fiction, while the latter is an award given to works of genre fiction (science fiction and fantasy).
|
| 248 |
+
|
| 249 |
+
Crowd-based measures may be considered 309 310 more democratic in the sense of being user-created, for example by users' ratings on
|
| 250 |
+
|
| 251 |
+
large scale reading community sites such as 313 GoodReads, or by the effect of popular demand on library acquisitions. We use three standards here: the average ratings of titles on GoodReads (from 0 to 5 stars), the average rating count of titles on
|
| 252 |
+
|
| 253 |
+
GoodReads (number of ratings given to a given ti- 318 tle), and the number of libraries that hold a title according to Worldcat. Goodreads ratings and/or rating counts are often favoured in studies of literary
|
| 254 |
+
|
| 255 |
+
quality and reception, because they seem to proffer 322
|
| 256 |
+
|
| 257 |
+
more democratic literary evaluations "in the wild", 323
|
| 258 |
+
|
| 259 |
+
324 378
|
| 260 |
+
|
| 261 |
+

|
| 262 |
+
|
| 263 |
+
Figure 3: Quality standards and flavours of readability
|
| 264 |
+
|
| 265 |
+
397
|
| 266 |
+
|
| 267 |
+
398
|
| 268 |
+
|
| 269 |
+
400
|
| 270 |
+
|
| 271 |
+
325 379
|
| 272 |
+
|
| 273 |
+
326 380
|
| 274 |
+
|
| 275 |
+
327 381
|
| 276 |
+
|
| 277 |
+
328 382
|
| 278 |
+
|
| 279 |
+
329 383
|
| 280 |
+
|
| 281 |
+
330 384
|
| 282 |
+
|
| 283 |
+
331 385
|
| 284 |
+
|
| 285 |
+
332 386
|
| 286 |
+
|
| 287 |
+
333 387
|
| 288 |
+
|
| 289 |
+
334 388
|
| 290 |
+
|
| 291 |
+
335 389
|
| 292 |
+
|
| 293 |
+
336 390
|
| 294 |
+
|
| 295 |
+
337 391
|
| 296 |
+
|
| 297 |
+
338 392
|
| 298 |
+
|
| 299 |
+
339 393
|
| 300 |
+
|
| 301 |
+
340 394
|
| 302 |
+
|
| 303 |
+
341 395
|
| 304 |
+
|
| 305 |
+
342 396
|
| 306 |
+
|
| 307 |
+
345 399
|
| 308 |
+
|
| 309 |
+
347 401
|
| 310 |
+
|
| 311 |
+
402
|
| 312 |
+
|
| 313 |
+
403
|
| 314 |
+
|
| 315 |
+
350 404
|
| 316 |
+
|
| 317 |
+
351 considering the large diversity and geographical 352 spread of its nearly 90 million users (Nakamura, 353 2013). In slight contrast to Goodread's ratings, 354 we consider library holdings a conceptually hy- 355
|
| 318 |
+
|
| 319 |
+
356 brid measure, standing between completely free
|
| 320 |
+
|
| 321 |
+
357 reader-based votes and expert-driven choices, as
|
| 322 |
+
|
| 323 |
+
358 libraries respond to user-demand from within an
|
| 324 |
+
|
| 325 |
+
359 institutional structure.
### 2.3 Measures of readability
For assessing the complexity and/or difficulty of literary texts, we apply various measures of readability. Since the 1920s, and especially with the success of the Flesch and Dale-Chall formulas in the 1950s, combinations of sentence length and word and/or syllable counts have been used to assess the difficulty of a text, as proxies of word and sentence complexity (Dale and Chall, 1948). According to Dubay (2004), there were more than 200 different versions of readability formulas by 1980, while new ones are still introduced and old ones revised. Still, measures from what Dubay calls the "classic" readability studies continue to be the most widely used and to prove themselves effective in assessing text difficulty (Dubay, 2004; Stajner et al., 2012) - despite their relative simplicity (being counts of two or three aspects of texts).

These measures have been applied to a wide range of written productions, from technical and journalistic texts to fiction. Flesch, for example, found that fiction tends to score a Flesch Reading Ease score in the range $70 < \text{Score} < 90$, in contrast to scientific texts, which often score below 30 (Flesch, 1948). In the present study we used five different "classic" readability algorithms to measure the prose of each book, chosen for their popularity and interpretability${}^{5}$.
- The Flesch Reading Ease is a measure of readability based on the average sentence length (ASL) and the average number of syllables per word (ASW). It is calculated as follows:

$$
\text{Score} = {206.835} - \left( {{1.015} \times \mathrm{ASL}}\right) - \left( {{84.6} \times \mathrm{ASW}}\right)
$$
- The Flesch-Kincaid Grade Level is a revised version of the Flesch Reading Ease score. Like the former, it is based on the average sentence length (ASL) and the number of syllables per word (ASW). It is calculated as follows:

$$
\mathrm{GL} = \left( {{0.4} \times \mathrm{ASL}}\right) + \left( {{12} \times \mathrm{ASW}}\right) - {15}
$$

---

${}^{5}$ All readability scores were extracted using the textstat package: https://pypi.org/project/textstat/

---
- The SMOG Readability Formula is a readability score introduced by McLaughlin (1969). It measures readability based on the average sentence length and the number of words with more than three syllables (the polysyllable count), applying the formula:

$$
\text{SMOG grading} = 3 + \sqrt{\text{polysyllable count}}
$$
- The Automated Readability Index is a readability score based on the average sentence length and the number of characters per word (word length). It is calculated as follows:

$$
{4.71}\frac{\text{characters}}{\text{words}} + {0.5}\frac{\text{words}}{\text{sentences}} - {21.43}
$$
- The New Dale-Chall Readability Formula is a 1995 revision of the Dale-Chall readability score (Chall and Dale, 1995). It is based on the average sentence length (ASL) and the percentage of "difficult words" (PDW), defined as words that do not appear on a list of words which 80 percent of fourth-graders would know (Dale and Chall, 1948), contained in the Dale-Chall word list.${}^{6}$ It is calculated as follows:

$$
\text{Raw Score} = {0.1579} \times \mathrm{PDW} + {0.0496} \times \mathrm{ASL}
$$

$$
\text{If } \mathrm{PDW} > 5\%: \quad \text{Adjusted Score} = \text{Raw Score} + {3.6365}
$$
All readability scores are represented as a US grade level, where a higher grade means a more difficult text, except for the Flesch Reading Ease. The Flesch Reading Ease produces a score between 0 (low readability) and 100 (high readability): a higher number means a more readable text. For this reason, in most of our experiments the Flesch Reading Ease looks reversed with respect to the other measures (and is negatively correlated with them).
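As an illustration, the two Flesch formulas above can be sketched in a few lines of Python. This is a minimal sketch with a crude vowel-group syllable counter, not the textstat implementation used in the paper (see footnote 5):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, dropping a common silent final 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def _asl_asw(text: str):
    # Average sentence length (words/sentence) and syllables per word.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    asl = len(words) / len(sentences)
    asw = sum(count_syllables(w) for w in words) / len(words)
    return asl, asw

def flesch_reading_ease(text: str) -> float:
    asl, asw = _asl_asw(text)
    return 206.835 - 1.015 * asl - 84.6 * asw

def flesch_kincaid_grade(text: str) -> float:
    asl, asw = _asl_asw(text)
    return 0.4 * asl + 12 * asw - 15  # coefficients as given in the text
```

With these definitions, a text with short sentences and short words scores high on reading ease and low on grade level, and vice versa.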
## 3 Results
Pearson's and Spearman's correlations between these five readability metrics and commonly used stylometric features show - as a sanity check - that readability measures capture aspects of novels' overall style. All measures are similarly correlated to sentence length (naturally, being a base for all measures) but also to lexical diversity and compressibility, which measure, respectively, complexity at the word and sequence level. Moreover, the correlations with our "quality scores" show that readability is linked with the measures closer to popularity rather than to appreciation.
Figure 4: Correlations between quality standards and flavours of readability. All correlations are statistically significant.
Pearson's r, specifically in its significance testing, relies on the assumption of normally distributed data and assumes that the two variables have a linear relationship, while Spearman's r correlation coefficient is non-parametric: while it still assumes a monotonic relation between the two variables, it does not make strong assumptions on the shape of the data. For this reason, Spearman's r is probably the best overall measure for this study, as we have no reason to assume that all our measures are normally distributed (and some are evidently not, as can be seen in Figure 2). For these reasons, we will mainly credit the correlations observed through Spearman's r, although we report both in Table 2.
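The distinction can be made concrete with a toy example: Spearman's coefficient is simply Pearson's coefficient computed on ranks (this sketch ignores ties), so a perfectly monotonic but non-linear relation gets a Spearman correlation of exactly 1 while its Pearson correlation stays below 1.

```python
from statistics import mean

def pearson(x, y):
    # Pearson's r: covariance normalized by the two standard deviations.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(values):
    # Rank positions starting at 1; no tie handling in this sketch.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(x, y):
    # Spearman's r is Pearson's r on the rank-transformed data.
    return pearson(ranks(x), ranks(y))

# A monotonic but non-linear relation (y = x**3):
x = [1, 2, 3, 4, 5]
y = [1, 8, 27, 64, 125]
```

Here `spearman(x, y)` is exactly 1.0 while `pearson(x, y)` is below 1, which is why Spearman is the safer choice when relations are monotonic but not linear.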
### 3.1 Readability and stylometrics
As readability measures are supposed to be measures of style, we compute their correlation with three core stylistic features - sentence length, lexical diversity${}^{7}$ and textual compressibility${}^{8}$ - that have been found to be linked to perceived literary quality in previous studies (van Cranenburgh and Bod, 2017; Crosbie et al., 2013; Maharjan et al., 2017; Wang et al., 2019). As can be seen in Figure 1, all readability measures have evident correlations with these three metrics, even though they do not necessarily compute them directly - for example, no readability measure computes text compressibility. While compressibility is thus not obviously related to readability, it is a measure of redundancy or formulaicity: it appears that easier texts also have a tendency to be more sequentially repetitive. One readability measure, the new Dale-Chall, correlates with the simple length (word count) of the novels. This is a surprising effect since, like the other measures, the new Dale-Chall is not length-dependent. As it is the only measure looking at the texts' lexicon through an index of difficult words, it seems to be picking up on a tendency for longer books to have a slightly more complex vocabulary.

---

${}^{6}$ See: https://countwordsworth.com/download/DaleChallEasyWordList.txt

${}^{7}$ We operationalized lexical diversity as the type-token ratio (TTR) of a text, using a common method insensitive to text length: the Mean Segmental Type-Token Ratio (MSTTR). MSTTR-100 represents the average TTR of local averages in 100-word segments of each text.

${}^{8}$ Following van Cranenburgh and Bod (2017), for text compressibility we calculated the compression ratio (original bit-size/compressed bit-size) using bzip2, a standard file-compressor.

---
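The two operationalizations described in footnotes 7 and 8 can be sketched as follows. This is a minimal illustration under the definitions given there, not the exact pipeline used on the corpus:

```python
import bz2

def msttr(words, segment=100):
    """Mean Segmental Type-Token Ratio: the average TTR over consecutive
    fixed-size word segments (a final partial segment is discarded),
    which makes the statistic insensitive to overall text length."""
    ttrs = [
        len(set(words[i:i + segment])) / segment
        for i in range(0, len(words) - segment + 1, segment)
    ]
    return sum(ttrs) / len(ttrs)

def compression_ratio(text: str) -> float:
    """Original size / bzip2-compressed size: a higher ratio means more
    redundant, more sequentially repetitive text."""
    raw = text.encode("utf-8")
    return len(raw) / len(bz2.compress(raw))
```

A text that repeats the same few words compresses far better (and has a much lower MSTTR) than varied prose of the same length.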
### 3.2 Relation with quality - GoodReads and libraries
As discussed before, we correlate readability with three possible proxies of the perceived quality of novels: GoodReads' average ratings, GoodReads' rating counts, and the number of libraries holding a given title according to WorldCat${}^{9}$. We can consider GoodReads' rating count to be a measure closer to the concept of popularity or fame, while GoodReads' average rating tells us about the appreciation of the title independently of how many readers it had. As can be seen in Figure 4, all of our readability measures show a degree of correlation with the number of library holdings and with GoodReads' rating count: more readable books tend to have more ratings and tend to be held by more libraries.
The average rating of titles on GoodReads, on the other hand, shows a significant correlation with only one of the measures, the Dale-Chall readability score, while it appears to have no link with the other four. Interestingly, the Dale-Chall score is the only measure that uses a precompiled list of words to estimate the number of difficult words in a text, instead of relying entirely on the features of the text at hand. While this could make it a more fragile measure (due to linguistic change and differences between genres), it appears to actually give it an increased modelling power for the tastes of GoodReads' average readers. It is worth mentioning that GoodReads' average ratings do not correlate, in our corpus, with the books' publication date, so a direct effect of language evolution on the measure's index can be excluded. Simplifying a bit, this points to the idea that ease of vocabulary might relate to the average appreciation of a book as well as to its fame, so that texts with a simpler lexicon, together with shorter sentences or words, are both more read and better liked.

Figure 5: The likelihood of being acquired by less than 100 libraries increases quite steadily with difficulty of reading (Spearman's rho 0.84), as the probability of appearing in more than 500 declines. Readability is here measured as Flesch-Kincaid Grade Level.

Figure 6: The probability of being rated by less than 100 users in Goodreads strongly correlates with the difficulty of the texts as measured, in this case, by the Flesch-Kincaid Grade Level.

Figure 7: Flavours of readability and awards: overall distributions.

---

${}^{9}$ Naturally this selection remains arbitrary. Expanding to other measures of perceived quality is an ongoing process.

---

Figure 8: Flavours of readability and awards: mean value and standard error.
In Figure 3 we show the relation of each readability measure with library holdings, average GoodReads ratings and number of GoodReads ratings. As can be seen, we should interpret the results with some caution, as the relation might not be linear: it could be that the relation between, for example, readability and library holdings is best modelled with a curve rather than a straight line. Yet it appears quite evident at a glance that the probability of being held by a large number of libraries, and of being rated by a large number of GoodReads users, decreases dramatically when the difficulty of the text increases beyond a certain level. As we show in Figure 5, the probability of being acquired by fewer than 100 libraries grows quite clearly with the text's difficulty, and the probability of being acquired by more than 500 decreases accordingly, with an interesting peak at a medium-low point of difficulty. The effect is even more evident when considering the probability of having fewer than 100 ratings on GoodReads, as appears in Figure 6. Appearing in 90 libraries is still a quite impressive measure of success, but the majority of the titles in the Chicago corpus go beyond that threshold, as well as beyond the threshold of 100 user ratings on GoodReads, so the difference in probabilities seems to point to a relative decline in popularity or fame with the increase of the texts' surface complexity.
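The threshold analysis behind Figures 5 and 6 can be sketched as follows; the records, field names, and numbers are hypothetical stand-ins for the corpus metadata, not actual data:

```python
from collections import defaultdict

# Hypothetical book records: readability grade and number of holding libraries.
books = [
    {"grade": 4.2, "libraries": 850},
    {"grade": 5.1, "libraries": 620},
    {"grade": 6.8, "libraries": 300},
    {"grade": 7.3, "libraries": 90},
    {"grade": 9.0, "libraries": 40},
    {"grade": 9.6, "libraries": 25},
]

def prob_below_threshold(books, threshold=100, bin_size=2):
    """Estimate P(libraries < threshold) within each readability-grade bin."""
    bins = defaultdict(list)
    for b in books:
        bins[int(b["grade"] // bin_size)].append(b["libraries"] < threshold)
    return {k * bin_size: sum(v) / len(v) for k, v in sorted(bins.items())}
```

On this toy sample the probability of falling under the 100-library threshold rises monotonically with grade-level bin, mirroring the trend the figures describe.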
<table><tr><td/><td>Libs.</td><td>Rat. n.</td></tr><tr><td>Flesch grade</td><td>-0.16 (-0.1)</td><td>-0.06 (-0.06)</td></tr><tr><td>Flesch ease</td><td>0.13 (0.07)</td><td>0.08 (0.09)</td></tr><tr><td>SMOG</td><td>-0.15 (-0.1)</td><td>-0.11 (-0.11)</td></tr><tr><td>ARI</td><td>-0.15 (-0.01)</td><td>0.06 (-0.06)</td></tr><tr><td>New Dale-Chall</td><td>-0.25 (-0.2)</td><td>-0.22 (-0.2)</td></tr><tr><td>Flesch grade</td><td>0.84</td><td>0.83</td></tr><tr><td>Flesch ease</td><td>-0.4</td><td>-0.48</td></tr><tr><td>SMOG</td><td>0.76</td><td>0.81</td></tr><tr><td>ARI</td><td>0.73</td><td>0.71</td></tr><tr><td>New Dale-Chall</td><td>0.78</td><td>0.82</td></tr></table>
Table 2: Upper part: Spearman's r (Pearson's r in parentheses) for each readability flavour and quality measure. Lower part: Spearman's r with the probability of being held by fewer than 100 libraries or having fewer than 100 ratings.
### 3.3 Relation with quality - literary awards
The second type of quality check we selected is a categorical one: whether or not a title was long-listed for one of three prestigious awards - the Pulitzer Prize, the National Book Award and the Hugo Award.
As we show in Figures 7 and 8, as well as in Table 3, the difference between long-listed and non-listed books in terms of readability is small but significant for almost all measures, with long-listed books being systematically harder to read than their non-listed counterparts - again with the exception of the new Dale-Chall measure. Using this kind of quality proxy, we do not observe a value of reading ease but possibly its "dark side", such as perceived simplification or a reduced expressive power of novels.
It may not be surprising that these different standards exhibit different preferences and perspectives on quality. Literary awards are notoriously elitist, perhaps even in a way that is wanted by their readership: the committee of the Booker Prize was accused of populism in 2011 when announcing "readability" as a new criterion for the award (Clark, 2011).
<table><tr><td/><td>T-test</td><td>p-value</td></tr><tr><td>Flesch grade</td><td>3.78</td><td>0.0001</td></tr><tr><td>Flesch ease</td><td>-4.66</td><td>0.000005</td></tr><tr><td>SMOG</td><td>3.69</td><td>0.0002</td></tr><tr><td>ARI</td><td>3.6</td><td>0.0003</td></tr><tr><td>New Dale-Chall</td><td>1.8</td><td>0.07</td></tr></table>
Table 3: T-test and p-value for the difference between long-listed and non-listed titles for each readability measure. The only measure that does not fall under the formal threshold of statistical significance is the new Dale-Chall.
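For reference, the statistic reported in Table 3 is a two-sample t statistic. A bare-bones Welch version, which does not assume equal variances (the paper does not specify which variant was used, so this is illustrative), looks like this:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic: the difference of sample means
    scaled by the combined standard error of the two samples."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se
```

A positive t here would mean the first group (e.g. long-listed titles) has the higher mean grade level; the p-value is then obtained from the t distribution with the appropriate degrees of freedom.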
## 4 Conclusions and Future Work
Readability measures proved significantly consistent, both with each other and with other relevant stylometric features, when applied to modern and contemporary fiction. Their relation with different proxies of literary quality is intriguing: more popular works, in terms of the number of ratings on GoodReads and of the number of libraries willing to hold a copy of the book, appear to correlate with readability, while the appreciation of readers alone (independently of their number) seems to hold almost no link with it, and long-listed titles have an inverse relation with readability, tending towards slightly more difficult prose on the readability metrics' scale. It can be argued that we are seeing the divide between high-brow and "popular" literature, but the lack of correlation with the GoodReads average rating might point to a slightly more nuanced conclusion. It is worth noting that the only measure showing a meaningful correlation with all of the crowd-based quality metrics was the new Dale-Chall measure of readability, also the only one explicitly focusing on the presence of widely understood lexicon in a text; yet it was also the only one showing no significant difference between long-listed and non-long-listed titles. The only other measure having a correlation higher than 0.1 with average GoodReads ratings was SMOG, which, while not using a list of hard words, considers "difficult words" in its own way, using the number of polysyllable words as a central element of its computation.
If we were to draw rough conclusions from these observations, it would seem that surface-level simplicity of style in terms of words per sentence, characters per word, and similar metrics "helps" a text's popularity, but has nothing to do with its likelihood of being highly liked by its readers - and it even slightly hinders its chances of receiving a prestigious award. In other words, surface-level simplicity improves a text's quality only if we equate quality with popularity or fame. Similarly, looking at threshold-based probability distributions showed that increasing the difficulty of the novels' style might indeed hinder their diffusion across libraries and GoodReads users. Using a more common vocabulary might also increase readers' appreciation of the text, but only when it comes to crowd-based measures. On the other hand, the correlations of average number of ratings and library holdings with readability measures do not appear linear or monotonic, meaning that there might also be a "point of balance" between too easy and too difficult that maximizes a novel's fame. The same might be true for the likelihood of a novel being long-listed for one of the three awards we took into consideration.
Overall, readability seems to have an impact on different perceptions of literary quality, although its role and its interaction with other features of the text remain to be defined.
Further research points towards extending the set of correlations to more proxies of quality, as well as to more sophisticated stylometric measures, to see whether interactions can provide a clearer picture of what we perceive as literary quality. Further work could also check the correlations of our measures with publication date: readability might depend on time, either in the sense of the evolution of the average novelistic style, overall language change, or even cultural selection, which would make the passage of time a particular form of "quality test" of its own accord.
## References
Hélder Antunes and Carla Teixeira Lopes. 2019. Analyzing the Adequacy of Readability Indicators to a Non-English Language. In Fabio Crestani, Martin Braschler, Jacques Savoy, Andreas Rauber, Henning Müller, David E. Losada, Gundula Heinatz Bürki, Linda Cappellato, and Nicola Ferro, editors, Experimental IR Meets Multilinguality, Multimodality, and Interaction, volume 11696, pages 149-155. Springer International Publishing, Cham.
Judith Brottrager, Annina Stahl, Arda Arslan, Ulrik Brandes, and Thomas Weitin. 2022. Modeling and predicting literary reception. Journal of Computational Literary Studies, 1(1):1-27.
Jeanne S. Chall. 1947. This business of readability. Educational Research Bulletin, 26(1):1-13.
Jeanne S. Chall and Edgar Dale. 1995. Readability Revisited: The New Dale-Chall Readability Formula. Brookline Books.
Alex Clark. 2011. Man Booker prize: This year's judges are betraying authors and their readers. The Observer.
Andreas van Cranenburgh and Rens Bod. 2017. A data-oriented model of literary language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1228-1238, Valencia, Spain. Association for Computational Linguistics.
Tess Crosbie, Tim French, and Marc Conrad. 2013. Towards a model for replicating aesthetic literary appreciation. In Proceedings of the Fifth Workshop on Semantic Web Information Management, SWIM '13, New York, NY, USA. Association for Computing Machinery.
Edgar Dale and Jeanne S. Chall. 1948. A formula for predicting readability. Educational Research Bulletin, 27(1):11-28.
Edgar Dale and Jeanne S. Chall. 1949. The concept of readability. Elementary English, 26(1):19-26.
William Dubay. 2004. The Principles of Readability. Impact Information.
Rudolph Flesch. 1948. A new readability yardstick. Journal of Applied Psychology, 32:221-233.
Craig L. Garthwaite. 2014. Demand spillovers, combative advertising, and celebrity endorsements. American Economic Journal: Applied Economics, 6(2):76-104.
Corina Koolen, Karina van Dalen-Oskam, Andreas van Cranenburgh, and Erica Nagelhout. 2020. Literary quality in the eye of the Dutch reader: The national reader survey. Poetics, 79:1-13.
Suraj Maharjan, John Arevalo, Manuel Montes, Fabio A. González, and Thamar Solorio. 2017. A multi-task approach to predict likability of books. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1217-1227, Valencia, Spain. Association for Computational Linguistics.
Suraj Maharjan, Sudipta Kar, Manuel Montes, Fabio A. González, and Thamar Solorio. 2018. Letting emotions flow: Success prediction by modeling the flow of emotions in books. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Volume 2, Short Papers, pages 259-265, New Orleans, Louisiana. Association for Computational Linguistics.
Claude Martin. 1996. Production, content, and uses of bestselling books in Quebec. Canadian Journal of Communication, 21(4).
Harry G. McLaughlin. 1969. SMOG grading: A new readability formula. Journal of Reading, 12(1):639-646.
Mahdi Mohseni, Christoph Redies, and Volker Gast. 2022. Approximate entropy in canonical and non-canonical fiction. Entropy, 24(2):278.
Lisa Nakamura. 2013. "Words with friends": Socially networked reading on Goodreads. PMLA, 128(1):238-243.
Janice Redish. 2000. Readability formulas have even more limitations than Klare discusses. ACM J. Com-put. Doc., 24(3):132-137.
Lucius A. Sherman. 1893. Analytics of Literature: A Manual for the Objective Study of English Prose and Poetry. Athenaeum Press. Ginn.
Sanja Stajner, Richard Evans, Constantin Orasan, and Ruslan Mitkov. 2012. What can readability measures really tell us about text complexity? In Proceedings of the Workshop on Natural Language Processing for Improving Textual Accessibility, pages 14-22, Istanbul, Turkey. Association for Computational Linguistics.
Xindi Wang, Burcu Yucesoy, Onur Varol, Tina Eliassi-Rad, and Albert-László Barabási. 2019. Success in books: Predicting book sales before publication. EPJ Data Science, 8(1):31.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,724 @@
§ GOOD READS AND EASY NOVELS: READABILITY AND LITERARY QUALITY IN A CORPUS OF US-PUBLISHED FICTION

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
In this paper, we explore the extent to which readability contributes to the perception of literary quality as defined by two categories of variables: expert-based (e.g., Pulitzer Prize, National Book Award) and crowd-based (e.g., GoodReads, WorldCat). Based on a large corpus of modern and contemporary fiction in English, we examine the correlation of a text's readability with its perceived literary quality, also assessing readability measures against simpler stylometric features. Our results show that readability generally correlates with popularity as measured through open platforms such as GoodReads and WorldCat but has an inverse relation with three prestigious literary awards. This points to a distinction between crowd- and expert-based judgments of literary style, as well as to a discrimination between fame and appreciation in the reception of a book.
§ 1 INTRODUCTION AND RELATED WORKS
Is it overall better for a novel to strive for easy prose, or is there a link between difficulty and literary quality? The concept of readability has been studied for decades and is defined as the ease with which a text can be read and understood (Dale and Chall, 1949). Several works have attempted to define an easy way to compute readability in order to make, for example, didactic books more accessible, reduce technical jargon in documents produced for the general public, and adjust text selections according to the intended audience (Dubay, 2004). The result has been a series of popular and amply tested measures, each with a slight difference in their model of readability. Dale and Chall (1949), for example, referred to readability as the combination of elements in a text that impact important aspects of a reader's experience, including whether the reader can understand the text, finds it interesting, and can read with optimal speed. Despite their shortcomings (Redish, 2000), readability measures have been broadly applied to a large number of different domains. Measures of readability vary according to what aspect of a text they take into account, but they typically combine features such as sentence length, word length, and the presence of complex words. While the actual ease of a text depends on reader characteristics (background, situation, ability), it is widely accepted that simple textual features such as sentence length, syllables per word and lexical diversity impact the reading experience (Dubay, 2004).

The connection of readability to the quality of a text has often been implied when it comes to non-fiction, and early studies into readability attest to the educational and social importance of developing such measures to improve technical or expository documents (Chall, 1947), but its role in the quality of literary fiction is much more complex. An easy-to-read novel can be enjoyable to read, but may also appear poor or unoriginal. In literary studies, the idea that readability might be a precondition for literary success is debated, and literary texts have been assessed variously by readability measures and similar metrics. Sherman (1893) was one of the first scholars to propose certain values of average sentence length and reading ease as properties of "better" literary style. Readability naturally varies across genres, but it is a widespread conception among readers and publishers alike that bestsellers (as defined by top book sales) are easier to read (Martin, 1996). More recently, readability has gained traction in areas of (commercial) creative writing and publishing, especially where its measures are implemented in text-editing tools such as the Hemingway or Marlowe editors ${}^{1}$ . These applications tend to favour lower readability scores - that is, texts that are easier to read. Yet, on the large scale, few studies have included readability as a measure that could help predict literary quality. Studying a small corpus of bestsellers and more literary, canonical works, Martin (1996) found no significant difference in readability, using a modified Flesch reading score, while Garthwaite (2014) found differences in readability between bestsellers and commercially endorsed book-list titles. Relying on multiple measures of readability and one measure of literary quality (i.e., GoodReads' average ratings), Maharjan et al. (2017) found that readability was actually a weak measure for estimating popularity in comparison to, for example, character n-grams. Still, many studies of literary success, popularity, or perceived literary quality have sought to approximate text complexity and have studied textual properties upon which formulae of readability are directly or indirectly based, such as sentence length, vocabulary richness, or text compressibility (Brottrager et al., 2022; van Cranenburgh and Bod, 2017; Crosbie et al., 2013).

< g r a p h i c s >

Figure 1: Correlations between stylometrics and flavours of readability (Spearman). All correlations between 0.09 and 0.99 are statistically significant.
The question of the role of readability in literary quality is complicated by the practical and conceptual problem of defining literary quality itself, and consequently of quantifying it for large-scale studies. Studies that seek to predict perceived literary quality from textual features often rely on the provisional proxy of one single gold standard, such as book ratings from large user platforms like GoodReads (Maharjan et al., 2018), personally or institutionally compiled canons (Mohseni et al., 2022) or sales numbers (Wang et al., 2019). However, it has been shown that readers may have different, distinct perceptions of quality that are not necessarily based on the same criteria or prompted by the same textual features (Koolen et al., 2020).

In this paper, we explore to what extent readability might contribute to the perception of literary quality - defined through several alternative measures - in a large fiction corpus of modern and contemporary novels in English, taking into account, instead of one gold standard, different contextual perspectives on literary quality, so as to cover both crowd-based and "expert"-based standards of judgment.
§ 2 DATA AND METHODS
The essence of our approach consists in examining whether readability, as measured through five different algorithms, and literary quality, as approximated through six different resources, show any correlation on a large corpus of English-language fiction. We use standard correlation measures (the Pearson and Spearman correlation coefficients, ${r}_{p}$ and ${r}_{s}$ respectively). For inference on the correlation measures, simple Student's t-tests are used. As a robustness check, correlation coefficients were also modelled using a Bayesian ridge model of the standardized variables, although these results are not reported due to limited space. ${}^{2}$
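The two coefficients can be sketched in a few lines. This is an illustrative, dependency-free version (the analysis itself would presumably use a standard implementation such as `scipy.stats.pearsonr`/`spearmanr`); tie handling is omitted for brevity:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation r_p over two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman_r(x, y):
    """Spearman rank correlation r_s: Pearson r computed on the ranks.
    No tie handling in this sketch (ties get arbitrary consecutive ranks)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r
    return pearson_r(ranks(x), ranks(y))
```

On monotonic but non-linear data (e.g., `y = x**3`) Spearman's coefficient is exactly 1 while Pearson's falls below 1, which is why the non-parametric measure is preferred here.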
§ 2.1 CORPUS
We use a corpus of modern and contemporary fiction in English, the so-called Chicago Corpus. ${}^{3}$ The Chicago Corpus is a collection of over 9000 novels from 1880 to 2000, representing works of fiction that are widespread in libraries, that is, works of fiction that have a large number of library holdings as listed on WorldCat, a large-scale, international online library catalogue. ${}^{4}$ The number of holdings was used as a first filtering measure to include or exclude works in the dataset, yet there are still large differences in how many libraries hold each title, so we can use it as a metric to score different titles within the dataset as well. The corpus is unique, to our knowledge, for its diversity and extraordinary representation of famous popular and genre fiction, as well as seminal works from the whole period: key works of modernism and postmodernism, as well as Nobel laureates and winners of major literary awards. Still, it should be noted that the Chicago Corpus reflects a clear cultural and geographical tilt, with a strong over-representation of Anglophone authors, and features only works either written in or translated into English. This tilt should be taken into account especially since we correlate textual features in the corpus to readability measures that were developed - and are particularly successful - in the English-language context (Antunes and Lopes, 2019).

${}^{1}$ https://hemingwayapp.com/help.html and https://authors.ai/marlowe/
${}^{2}$ The code will be publicly available upon acceptance.
${}^{3}$ While we cannot directly provide access to the corpus, it is possible to contact the authors for requests.
${}^{4}$ https://www.worldcat.org/about

< g r a p h i c s >

Figure 2: Distributions of measures. (b) Distributions of quality measures. Rating count is visualised with a cutoff at 5000 for legibility.
|              | N. Titles | N. Authors |
|--------------|-----------|------------|
| Whole corpus | 9089      | 7000       |
| Pulitzer     | 53        | 46         |
| NBA          | 104       | 79         |
| Hugo         | 96        | 47         |

Table 1: Overall titles and authors in the corpus and number of long-listed titles for each award.
§ 2.2 MEASURES OF QUALITY
We use six different measures of literary quality of two main types, heuristically setting up a qualitative distinction between more crowd-based and more expert-based measures. Expert-based measures may be supposed more institutionally prescribed, where titles are distinguished by appointing committees (as with literary prizes). Here, we chose to look at three prominent literary prizes in Anglophone literary culture: the Pulitzer Prize, the National Book Award, and the Hugo Awards, considering titles that were both long- and short-listed for these prizes. The selection of awards allows us to consider a mainstream vs. genre-literature divide in our expert measures, since the first two prizes are assigned mainly to works of literary fiction, while the latter is an award given to works of genre fiction (science fiction and fantasy).

Crowd-based measures may be considered more democratic in the sense of being user-created, for example by users' ratings on large-scale reading community sites such as GoodReads, or by the effect of popular demand on library acquisitions. We use three standards here: the average ratings of titles on GoodReads (from 0 to 5 stars), the rating count of titles on GoodReads (number of ratings given to a given title), and the number of libraries that hold a title according to WorldCat. GoodReads ratings and/or rating counts are often favoured in studies of literary quality and reception, because they seem to proffer more democratic literary evaluations "in the wild", considering the large diversity and geographical spread of its nearly 90 million users (Nakamura, 2013). In slight contrast to GoodReads' ratings, we consider library holdings a conceptually hybrid measure, standing between completely free reader-based votes and expert-driven choices, as libraries respond to user demand from within an institutional structure.

< g r a p h i c s >

Figure 3: Quality standards and flavours of readability
§ 2.3 MEASURES OF READABILITY
For assessing the complexity and/or difficulty of literary texts, we apply various measures of readability. Since the 1920s, and especially with the success of the Flesch and Dale-Chall formulas in the 1950s, combinations of sentence length and words and/or syllables have been used to assess the difficulty of a text as proxies of word and sentence complexity (Dale and Chall, 1948). According to Dubay (2004), there were more than 200 different versions of readability formulas in 1980, while new ones are still introduced and old ones revised. Still, measures from what Dubay calls the "classic" readability studies continue to be the most widely used measures and to prove themselves effective in assessing text difficulty (Dubay, 2004; Stajner et al., 2012), despite their relative simplicity (being counts of two or three aspects of texts). These measures have been applied to a wide range of written productions, from technical and journalistic texts to fiction. Flesch, for example, found that fiction tends to score a Flesch Reading Ease score in the range 70 < Score < 90, in contrast to scientific texts, which often score below 30 (Flesch, 1948). In the present study we used five different "classic" readability algorithms to measure the prose of each book, chosen for their popularity and interpretability ${}^{5}$ .
* The Flesch Reading Ease is a measure of readability based on the average sentence length (ASL) and the average number of syllables per word (ASW). It is calculated as follows:

$$
\text{Score} = {206.835} - \left( {{1.015} \times \mathrm{ASL}}\right) - \left( {{84.6} \times \mathrm{ASW}}\right)
$$

* The Flesch-Kincaid Grade Level is a revised version of the Flesch Reading Ease score. Like the former, it is based on the average sentence length (ASL) and the number of syllables per word (ASW). It is calculated as follows:

$$
\mathrm{GL} = \left( {{0.4} \times \mathrm{ASL}}\right) + \left( {{12} \times \mathrm{ASW}}\right) - {15}
$$

${}^{5}$ All readability scores were extracted using the textstat package: https://pypi.org/project/textstat/
* The SMOG Readability Formula is a readability score introduced by McLaughlin (1969). It measures readability based on the average sentence length and the number of words with more than three syllables (polysyllables), applying the formula:

$$
\text{SMOG grading} = 3 + \sqrt{\text{polysyllable count}}
$$

* The Automated Readability Index is a readability score based on the average sentence length and the number of characters per word (word length). It is calculated as follows:

$$
{4.71}\frac{\text{characters}}{\text{words}} + {0.5}\frac{\text{words}}{\text{sentences}} - {21.43}
$$

* The New Dale-Chall Readability Formula is a 1995 revision of the Dale-Chall readability score (Chall and Dale, 1995). It is based on the average sentence length (ASL) and the percentage of "difficult words" (PDW), defined as words that do not appear on a list of words known to 80 percent of fourth-graders (Dale and Chall, 1948), contained in the Dale-Chall word list. ${}^{6}$ It is calculated as follows:

$$
\text{Raw Score} = {0.1579} \times \mathrm{PDW} + {0.0496} \times \mathrm{ASL}
$$

$$
\text{If PDW} > 5\% \text{: Adjusted Score} = \text{Raw Score} + {3.6365}
$$
All readability scores are represented as a US grade level, where a higher grade means a more difficult text, except for the Flesch Reading Ease. The Flesch Reading Ease yields a score between 0 (low readability) and 100 (high readability): a higher number means a more readable text. For this reason, in most of our experiments the Flesch Reading Ease looks reversed with respect to the other measures (and is negatively correlated with them).
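As a sketch, the length-based formulas above can be computed directly from raw counts. The counting heuristics below (regex sentence splitting, vowel-group syllable estimation) are crude stand-ins for what a package like textstat does, so the scores will differ slightly from textstat's:

```python
import re

def _counts(text):
    """Crude counts of sentences, words, characters and syllables.
    Assumption: sentences end in ./!/?; syllables ~ vowel groups."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    chars = sum(len(w) for w in words)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return sentences, len(words), chars, syllables

def flesch_reading_ease(text):
    """Score = 206.835 - 1.015 * ASL - 84.6 * ASW (higher = easier)."""
    s, w, _, sy = _counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (sy / w)

def automated_readability_index(text):
    """ARI = 4.71 * chars/words + 0.5 * words/sentences - 21.43 (higher = harder)."""
    s, w, c, _ = _counts(text)
    return 4.71 * (c / w) + 0.5 * (w / s) - 21.43
```

Note the opposite polarities: a text of short sentences and short words pushes the Flesch score up and the ARI down, which is exactly the reversal discussed above.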
§ 3 RESULTS
Pearson's and Spearman's correlations between these five readability metrics and commonly used stylometric features show - as a sanity check - that readability measures capture aspects of novels' overall style. All measures are similarly correlated to sentence length (naturally, being a base for all measures) but also to lexical diversity and compressibility, which measure complexity at the word and sequence level, respectively. Moreover, the correlations with our "quality scores" show that readability is linked with the ones closer to popularity rather than to appreciation.
< g r a p h i c s >

Figure 4: Correlations between quality standards and flavours of readability (Spearman). All correlations are statistically significant.
Pearson's r, specifically in its significance testing, relies on the assumption of normally distributed data and assumes that the two variables have a linear relationship, while Spearman's r is non-parametric: while it still assumes a monotonic relation between the two variables, it does not make strong assumptions about the shape of the data. For this reason, Spearman's r is probably the best overall measure for this study, as we have no reason to assume that all our measures are normally distributed (and some are evidently not, as can be seen in Figure 2). We will therefore mainly credit the correlations observed through Spearman's r, although we report both in [2].
§ 3.1 READABILITY AND STYLOMETRICS
As readability measures are supposed to be measures of style, we compute their correlation with three core stylistic features - sentence length, lexical diversity ${}^{7}$ and textual compressibility ${}^{8}$ - that have been found linked to perceived literary quality in previous studies (van Cranenburgh and Bod, 2017; Crosbie et al., 2013; Maharjan et al., 2017; Wang et al., 2019). As can be seen in Figure 1, all readability measures have evident correlations with these three metrics, even though they don't necessarily compute them directly - for example, no readability measure computes text compressibility. However, while compressibility is not obviously related to readability, compressibility is a measure of redundancy or formulaicity: it appears that easier texts also have a tendency to be more sequentially repetitive. One readability measure, the new Dale-Chall, correlates with the simple length (word count) of the novels. This is a surprising effect, since, like the other measures, the new Dale-Chall is not length-dependent. As it is the only measure looking at the texts' lexicon through an index of difficult words, it seems to be picking up on a tendency for longer books to have a slightly more complex vocabulary.

${}^{6}$ See: https://countwordsworth.com/download/DaleChallEasyWordList.txt
${}^{7}$ We operationalized lexical diversity as the type-token ratio (TTR) of a text, using a common method insensitive to text length: the Mean Segmental Type-Token Ratio (MSTTR). MSTTR-100 represents the average TTR of local averages in 100-word segments of each text.
${}^{8}$ Following van Cranenburgh and Bod (2017), for text compressibility we calculated the compression ratio (original bit-size/compressed bit-size) using bzip2, a standard file compressor.
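The two auxiliary style features described in footnotes 7 and 8 are straightforward to reproduce. A minimal sketch using only the standard library (the tokenization details here are our assumptions, not the authors' exact preprocessing):

```python
import bz2
import re

def msttr(text, segment=100):
    """Mean Segmental Type-Token Ratio (MSTTR): the average type-token
    ratio over consecutive `segment`-word windows; the trailing partial
    window is discarded. Word tokenization is a simplifying assumption."""
    words = re.findall(r"[\w']+", text.lower())
    ttrs = [len(set(words[i:i + segment])) / segment
            for i in range(0, len(words) - segment + 1, segment)]
    return sum(ttrs) / len(ttrs) if ttrs else len(set(words)) / max(1, len(words))

def compression_ratio(text):
    """Original size / bzip2-compressed size: higher values indicate
    more redundant (more compressible, more formulaic) text."""
    raw = text.encode("utf-8")
    return len(raw) / len(bz2.compress(raw))
```

Segmenting before averaging is what makes MSTTR insensitive to text length, since a plain TTR inevitably drops as a text grows; the compression ratio, conversely, rewards sequential repetition, which is why it tracks formulaicity.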
§ 3.2 RELATION WITH QUALITY - GOODREADS AND LIBRARIES
As discussed before, we correlate readability with three possible proxies of the perceived quality of novels: GoodReads' average ratings, GoodReads' rating count, and the number of libraries holding a given title according to WorldCat ${}^{9}$ . We could consider GoodReads' rating count to be a measure closer to the concept of popularity or fame, while GoodReads' average rating tells us about the appreciation of the title independently of how many readers it had. As can be seen in Figure 4, all of our readability measures show a degree of correlation with the number of library holdings and with GoodReads' rating count: more readable books tend to have more ratings and tend to be held by more libraries.

The average rating of titles on GoodReads, on the other hand, shows a significant correlation with only one of the measures, the Dale-Chall readability score, while it appears to have no link with the other four. Interestingly, the Dale-Chall score is the only measure that uses a precompiled list of words to estimate the number of difficult words in a text, instead of relying entirely on the features of the text at hand. While this could make it a more fragile measure (due to linguistic change and differences between genres), it appears to actually give it an increased modelling power for the tastes of GoodReads' average readers. It is worth mentioning that GoodReads' average ratings do not correlate, in our corpus, with the books' publication date, so a direct effect of language evolution on the measure's index can be excluded. Simplifying a bit, this points to the idea that ease of vocabulary might relate to the average appreciation of a book as well as to its fame, so that texts with a simpler lexicon, together with shorter sentences or words, are both more read and better liked.

${}^{9}$ Naturally this selection remains arbitrary. Expanding to other measures of perceived quality is an ongoing process.

< g r a p h i c s >

Figure 5: The likelihood of being acquired by less than 100 libraries increases quite steadily with difficulty of reading (Spearman's rho 0.84), as the probability of appearing in more than 500 declines. Readability is here measured as Flesch-Kincaid Grade Level.

< g r a p h i c s >

Figure 6: The probability of being rated by less than 100 users on Goodreads strongly correlates with the difficulty of the texts as measured, in this case, by the Flesch-Kincaid Grade Level.

< g r a p h i c s >

Figure 7: Flavours of readability and awards: overall distributions.
< g r a p h i c s >
|
| 598 |
+
|
| 599 |
+
Figure 8: Flavours of readability and awards: mean value and standard error.
|
| 600 |
+
|
| 601 |
+
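The compression ratio mentioned in the footnote (original bit-size over compressed bit-size, using bzip2) can be computed along these lines; this is an illustrative sketch, and `compression_ratio` is a hypothetical helper name, not the paper's actual code:

```python
import bz2

def compression_ratio(text: str) -> float:
    """Original size over bzip2-compressed size: more repetitive
    (more predictable) text compresses better, giving a higher ratio."""
    raw = text.encode("utf-8")
    return len(raw) / len(bz2.compress(raw))
```

Highly formulaic prose yields a high ratio, while varied prose stays closer to 1, which is why the ratio can serve as a crude stylometric proxy for textual redundancy.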
In Figure 3 we show the relation of each readability measure with library holdings, average Goodreads ratings, and number of Goodreads ratings. We should interpret the results with some caution, as the relation might not be linear: the best interpretation of the relation between, for example, readability and library holdings might be a curve rather than a straight line. Yet it appears quite evident at a glance that the probability of being held by a large number of libraries, and of being rated by a large number of Goodreads users, decreases dramatically when the difficulty of the text increases beyond a certain level. As we show in Figure 5, the probability of being acquired by fewer than 100 libraries grows quite clearly with the text's difficulty, and the probability of being acquired by more than 500 decreases accordingly, with an interesting peak at a medium-low point of difficulty. The effect is even more evident when considering the probability of having fewer than 100 ratings on Goodreads, as appears in Figure 6. Appearing in 90 libraries is still a quite impressive measure of success, but the majority of the titles in the Chicago corpus goes beyond that threshold, as well as beyond the threshold of 100 user ratings on Goodreads, so the difference in probabilities seems to point to a relative decline in popularity or fame with the increase of the texts' surface complexity.
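For reference, the Flesch-Kincaid Grade Level used in Figures 5 and 6 is a linear function of average sentence length and syllables per word; a minimal sketch with the standard published coefficients (the function name is illustrative):

```python
def flesch_kincaid_grade(n_words, n_sentences, n_syllables):
    """Flesch-Kincaid Grade Level from raw counts: longer sentences
    and more syllables per word give a higher (harder) grade."""
    return 0.39 * (n_words / n_sentences) + 11.8 * (n_syllables / n_words) - 15.59
```

For example, a 100-word passage split into 10 short sentences with mostly monosyllabic words scores a much lower grade than the same 100 words packed into 4 long, polysyllabic sentences.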
| Measure | Libs. | Rat. n. |
|---|---|---|
| Flesch grade | -0.16 (-0.10) | -0.06 (-0.06) |
| Flesch ease | 0.13 (0.07) | 0.08 (0.09) |
| SMOG | -0.15 (-0.10) | -0.11 (-0.11) |
| ARI | -0.15 (-0.01) | 0.06 (-0.06) |
| New Dale-Chall | -0.25 (-0.20) | -0.22 (-0.20) |

| Measure | P(Libs. < 100) | P(Rat. n. < 100) |
|---|---|---|
| Flesch grade | 0.84 | 0.83 |
| Flesch ease | -0.40 | -0.48 |
| SMOG | 0.76 | 0.81 |
| ARI | 0.73 | 0.71 |
| New Dale-Chall | 0.78 | 0.82 |

Table 2: On the upper part of the table, Spearman's $r$ (Pearson's in parentheses) for each readability flavour and quality measure. On the lower, Spearman's $r$ with the probability of being in fewer than 100 libraries or having fewer than 100 ratings.
### 3.3 Relation with quality - literary awards
The second type of quality check we selected is a categorical one: whether or not a title was long-listed for one of three prestigious awards - the Pulitzer Prize, the National Book Award and the Hugo Award.
As we show in Figures 7 and 8, as well as in Table 3, the difference between long-listed and non-long-listed books in terms of readability is small but significant for almost all measures, with long-listed books being systematically harder to read than their non-listed counterparts - again with the exception of the new Dale-Chall measure. Using this kind of quality proxy, we do not observe a value of reading ease but possibly its "dark side", such as perceived simplification or a reduced expressive power of novels.
It may come as no surprise that these different standards exhibit different preferences and perspectives on quality. Literary awards are notoriously elitist, even, perhaps, in a way that is wanted by their readership: the committee of the Booker Prize was accused of populism in 2011 when it announced "readability" as a new criterion for the award (Clark, 2011).
| Measure | T-test | p-value |
|---|---|---|
| Flesch grade | 3.78 | 0.0001 |
| Flesch ease | -4.66 | 0.000005 |
| SMOG | 3.69 | 0.0002 |
| ARI | 3.60 | 0.0003 |
| New Dale-Chall | 1.80 | 0.07 |

Table 3: T-test statistic and p-value for the difference between long-listed and non-listed titles for each readability measure. The only measure that does not fall under the formal threshold of statistical significance is the new Dale-Chall.
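A t statistic like those in Table 3 can be computed from the two groups' readability scores. The sketch below uses Welch's variant (which does not assume equal variances); whether the paper uses Student's or Welch's test is not stated, so treat this as an assumption:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic: standardized difference of means,
    with each group's variance weighted by its own sample size."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
```

A positive statistic here means the first group (e.g. long-listed titles) scores higher on the measure than the second; the p-value then comes from the t distribution with Welch-Satterthwaite degrees of freedom.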
## 4 Conclusions and future work
Readability measures proved significantly consistent, both with each other and with other relevant stylometric features, when applied to modern and contemporary fiction. Their relation with different proxies of literary quality is intriguing: more popular works, in terms of number of ratings on Goodreads and of libraries willing to hold a copy of the book, appear to correlate with readability, while the appreciation of readers alone (independently of their number) seems to hold almost no link with it, and long-listed titles have an inverse relation with readability, tending towards slightly more difficult prose on the readability metrics' scale. It can be argued that we are seeing the divide between high-brow and "popular" literature, but the lack of correlation with Goodreads' average rating might point to a slightly more nuanced conclusion. It is worth noting that the only measure showing a meaningful correlation with all of the crowd-based quality metrics was the new Dale-Chall measure of readability, which is also the only one explicitly focusing on the presence of widely understood lexicon in a text; yet it was also the only one showing no significant difference between long-listed and non-long-listed titles. The only other measure having a correlation higher than 0.1 with average Goodreads ratings was SMOG, which, while not using a list of hard words, considers "difficult words" in its own way, using the number of polysyllabic words as a central element of its computation.

If we were to draw rough conclusions from these observations, it would seem that surface-level simplicity of style in terms of words per sentence, characters per word, and similar metrics "helps" a text's popularity, but has nothing to do with its likelihood of being highly liked by its readers - and it even slightly hinders its chances of receiving a prestigious award. In other words, surface-level simplicity improves a text's quality only if we equate quality with popularity or fame. Similarly, looking at threshold-based probability distributions showed that increasing the difficulty of a novel's style might indeed hinder its diffusion across libraries and Goodreads users. Using a more common vocabulary might also increase readers' appreciation of the text, but only when it comes to crowd-based measures. On the other hand, the correlations of number of ratings and library holdings with readability measures do not appear linear or monotonic, meaning that there might also be a "point of balance" between too easy and too difficult that maximizes the correlation with a novel's fame. The same might be true for the likelihood of a novel being long-listed for one of the three awards we took into consideration.

Overall, readability seems to have an impact on different perceptions of literary quality, although its role and interaction with other features of the text remain to be defined.

Further research points towards extending the set of correlations to more proxies of quality, as well as to more sophisticated stylometric measures, to see whether their interactions can provide a clearer picture of what we perceive as literary quality. Further work could also check the correlations of our measures with publication date: readability might depend on time, whether through the evolution of the average novelistic style, overall language change, or even cultural selection, which would make the passage of time a particular form of "quality test" of its own accord.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/tcxy7vRVKlg/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,749 @@
# Training and Evaluating Norwegian Sentence Embedding Models
## Abstract
We train and evaluate Norwegian sentence embedding models using the contrastive learning methodology SimCSE. We start from pre-trained Norwegian encoder models and train both unsupervised and supervised models. The models are evaluated on a machine-translated version of semantic textual similarity datasets, as well as on binary classification tasks. We show that we can train good Norwegian sentence embedding models that clearly outperform the pre-trained encoder models, as well as the multilingual mBERT, on the task of sentence similarity.
## 1 Introduction
Recently there has been a huge increase in the capabilities of natural language processing systems. The new dominant paradigm is using large language models such as BERT (Devlin et al., 2019) or GPT (Radford et al., 2018) as a starting model which one adapts to any given task one wishes to solve. There exist several different versions of BERT-type encoder models in Norwegian (Kummervold et al., 2021; Kutuzov et al., 2021; Pyysalo et al., 2021). It is well known that BERT-type models that give contextual word embeddings do not give particularly good sentence embeddings (Reimers and Gurevych, 2019). For this reason we train and evaluate Norwegian sentence embedding models, using the pre-trained encoder models as starting points.

We train models using the state-of-the-art SimCSE methodology, similarly to the original paper (Gao et al., 2021). Like them, we train both unsupervised and supervised models. We start with a pre-trained bidirectional language encoder model such as BERT or RoBERTa (Liu et al., 2019). For the unsupervised version we sample texts from the Norwegian Colossal Corpus (NCC) dataset (Kummervold et al., 2022). We then pass them through the model using two different dropout masks and predict contrastively which pairs within a batch represent the same text. For the supervised version, we train on a machine-translated version of natural language inference (NLI) data, where we use sentences related by "entailment" as positive sentences, and sentences labeled as contradiction as hard negative sentences. We train on both the Norwegian dataset and a combined dataset of both Norwegian and English NLI data, and show that the latter gives better results for sentence representations in Norwegian. We evaluate our models on a machine-translated version of semantic textual similarity (STS) datasets, as well as on the sequence classification problems in the Norwegian "Talk of Norway" and the binary classification version of the NoReC review dataset (Velldal et al., 2018).

Our main contributions are:

1. We train and evaluate Norwegian unsupervised and supervised sentence embedding models.

2. We demonstrate a new way to compare the various existing Norwegian language models by measuring their performance after training them to make sentence embeddings.

3. We show that our sentence encoders sometimes get better performance than the base encoder on classification. In particular, we obtain new state-of-the-art results on the classification problem "Talk of Norway".

4. Through our experiments we illustrate the usefulness of machine-translated datasets for training and evaluating Norwegian language models. In particular, we show that supervised training on machine-translated data outperforms unsupervised training on Norwegian data.
## 2 Related work
The fundamental technique we build on is that of training large transformer models (Vaswani et al., 2017). In particular, we utilize the large encoder models Bidirectional Encoder Representations from Transformers (BERT) and Robustly Optimized BERT (RoBERTa) by using them as pre-trained starting points.

Our work builds upon existing language models trained in Norwegian. The National Library of Norway has trained BERT models in Norwegian (Kummervold et al., 2021), which we call NB-BERT, and which exist in both base and large sizes. Also, the language technology group at the University of Oslo has trained their version of a BERT for Norwegian called NorBERT (Kutuzov et al., 2021). There is also a WikiBERT model trained on Norwegian Wikipedia (Pyysalo et al., 2021). We also test the multilingual version of BERT (Devlin et al., 2019), which is trained on Norwegian and many other languages.

Our work uses existing methodology for making sentence embedding models. The first paper to improve BERT to make better sentence representations by training it for that purpose was the Sentence-BERT paper (Reimers and Gurevych, 2019), which trained sentence embedding models by using siamese networks. We build upon the newer Simple Contrastive learning of Sentence Embeddings (SimCSE) methodology (Gao et al., 2021), which uses a contrastive training objective to create sentence embeddings from a pre-trained encoder. The idea behind both of these works is that of finding a training procedure that better extracts the knowledge about sentences that already exists in the pre-trained encoder model.
## 3 Data
For the unsupervised models, we sample data from the Norwegian Colossal Corpus (NCC) (Kummervold et al., 2022). This is a collection of different smaller Norwegian text corpora gathered into one corpus by the National Library of Norway to train language models. It is primarily a Norwegian corpus, although some other languages are present: the dataset description estimates that ${87}\%$ of documents are in Norwegian, with about $6 - 7\%$ of documents in English and the rest in other European languages (mostly other Nordic languages). We sample 1 million texts from the dataset for unsupervised training. Some are longer than one sentence, but all are truncated to a maximum of 32 tokens before training, so they are all approximately sentence length.

Sentence: Deltakerne mente at hvis interessenter var seriøse om å forbedre finansrapporteringsmodellen, ville en gruppe bli opprettet og finansiert spesielt for dette formålet.

Positive: Deltakerne forventer at seriøse interessenter vil danne en gruppe for å forbedre finansrapporteringsmodellen.

Negative: A group was created to improve the financial reporting model.

Figure 1: An example of a triplet of sentences of mixed language in the Norwegian/English NLI dataset.
For supervised training we train with data collected for the task of natural language inference (NLI). This task is that of taking a pair of sentences and predicting the relationship between them as either "entailment", "neutral" or "contradiction". The authors of the SimCSE paper use NLI data to create triples of a sentence with one positive and one hard negative, and show that these data work well for training sentence models using contrastive learning, so we follow this practice. We use a dataset that has been curated for training in Norwegian by the National Library of Norway. ${}^{1}$ The original data is based on the English Stanford Natural Language Inference (SNLI) Corpus (Bowman et al., 2015) and the Multi-Genre Natural Language Inference (MNLI) dataset (Williams et al., 2018). The Norwegian data is machine translated from the MNLI dataset and has about 128 thousand triples. There is also a combined Norwegian and English version of the dataset, made by taking a combination of the translated Norwegian MNLI data and English MNLI and SNLI data. ${}^{2}$ Also included are extra combined Norwegian/English sentence triples: for each of the translated triples there is a joint Norwegian/English triple consisting of one or two sentences in each of English and Norwegian; see Figure 1 for an example. The English/Norwegian dataset contains about 531 thousand triples of sentences.

---

${}^{1}$ https://huggingface.co/datasets/NbAiLab/mnli-norwegian

${}^{2}$ The same English data that was used to train English SimCSE: https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse

---

Sentence 1: en mann skjærer opp en agurk. Sentence 2: en mann skjærer en agurk. Similarity: 4.2

Sentence 1: en mann spiller harpe. Sentence 2: en mann spiller et keyboard. Similarity: 1.5

Figure 2: Examples from the translated STS-Benchmark dataset. Similarity ratings are from 0 to 5.
For evaluation we also machine translate the standard English datasets for semantic textual similarity: STS12-16 (Agirre et al., 2012, 2013, 2014, 2015, 2016), the STS Benchmark (Cer et al., 2017), and SICK relatedness (Marelli et al., 2014). The task is to predict how similar a pair of sentences are to each other on a scale of 0 to 5. We use these datasets only for validation and testing, never for training. In Figure 2 we see two examples from the translated STS Benchmark dataset.

The usage of translated datasets is a weakness compared to having original data in Norwegian. This project can also be viewed as an exploration of what performance it is possible to get from auto-translated English datasets: to the degree they are shown to be useful, one will have much more data to potentially work with in Norwegian language processing. We note that for sentence similarity, a similar exploration of translated data has been done for Swedish in (Isbister and Sahlgren, 2020). They conclude that they do not recommend the usage of automatically translated STS datasets for fine-tuning, but that it should probably have limited negative consequences for comparing models. We partly follow their recommendation: we only use translated STS data for validation and evaluation, but we do perform supervised training on translated NLI data.
## 4 Experiments
Our experiments follow the implementations in the SimCSE paper closely. We start with a pre-trained encoder model that is either BERT or RoBERTa.

For unsupervised training we sample one million texts from the NCC dataset. We then pass each text through the model using two different dropout masks to obtain two different text representations ${s}_{i}$ and ${s}_{i}^{ + }$ for each text. Here dropout functions as a form of continuous augmentation of embeddings. We then contrastively predict which pairs of texts within a batch are the same, using cross-entropy loss on the cosine similarity scores. In other words, the loss for text $i$ is given by

$$
{\operatorname{loss}}_{i} = - \log \frac{{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{i}^{ + }}\right) /\tau }}{\mathop{\sum }\limits_{{j = 1}}^{b}{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{j}^{ + }}\right) /\tau }},
$$

where sim is cosine similarity, $b$ is the batch size, and $\tau$ is a temperature hyperparameter which we simply set to 0.05, the outcome of optimization done in the SimCSE paper.
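The in-batch objective above can be sketched in NumPy as follows. This is a simplified stand-in for the actual PyTorch training code; `simcse_unsup_loss` and its arguments are illustrative names:

```python
import numpy as np

def simcse_unsup_loss(s, s_pos, tau=0.05):
    """Unsupervised SimCSE loss: for each text i, the positive is its
    own second dropout view s_pos[i]; all other views in the batch
    serve as negatives. s, s_pos: (batch, dim) embedding arrays."""
    a = s / np.linalg.norm(s, axis=1, keepdims=True)        # unit-normalize
    b = s_pos / np.linalg.norm(s_pos, axis=1, keepdims=True)
    sim = (a @ b.T) / tau                                   # sim(s_i, s_j^+)/tau
    # cross-entropy with the diagonal (matching pair) as the correct class
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))
```

Passing the same embeddings as both views yields a much lower loss than a mismatched pairing, which is exactly the signal the contrastive objective trains on.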
For training unsupervised models, the models we start from are given by their names on Hugging Face as

- bert-base-cased [English model]
- roberta-base [English model]
- bert-base-multilingual-cased
- TurkuNLP/wikibert-base-no-cased
- ltgoslo/norbert2
- NbAiLab/nb-bert-base
- NbAiLab/nb-bert-large
The English models are included as a sanity check: since we are using automatically translated datasets to choose the best models, we want to compare their performance with some models that are expected to perform worse than Norwegian models. For the same reason we also test on the English STS datasets.

We train the supervised models using NLI data where each sentence has one paired sentence labeled as entailment, which is regarded as a positive sample, and one sentence labeled with contradiction, which is considered a negative sample.
<table><tr><td>Model</td><td>$\mathbf{{Avg}.{STS}}$</td></tr><tr><td>BERT</td><td>34.29</td></tr><tr><td>RoBERTa</td><td>25.56</td></tr><tr><td>mBERT</td><td>48.34</td></tr><tr><td>WikiBERT</td><td>42.21</td></tr><tr><td>NorBERT</td><td>54.42</td></tr><tr><td>NB-BERT-base</td><td>50.41</td></tr><tr><td>NB-BERT-large</td><td>49.90</td></tr></table>
Table 1: Average performance of models before training, using the average of the last layer, on Norwegian STS.
We thus obtain three different sentence representations ${s}_{i},{s}_{i}^{ + },{s}_{i}^{ - }$. As in the SimCSE paper, we train contrastively, trying to predict the positive pairs, and add the negative sentence representations ${s}_{j}^{ - }$ to the denominator of the loss function as follows:

$$
{\operatorname{loss}}_{i} = - \log \frac{{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{i}^{ + }}\right) /\tau }}{\mathop{\sum }\limits_{{j = 1}}^{b}\left( {{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{j}^{ + }}\right) /\tau } + {e}^{\operatorname{sim}\left( {{s}_{i},{s}_{j}^{ - }}\right) /\tau }}\right) }
$$

(1)
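Extending the unsupervised sketch, the supervised objective in equation (1) adds the in-batch hard negatives to the denominator; again a simplified NumPy stand-in with illustrative names:

```python
import numpy as np

def simcse_sup_loss(s, s_pos, s_neg, tau=0.05):
    """Supervised SimCSE loss: entailment pairs (s_pos) are positives,
    contradiction pairs (s_neg) are added as in-batch hard negatives.
    All inputs: (batch, dim) embedding arrays."""
    norm = lambda m: m / np.linalg.norm(m, axis=1, keepdims=True)
    a, p, n = norm(s), norm(s_pos), norm(s_neg)
    sim_pos = (a @ p.T) / tau            # sim(s_i, s_j^+)/tau
    sim_neg = (a @ n.T) / tau            # sim(s_i, s_j^-)/tau
    denom = np.exp(sim_pos).sum(axis=1) + np.exp(sim_neg).sum(axis=1)
    return float(np.mean(np.log(denom) - np.diag(sim_pos)))
```

The extra negative term makes the denominator strictly larger, so the model is pushed both towards its entailment sentence and away from its contradiction sentence.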
For training supervised models we start with the following models:

- bert-base-multilingual-cased
- TurkuNLP/wikibert-base-no-cased
- ltgoslo/norbert2
- NbAiLab/nb-bert-base
- NbAiLab/nb-bert-large
We train with the same settings as in the SimCSE paper: we set a max sequence length of 32, and use the learning rates and batch sizes given in the appendix of the SimCSE paper (which vary by model type and size). Each model is trained on a single NVIDIA 3090 GPU. For some models we have to use gradient accumulation to achieve the correct batch size due to lack of RAM, which changes the training dynamics a bit, since the contrastive loss depends on the entire batch. We do not see any noticeable effects on results from this. We train with the Adam optimizer with linear weight decay and put a multi-layer perceptron (MLP) on top of the model for training. Unsupervised we train for one epoch, and supervised for three. The best model is selected by evaluating on the dev part of the STS Benchmark dataset. For evaluation we test both with and without this MLP, and find that, generally, testing without the MLP gives slightly better results. We train three versions of each model and report average scores.
The models are also fine-tuned on two Norwe-
|
| 246 |
+
|
| 247 |
+
gian sequence classification tasks. Talk of Nor- 384 way (ToN) is a subset of the Norwegian parliament speeches dataset (Lapponi et al., 2018), where the task is to classify whether the speech was given by SV or FrP (politically left or right, respectively) selected in (Kummervold et al., 2021). 3 NoReC is a dataset of reviews in Norwegian from different domains such as movies, video games and music (Velldal et al., 2018). From this dataset one can extract a binary classification task by taking the subset of reviews that are clearly positive or negative and letting the task be to classify them as positive or negative (Øvrelid et al., 2020).We take the text representations made by the model before the MLP, and add a linear classification layer on top and fine-tune the entire model on the training dataset. For both the fine-tuning datasets we do a grid search for hyperparameters under the following conditions (these are the same hyperparame-ters as in the finetuning examples in the appendix of the original BERT paper (Devlin et al., 2019)):
- epochs: 2, 3, 4
- learning rate: 2e-5, 3e-5, 5e-5
- batch size: 16, 32

We use the macro F1 score on the validation set to select the best model for each training run. We do three training runs and report the average of the test scores.
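The grid above is small enough (3 × 3 × 2 = 18 configurations) to search exhaustively. A sketch of the selection loop, where `train_and_eval` stands in for fine-tuning one model with a given configuration and returning its validation macro F1 (both names are ours for illustration):

```python
import itertools

# Hyperparameter grid from the BERT paper's fine-tuning appendix.
grid = {
    "epochs": [2, 3, 4],
    "learning_rate": [2e-5, 3e-5, 5e-5],
    "batch_size": [16, 32],
}

def grid_search(train_and_eval):
    """Return the config with the best validation score.

    `train_and_eval` is a placeholder for fine-tuning the model
    with one config and evaluating macro F1 on the validation set.
    """
    best_score, best_cfg = float("-inf"), None
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = train_and_eval(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```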
## 5 Results: sentence similarity
We evaluate the trained models on the semantic textual similarity datasets, both the Norwegian versions and the original English ones. We report Spearman's correlation for the STS datasets.
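Spearman's correlation compares the ranking of the model's similarity scores (e.g. cosine similarities of sentence embeddings) with the ranking of the gold annotations. A minimal numpy sketch (the function name is ours; SciPy's `scipy.stats.spearmanr` is the usual implementation):

```python
import numpy as np

def spearman(pred, gold):
    """Spearman rank correlation: the Pearson correlation of the
    rank variables of the two score lists."""
    def ranks(x):
        order = np.argsort(x)
        r = np.empty(len(x))
        r[order] = np.arange(len(x), dtype=float)
        # Tied values receive the average of their ranks.
        for v in np.unique(x):
            mask = x == v
            r[mask] = r[mask].mean()
        return r
    pred, gold = np.asarray(pred, float), np.asarray(gold, float)
    return np.corrcoef(ranks(pred), ranks(gold))[0, 1]
```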
### 5.1 Evaluation in Norwegian
In Table 1 we see the average performance on the Norwegian STS datasets before training, using the average of the last layer to compare embeddings. We also tested using the average of the first and last layers (giving similar numbers) and using the "cls" token (giving worse numbers). Thus we have a baseline against which to compare how much the models have learned from the training.

---

${}^{3}$ https://huggingface.co/datasets/NbAiLab/norwegian_parliament

---

<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STSB</td><td>SICKR</td><td>Avg.</td></tr><tr><td>BERT</td><td>55.21</td><td>49.64</td><td>49.29</td><td>63.68</td><td>54.39</td><td>54.67</td><td>50.93</td><td>53.97</td></tr><tr><td>RoBERTa</td><td>60.30</td><td>59.12</td><td>57.15</td><td>68.73</td><td>64.33</td><td>64.04</td><td>54.39</td><td>61.15</td></tr><tr><td>mBERT</td><td>60.88</td><td>62.31</td><td>55.91</td><td>70.78</td><td>66.80</td><td>61.87</td><td>57.13</td><td>62.24</td></tr><tr><td>WikiBERT</td><td>63.38</td><td>70.21</td><td>62.63</td><td>74.04</td><td>70.90</td><td>70.88</td><td>62.52</td><td>67.79</td></tr><tr><td>NorBERT</td><td>56.41</td><td>65.33</td><td>54.32</td><td>68.95</td><td>68.00</td><td>62.40</td><td>64.54</td><td>62.85</td></tr><tr><td>NB-BERT-base</td><td>59.40</td><td>70.70</td><td>57.93</td><td>71.87</td><td>69.94</td><td>69.25</td><td>63.98</td><td>66.15</td></tr><tr><td>NB-BERT-large</td><td>70.45</td><td>80.80</td><td>72.79</td><td>81.53</td><td>78.41</td><td>79.35</td><td>69.18</td><td>76.07</td></tr></table>

(a) Performance of unsupervised models on the Norwegian STS datasets.

<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STSB</td><td>SICKR</td><td>Avg.</td></tr><tr><td>mBERT</td><td>73.43</td><td>69.09</td><td>70.84</td><td>81.50</td><td>73.82</td><td>76.47</td><td>72.79</td><td>73.99</td></tr><tr><td>WikiBERT</td><td>73.29</td><td>64.48</td><td>69.24</td><td>80.32</td><td>74.51</td><td>75.42</td><td>69.94</td><td>72.45</td></tr><tr><td>NorBERT</td><td>74.30</td><td>70.69</td><td>72.09</td><td>82.56</td><td>76.91</td><td>79.33</td><td>73.74</td><td>75.66</td></tr><tr><td>NB-BERT-base</td><td>76.31</td><td>77.20</td><td>75.43</td><td>84.47</td><td>77.69</td><td>82.14</td><td>77.97</td><td>78.75</td></tr><tr><td>NB-BERT-large</td><td>77.07</td><td>83.65</td><td>80.28</td><td>86.24</td><td>81.87</td><td>84.37</td><td>78.44</td><td>81.70</td></tr></table>

(b) Performance on the Norwegian STS datasets of supervised models trained on both Norwegian and English NLI data.

<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STSB</td><td>SICKR</td><td>Avg.</td></tr><tr><td>mBERT</td><td>69.28</td><td>71.50</td><td>69.44</td><td>78.12</td><td>74.38</td><td>71.12</td><td>67.70</td><td>71.65</td></tr><tr><td>WikiBERT</td><td>70.14</td><td>71.18</td><td>71.79</td><td>77.56</td><td>76.20</td><td>74.20</td><td>67.32</td><td>72.63</td></tr><tr><td>NorBERT</td><td>70.79</td><td>74.46</td><td>72.44</td><td>80.66</td><td>77.73</td><td>76.65</td><td>71.56</td><td>74.90</td></tr><tr><td>NB-BERT-base</td><td>72.41</td><td>79.22</td><td>74.67</td><td>81.47</td><td>77.72</td><td>78.49</td><td>73.50</td><td>76.78</td></tr><tr><td>NB-BERT-large</td><td>74.67</td><td>83.65</td><td>79.47</td><td>84.15</td><td>81.82</td><td>82.25</td><td>74.75</td><td>80.11</td></tr></table>

(c) Performance on the Norwegian STS datasets of supervised models trained on Norwegian NLI data.

Table 2: Results of our models tested on the Norwegian STS datasets.
In Table 2a we see the performance of our unsupervised models on the Norwegian STS datasets. These are the results when testing without the MLP, which on average performs slightly better than keeping the MLP at test time.
In Table 2b we see the results from training supervised models on the combination of Norwegian and English NLI data, while Table 2c shows the performance when training on only Norwegian NLI data. We see that including the English data improves performance over training on Norwegian alone for all models.
We see that the supervised models perform much better than the unsupervised ones. This would usually not be surprising, but considering the supervised data is automatically translated and therefore presumably of lower quality than the unsupervised data, it is interesting to note.
### 5.2 Evaluation in English
In Table 3a we show the results from testing our unsupervised models on the English datasets. In Table 3b we show the results on the English STS data for our supervised models trained on the combined English and Norwegian dataset, while Table 3c shows the results for supervised models trained only on Norwegian data.
Since we have automatically translated the STS data, we are unsure how accurate the ground-truth labels in Norwegian will be, since there will be examples of sentence pairs whose similarity changes because of differing translations. However, we think that this should not influence comparisons between different models very much. This is supported by the fact that the internal ranking between models is the same for the Norwegian and the English datasets among the Norwegian unsupervised models. (The English models are unsurprisingly higher in the rankings when tested on English.)
One of the more interesting findings in this paper is how strong the performance of our models is on the English STS data. NB-BERT-base was initialized from the mBERT checkpoint, which can partly explain this, but not all models were started from a model pre-trained in English.

<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STSB</td><td>SICKR</td><td>Avg.</td></tr><tr><td>BERT (English)</td><td>54.76</td><td>70.77</td><td>57.39</td><td>69.32</td><td>69.19</td><td>61.66</td><td>66.29</td><td>64.20</td></tr><tr><td>RoBERTa (English)</td><td>65.26</td><td>77.06</td><td>67.09</td><td>76.88</td><td>76.71</td><td>75.32</td><td>65.60</td><td>71.99</td></tr><tr><td>mBERT</td><td>63.56</td><td>73.10</td><td>63.95</td><td>74.67</td><td>73.56</td><td>68.58</td><td>61.61</td><td>68.43</td></tr><tr><td>WikiBERT</td><td>64.68</td><td>77.60</td><td>67.04</td><td>76.20</td><td>76.30</td><td>74.63</td><td>65.34</td><td>71.68</td></tr><tr><td>NorBERT</td><td>52.96</td><td>62.30</td><td>54.99</td><td>67.45</td><td>69.83</td><td>63.68</td><td>62.40</td><td>61.94</td></tr><tr><td>NB-BERT-base</td><td>56.23</td><td>72.06</td><td>57.93</td><td>68.71</td><td>71.09</td><td>67.25</td><td>61.63</td><td>64.99</td></tr><tr><td>NB-BERT-large</td><td>72.54</td><td>83.68</td><td>76.08</td><td>83.03</td><td>81.09</td><td>81.32</td><td>68.80</td><td>78.08</td></tr></table>

(a) Performance of unsupervised models on the English STS datasets.

<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STSB</td><td>SICKR</td><td>Avg.</td></tr><tr><td>mBERT</td><td>76.88</td><td>79.69</td><td>77.58</td><td>84.99</td><td>78.52</td><td>81.36</td><td>77.30</td><td>79.47</td></tr><tr><td>WikiBERT</td><td>72.45</td><td>59.56</td><td>67.08</td><td>80.87</td><td>75.21</td><td>75.31</td><td>74.01</td><td>72.07</td></tr><tr><td>NorBERT</td><td>73.39</td><td>69.40</td><td>72.65</td><td>83.10</td><td>77.30</td><td>80.48</td><td>76.55</td><td>76.13</td></tr><tr><td>NB-BERT-base</td><td>76.93</td><td>78.78</td><td>77.76</td><td>85.28</td><td>80.29</td><td>82.96</td><td>78.49</td><td>80.07</td></tr><tr><td>NB-BERT-large</td><td>78.30</td><td>85.92</td><td>81.78</td><td>87.11</td><td>83.24</td><td>85.72</td><td>79.56</td><td>83.09</td></tr></table>

(b) Performance of supervised models on the English STS datasets, fine-tuned on both Norwegian and English MNLI.

<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STSB</td><td>SICKR</td><td>Avg.</td></tr><tr><td>mBERT</td><td>72.62</td><td>79.36</td><td>75.84</td><td>81.87</td><td>79.70</td><td>77.48</td><td>70.18</td><td>76.72</td></tr><tr><td>WikiBERT</td><td>65.47</td><td>65.30</td><td>67.40</td><td>76.86</td><td>73.12</td><td>68.91</td><td>60.59</td><td>68.24</td></tr><tr><td>NorBERT</td><td>66.90</td><td>68.62</td><td>69.63</td><td>79.35</td><td>76.23</td><td>73.38</td><td>69.66</td><td>71.97</td></tr><tr><td>NB-BERT-base</td><td>71.57</td><td>80.30</td><td>76.30</td><td>81.55</td><td>79.23</td><td>78.09</td><td>71.12</td><td>76.88</td></tr><tr><td>NB-BERT-large</td><td>76.42</td><td>85.58</td><td>81.23</td><td>85.49</td><td>83.21</td><td>83.15</td><td>75.04</td><td>81.45</td></tr></table>

(c) Performance of supervised models on the English STS datasets, fine-tuned on Norwegian MNLI.

Table 3: Results of our models tested on the English STS datasets.

The unsupervised NB-BERT-large achieves a score of 78.08 on English STS. For comparison, the best unsupervised model in the original SimCSE paper, SimCSE-RoBERTa-large, achieved a score of 78.90. Thus we have a model pre-trained on a Norwegian corpus (containing some English), further trained unsupervised in Norwegian, that scores less than 1% below the best English model trained in English. This model is also better than the best unsupervised English model in the original SentenceBERT paper. The supervised NB-BERT trained only on Norwegian NLI achieved a score of 81.45, while the version trained on Norwegian and English NLI achieves a score of 83.09. For comparison, the supervised original English SimCSE-BERT-base got a score of 81.57 and SimCSE-RoBERTa-large 83.76. Thus we achieve comparable performance between a supervised Norwegian large BERT and a supervised English base BERT when testing in English. Our best supervised model is less than 1% away from the best English SimCSE model, although this is less surprising than for the unsupervised models, since in this case we also fine-tune our model on English NLI. We also note that our best supervised model trained only on Norwegian is better than the best supervised English model in the SentenceBERT paper. Thus it does seem like the models learn enough to perform well at English sentence similarity even though the pre-training is mostly in Norwegian. The strong performance in English of the NB-BERT models was already noted in (Kummervold et al., 2021).
To better understand the above findings, we tested the English supervised SimCSE-RoBERTa-large on Norwegian STS, and it achieved only an average score of 54.23. Thus a very good English model scores badly in Norwegian, while a very good Norwegian model scores well in English. This might indicate that the reason the Norwegian models all perform so well in English is that there is enough English in the Norwegian training data (probably including many snippets in the nominally Norwegian parts) that the models learn quite a lot of English.
<table><tr><td>BERT</td><td>76.7</td></tr><tr><td>RoBERTa</td><td>79.8</td></tr><tr><td>mBERT</td><td></td></tr><tr><td>WikiBERT</td><td></td></tr><tr><td>NorBERT</td><td></td></tr><tr><td>NB-BERT-base</td><td>82.7</td></tr><tr><td>NB-BERT-large</td><td>89.7</td></tr></table>

(a) Performance of unsupervised models when fine-tuned on the Talk of Norway dataset.

<table><tr><td>mBERT</td><td>79.3</td></tr><tr><td>WikiBERT</td><td>82.6</td></tr><tr><td>NorBERT</td><td>85.7</td></tr><tr><td>NB-BERT-base</td><td>83.4</td></tr><tr><td>NB-BERT-large</td><td>89.3</td></tr></table>

(b) Performance of supervised models trained on Norwegian NLI when fine-tuned on the Talk of Norway dataset.

<table><tr><td>mBERT</td><td>79.2</td></tr><tr><td>WikiBERT</td><td>81.1</td></tr><tr><td>NorBERT</td><td>84.9</td></tr><tr><td>NB-BERT-base</td><td>83.3</td></tr><tr><td>NB-BERT-large</td><td>89.3</td></tr></table>

(c) Performance of supervised models trained on Norwegian and English NLI when fine-tuned on the Talk of Norway dataset.
Table 4: Performance of our models on the ToN dataset.
## 6 Results: classification
We report the macro F1 score for the binary classification tasks.
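Macro F1 is the unweighted mean of the per-class F1 scores, so both classes count equally regardless of their frequency. A minimal sketch of the metric (scikit-learn's `f1_score(average="macro")` is the standard implementation):

```python
def macro_f1(y_true, y_pred, labels=(0, 1)):
    """Macro F1: average the per-class F1 scores without
    weighting by class frequency."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```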
### 6.1 ToN binary classification
In Table 4a we see the performance of the unsupervised models when fine-tuned on the Talk of Norway dataset. In Table 4b we see the performance of the supervised models trained on Norwegian NLI and then fine-tuned on the ToN dataset, while Table 4c shows the performance when training on both Norwegian and English NLI.
We see that training the models to give better sentence embeddings gives some performance gains on this task, compared to fine-tuning the base model: in (Kummervold et al., 2021) it is reported that NB-BERT achieves a score of 81.8, while NorBERT scores 78.2 and mBERT 78.4 on this task. All our numbers are slightly higher.
We see that for this classification task, training the sentence models with English NLI data included did not help: the numbers are very similar with and without it.

<table><tr><td>BERT</td><td>63.1</td></tr><tr><td>RoBERTa</td><td>64.4</td></tr><tr><td>mBERT</td><td>70.3</td></tr><tr><td>WikiBERT</td><td>77.0</td></tr><tr><td>NorBERT</td><td>82.0</td></tr><tr><td>NB-BERT-base</td><td>84.3</td></tr><tr><td>NB-BERT-large</td><td>87.6</td></tr></table>

(a) Performance of unsupervised models, fine-tuned on the NoReC binary classification dataset.

<table><tr><td>mBERT</td><td>72.2</td></tr><tr><td>WikiBERT</td><td>77.9</td></tr><tr><td>NorBERT</td><td>82.4</td></tr><tr><td>NB-BERT-base</td><td>85.9</td></tr><tr><td>NB-BERT-large</td><td>87.0</td></tr></table>

(b) Performance of supervised models trained on only Norwegian NLI when fine-tuned on the NoReC binary classification dataset.

<table><tr><td>mBERT</td><td>74.4</td></tr><tr><td>WikiBERT</td><td>77.6</td></tr><tr><td>NorBERT</td><td>81.0</td></tr><tr><td>NB-BERT-base</td><td>84.9</td></tr><tr><td>NB-BERT-large</td><td>87.3</td></tr></table>

(c) Performance of supervised models trained on Norwegian and English NLI when fine-tuned on the NoReC binary classification dataset.
Table 5: Performance of our models on the NoReC binary classification dataset.
### 6.2 NoReC binary classification
In Table 5a we see the performance of unsupervised models on the NoReC binary classification task. In Table 5b we see the results of supervised models trained on Norwegian NLI, while in Table 5c we see the results of supervised models trained on Norwegian and English NLI.
For this task it is less clear that we get gains from training sentence embedding models: the highest reported number for this task is for NB-BERT-base, which is reported as 86.4 in (Kummervold et al., 2021) and 83.9 in (Kutuzov et al., 2021). Our best score for NB-BERT-base is 85.9, which does not improve on this. Our best model, NB-BERT-large, also does not achieve a score higher than about 87%, which is only slightly better than the smaller models. We do not know why we get improvements for ToN classification but not here. The mBERT model does improve with training, but that is not so surprising, since it is not as strong in Norwegian as most of the other models to begin with.
## 7 Discussion
We believe that our models perform well on the semantic sentence similarity task, even if we have no strict comparison, since this is the first evaluation of Norwegian sentence embedding models on the STS data. The Norwegian dataset corresponds to the English one, so the scores of English models on English STS and Norwegian models on Norwegian STS should in principle correspond to each other, but because of the extra noise added by the automatic translation we are not surprised that the Norwegian numbers are a bit worse. We see that the models improve a lot compared to before training, and because they perform quite well even on the English STS datasets, we are confident that they have indeed learned something useful in Norwegian.
The supervised models perform better than our unsupervised models even though the supervised models are trained on machine-translated data. This shows that machine-translated data can be useful for NLP in smaller languages, at least for tasks such as ours. The differences between the numbers we get for unsupervised and supervised training are similar to those in the original SimCSE paper. It is somewhat unclear to what extent the specific content and language of the training data matter for performing well on STS tasks. For example, one can improve the performance of English SimCSE by training on unrelated image data (Jian et al., 2022). This might be because the task is a form of clustering, and images and text in other languages are structurally similar enough that the models learn something useful.
From our experiments we also get comparisons of the different Norwegian language models. This is because this method of making sentence embeddings is mostly a way of extracting the knowledge already learned by the models, since the amount of training we do is much smaller than the amount of pre-training the models have already received. An unsurprising conclusion is that the scale of the model is the most important factor in making good language models. NB-BERT-large is the best model by clear margins in all of our evaluations. This conforms to the general tendency in recent NLP that scaling up models is more effective than tailoring data or architecture at a given scale. Next, we find that for binary classification the models NB-BERT-base and NorBERT perform quite similarly, while WikiBERT is generally a bit weaker, and all of them clearly outperform mBERT. For sentence similarity we find different rankings among the models: here unsupervised WikiBERT is the second-best model, while the supervised version is the weakest of the Norwegian supervised models. Supervised NB-BERT-base is clearly the second-best model, while NorBERT performs worse on the STS task.
We see that training sentence embedding models slightly improves performance on the binary classification tasks, but not by much compared with the base models. There is no clear tendency on whether supervised or unsupervised training improves performance on classification more, since the numbers we get are similar in both cases.
## References
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Iñigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252-263, Denver, Colorado. Association for Computational Linguistics.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81-91, Dublin, Ireland. Association for Computational Linguistics.
Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497-511, San Diego, California. Association for Computational Linguistics.
Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385-393, Montréal, Canada. Association for Computational Linguistics.
Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32-43, Atlanta, Georgia, USA. Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Empirical Methods in Natural Language Processing (EMNLP).
Tim Isbister and Magnus Sahlgren. 2020. Why not simply translate? A first Swedish evaluation benchmark for semantic similarity. CoRR, abs/2009.03116.
Yiren Jian, Chongyang Gao, and Soroush Vosoughi. 2022. Non-linguistic supervision for contrastive learning of sentence embeddings. In Advances in Neural Information Processing Systems.
Per Kummervold, Freddy Wetjen, and Javier de la Rosa. 2022. The Norwegian colossal corpus: A text corpus for training large Norwegian language models. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3852-3860, Marseille, France. European Language Resources Association.
Per E Kummervold, Javier De la Rosa, Freddy Wetjen, and Svein Arne Brygfjeld. 2021. Operationalizing a national digital library: The case for a Norwegian transformer model. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 20-29, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, and Stephan Oepen. 2021. Large-scale contextualised language modelling for Norwegian. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 30-40, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
Emanuele Lapponi, Martin G. Søyland, Erik Velldal, and Stephan Oepen. 2018. The Talk of Norway: a richly annotated corpus of the Norwegian parliament, 1998-2016. Language Resources and Evaluation, pages 1-21.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216-223, Reykjavik, Iceland. European Language Resources Association (ELRA).
Lilja Øvrelid, Petter Mæhlum, Jeremy Barnes, and Erik Velldal. 2020. A fine-grained sentiment dataset for Norwegian. In Proceedings of the 12th Edition of the Language Resources and Evaluation Conference, Marseille, France, 2020.
Sampo Pyysalo, Jenna Kanerva, Antti Virtanen, and Filip Ginter. 2021. WikiBERT models: Deep transfer learning for many languages. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 1-10, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Erik Velldal, Lilja Øvrelid, Eivind Alexander Bergem, Cathrine Stadsnes, Samia Touileb, and Fredrik Jørgensen. 2018. NoReC: The Norwegian review corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/tcxy7vRVKlg/Initial_manuscript_tex/Initial_manuscript.tex
§ TRAINING AND EVALUATING NORWEGIAN SENTENCE EMBEDDING MODELS

§ ABSTRACT

We train and evaluate Norwegian sentence embedding models using the contrastive learning methodology SimCSE. We start from pre-trained Norwegian encoder models and train both unsupervised and supervised models. The models are evaluated on a machine-translated version of semantic textual similarity datasets, as well as on binary classification tasks. We show that we can train good Norwegian sentence embedding models that clearly outperform the pre-trained encoder models, as well as the multilingual mBERT, on the task of sentence similarity.
§ 1 INTRODUCTION

Recently there has been a huge increase in the capabilities of natural language processing systems. The new dominant paradigm is to use a large language model such as BERT (Devlin et al., 2019) or GPT (Radford et al., 2018) as a starting model, which one adapts to any given task one wishes to solve. There exist several BERT-type encoder models for Norwegian (Kummervold et al., 2021; Kutuzov et al., 2021; Pyysalo et al., 2021). It is well known that BERT-type models, which give contextual word embeddings, do not give particularly good sentence embeddings (Reimers and Gurevych, 2019). For this reason we train and evaluate Norwegian sentence embedding models, using the pre-trained encoder models as starting points.

We train models using the state-of-the-art SimCSE methodology, similarly to the original paper (Gao et al., 2021). Like them, we train both unsupervised and supervised models. We start with a pre-trained bidirectional language encoder model such as BERT or RoBERTa (Liu et al., 2019). For the unsupervised version we sample texts from the Norwegian Colossal Corpus (NCC) dataset (Kummervold et al., 2022). We then pass them through the model using two different dropout masks and predict contrastively which pairs within a batch represent the same text. For the supervised version, we train on a machine-translated version of natural language inference (NLI) data, where we use sentences related by "entailment" as positive sentences, and sentences labeled as contradiction as hard negative sentences. We train on both the Norwegian dataset and a combined dataset of both Norwegian and English NLI data, and show that the latter gives better results for sentence representations in Norwegian. We evaluate our models on a machine-translated version of semantic textual similarity (STS) datasets, as well as on the sequence classification problem in the Norwegian "Talk of Norway" dataset and on the binary classification version of the NoReC review dataset (Velldal et al., 2018).

Our main contributions are:
1. We train and evaluate Norwegian unsupervised and supervised sentence embedding models.

2. We demonstrate a new way to compare the various existing Norwegian language models by measuring their performance after training them to make sentence embeddings.

3. We show that our sentence encoders sometimes get better performance than the base encoder on classification. In particular, we obtain new state-of-the-art results on the classification problem "Talk of Norway".

4. Through our experiments we illustrate the usefulness of machine-translated datasets for training and evaluating Norwegian language models. In particular, we show that supervised training on machine-translated data outperforms unsupervised training on Norwegian data.
§ 2 RELATED WORK

The fundamental technique we build on is that of training large transformer models (Vaswani et al., 2017). In particular, we utilize the large encoder models Bidirectional Encoder Representations from Transformers (BERT) and Robustly Optimized BERT (RoBERTa) by using them as pre-trained starting points.

Our work builds upon existing language models trained in Norwegian. The National Library of Norway has trained BERT models in Norwegian (Kummervold et al., 2021), which we call NB-BERT, and which exist in both base and large sizes. Also, the language technology group at the University of Oslo has trained their version of a BERT for Norwegian called NorBERT (Kutuzov et al., 2021). There is also a WikiBERT model trained on Norwegian Wikipedia (Pyysalo et al., 2021). We also test the multilingual version of BERT (Devlin et al., 2019), which is trained on Norwegian and many other languages.

Our work uses existing methodology for making sentence embedding models. The first paper to improve BERT to make better sentence representations by training it for that purpose was the Sentence-BERT paper (Reimers and Gurevych, 2019), which trained sentence embedding models using siamese networks. We build upon the newer Simple Contrastive Learning of Sentence Embeddings (SimCSE) methodology (Gao et al., 2021), which uses a contrastive training objective to create sentence embeddings from a pre-trained encoder. The idea behind both of these works is to find a training procedure that better extracts the knowledge about sentences that already exists in the pre-trained encoder model.
§ 3 DATA

For the unsupervised models, we sample data from the Norwegian Colossal Corpus (NCC) (Kummervold et al., 2022). This is a collection of smaller Norwegian text corpora that has been assembled into one corpus by the National Library of Norway for training language models. It is primarily a Norwegian corpus, although some other languages are present. The dataset description estimates that ${87}\%$ of documents are in Norwegian, with about $6 - 7\%$ of documents in English and the rest in other European languages (mostly other Nordic languages). We sample 1 million texts from the dataset for unsupervised training. Some are longer than one sentence, but all are truncated to a maximum of 32 tokens before training, so they are all approximately sentence length.

Sentence: Deltakerne mente at hvis interessenter var seriøse om å forbedre finansrapporteringsmodellen, ville en gruppe bli opprettet og finansiert spesielt for dette formålet.

Positive: Deltakerne forventer at seriøse interessenter vil danne en gruppe for å forbedre finansrapporteringsmodellen.

Negative: A group was created to improve the financial reporting model.

Figure 1: An example of a triplet of sentences of mixed language in the Norwegian/English NLI dataset.
For supervised training we use data collected for the task of natural language inference (NLI). This task consists of taking a pair of sentences and predicting the relationship between them as either "entailment", "neutral" or "contradiction". The authors of the SimCSE paper use NLI data to create triples of a sentence with one positive and one hard negative, and show that this data works well for training sentence models using contrastive learning; we follow this practice. We use a dataset that has been curated for training in Norwegian by the National Library of Norway.${}^{1}$ The original data is based on the English Stanford Natural Language Inference (SNLI) Corpus (Bowman et al., 2015) and the Multi-Genre Natural Language Inference (MNLI) dataset (Williams et al., 2018). The Norwegian data is machine translated from the MNLI dataset and has about 128 thousand triples. There is also a combined Norwegian and English version of the dataset, made by taking a combination of the translated Norwegian MNLI data and English MNLI and SNLI data.${}^{2}$ Also included are extra combined Norwegian/English sentence triples: for each of the translated triples there is a joint Norwegian/English triple consisting of one or two sentences in each of English and Norwegian; see Figure 1 for an example. The English/Norwegian dataset contains about 531 thousand triples of sentences.

${}^{1}$ https://huggingface.co/datasets/NbAiLab/mnli-norwegian

${}^{2}$ The same English data that was used to train English SimCSE: https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse

Sentence 1: en mann skjærer opp en agurk. Sentence 2: en mann skjærer en agurk. Similarity: 4.2

Sentence 1: en mann spiller harpe. Sentence 2: en mann spiller et keyboard. Similarity: 1.5

Figure 2: Examples from the translated STS-Benchmark dataset. Similarity ratings are from 0-5.
For evaluation we also machine translate the standard English datasets for semantic textual similarity: STS12-16 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS Benchmark (Cer et al., 2017), and SICK relatedness (Marelli et al., 2014). The task is to predict how similar a pair of sentences are to each other on a scale of 0-5. We use these datasets only for validation and testing, and never for training. In Figure 2 we see two examples from the translated STS Benchmark dataset.

The usage of translated datasets is a weakness compared to having original data in Norwegian. This project can also be viewed as an exploration of what performance is possible to obtain from automatically translated English datasets: to the degree they are shown to be useful, one will have much more data to potentially work with in Norwegian language processing. We note that for sentence similarity, a similar exploration of translated data has been done for Swedish (Isbister and Sahlgren, 2020). They conclude that they do not recommend the usage of automatically translated STS datasets for fine-tuning, but that it should probably have limited negative consequences for comparing models. We partly follow their recommendation: we only use translated STS data for validation and evaluation, but we do perform supervised training on translated NLI data.

§ 4 EXPERIMENTS
Our experiments follow the implementations in the SimCSE paper closely. We start with a pre-trained encoder model that is either BERT or RoBERTa.

For unsupervised training we sample one million texts from the NCC dataset. We then pass each text through the model using two different dropout masks to obtain two different text representations ${s}_{i}$ and ${s}_{i}^{ + }$ for each text. Here dropout functions as a form of continuous augmentation of embeddings. Then we contrastively predict which pairs of texts within a batch are the same, using cross-entropy loss on the cosine similarity scores. In other words, the loss for text $i$ is given by

$$
{\operatorname{loss}}_{i} = - \log \frac{{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{i}^{ + }}\right) /\tau }}{\mathop{\sum }\limits_{{j = 1}}^{b}{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{j}^{ + }}\right) /\tau }},
$$

where sim is cosine similarity and $\tau$ is a temperature hyperparameter, which we simply set to 0.05, the outcome of the optimization done in the SimCSE paper.
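To make the objective concrete, the unsupervised loss above can be sketched in PyTorch. This is a minimal illustration under assumed inputs (two batches of embeddings of the same texts, produced by two dropout passes), not the actual training code used in the experiments.

```python
import torch
import torch.nn.functional as F

def unsup_simcse_loss(s: torch.Tensor, s_pos: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """Unsupervised SimCSE loss.

    s, s_pos: (batch, dim) embeddings of the same texts under two
    different dropout masks; the diagonal pairs are the positives.
    """
    # Cosine similarity between every s_i and every s_j^+ -> (batch, batch)
    sim = F.cosine_similarity(s.unsqueeze(1), s_pos.unsqueeze(0), dim=-1) / tau
    # Cross-entropy with the diagonal as the target class implements
    # -log( exp(sim(s_i, s_i^+)/tau) / sum_j exp(sim(s_i, s_j^+)/tau) )
    labels = torch.arange(s.size(0))
    return F.cross_entropy(sim, labels)
```

Because the denominator runs over the whole batch, the effective number of negatives (and thus the loss itself) depends on the batch size, which is why gradient accumulation is not exactly equivalent to training with the full batch.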
For training unsupervised models, the models we start from are given by their names on Hugging Face as:

* bert-base-cased [English model]

* roberta-base [English model]

* bert-base-multilingual-cased

* TurkuNLP/wikibert-base-no-cased

* ltgoslo/norbert2

* NbAiLab/nb-bert-base

* NbAiLab/nb-bert-large

The English models are included as a sanity check: since we are using automatically translated datasets to choose the best models, we want to compare their performance with some models that are expected to perform worse than Norwegian models. For the same reason we also test on the English STS datasets.

We train the supervised models using NLI data where each sentence has one paired sentence labeled as entailment, which is regarded as a positive sample, and one sentence labeled as contradiction, which is considered a negative sample.
| Model | Avg. STS |
| --- | --- |
| BERT | 34.29 |
| RoBERTa | 25.56 |
| mBERT | 48.34 |
| WikiBERT | 42.21 |
| NorBERT | 54.42 |
| NB-BERT-base | 50.41 |
| NB-BERT-large | 49.90 |

Table 1: Average performance of models before training, using the average of the last layer, on Norwegian STS.
We thus obtain three different sentence representations ${s}_{i},{s}_{i}^{ + },{s}_{i}^{ - }$. As in the SimCSE paper, we train contrastively, trying to predict the positive pairs, and add the negative sentence representation ${s}_{i}^{ - }$ to the loss function as follows:

$$
{\operatorname{loss}}_{i} = - \log \frac{{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{i}^{ + }}\right) /\tau }}{\mathop{\sum }\limits_{{j = 1}}^{b}\left( {{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{j}^{ + }}\right) /\tau } + {e}^{\operatorname{sim}\left( {{s}_{i},{s}_{j}^{ - }}\right) /\tau }}\right) }
$$

(1)
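Eq. (1) can be sketched the same way: concatenating the in-batch positive similarities and the hard-negative similarities into one logit matrix is one standard way to implement it. As above, this is an illustration with assumed tensor shapes, not the authors' code.

```python
import torch
import torch.nn.functional as F

def sup_simcse_loss(s: torch.Tensor, s_pos: torch.Tensor, s_neg: torch.Tensor,
                    tau: float = 0.05) -> torch.Tensor:
    """Supervised SimCSE loss with hard negatives.

    s, s_pos, s_neg: (batch, dim) embeddings of the anchor sentences,
    their entailment sentences, and their contradiction sentences.
    """
    sim_pos = F.cosine_similarity(s.unsqueeze(1), s_pos.unsqueeze(0), dim=-1) / tau
    sim_neg = F.cosine_similarity(s.unsqueeze(1), s_neg.unsqueeze(0), dim=-1) / tau
    # The denominator of Eq. (1) sums over both the in-batch positives and
    # the hard negatives; the target for row i is still column i of sim_pos.
    logits = torch.cat([sim_pos, sim_neg], dim=1)  # (batch, 2 * batch)
    labels = torch.arange(s.size(0))
    return F.cross_entropy(logits, labels)
```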
For training supervised models we start with the following models:

* bert-base-multilingual-cased

* TurkuNLP/wikibert-base-no-cased

* ltgoslo/norbert2

* NbAiLab/nb-bert-base

* NbAiLab/nb-bert-large
We train with the same settings as in the SimCSE paper: we set a max sequence length of 32, and use the learning rates and batch sizes given in the appendix of the SimCSE paper (which vary by model type and size). Each model is trained on a single NVIDIA 3090 GPU. For some models we have to use gradient accumulation to achieve the correct batch size due to lack of GPU memory, which changes the training dynamics a bit, since the contrastive loss depends on the entire batch. We do not see any noticeable effects on results from this. We train with the Adam optimizer with linear weight decay, and put a multi-layer perceptron (MLP) on top of the model for training. Unsupervised models are trained for one epoch, and supervised models for three. The best model is selected by evaluating on the dev part of the STS Benchmark dataset. For evaluation we test both with and without this MLP, and find that testing without the MLP generally gives slightly better results. We train three versions of each model and report average scores.
The models are also fine-tuned on two Norwegian sequence classification tasks. Talk of Norway (ToN) is a subset of the Norwegian parliament speeches dataset (Lapponi et al., 2018), where the task is to classify whether a speech was given by SV or FrP (politically left or right, respectively), as selected in (Kummervold et al., 2021).${}^{3}$ NoReC is a dataset of reviews in Norwegian from different domains such as movies, video games and music (Velldal et al., 2018). From this dataset one can extract a binary classification task by taking the subset of reviews that are clearly positive or negative and letting the task be to classify them as such (Øvrelid et al., 2020). We take the text representations made by the model before the MLP, add a linear classification layer on top, and fine-tune the entire model on the training dataset. For both fine-tuning datasets we do a grid search for hyperparameters under the following conditions (these are the same hyperparameters as in the fine-tuning examples in the appendix of the original BERT paper (Devlin et al., 2019)):

* epochs = 2, 3, 4

* learning rate $= 2\mathrm{e}{-5}, 3\mathrm{e}{-5}, 5\mathrm{e}{-5}$

* batch size = 16, 32
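The grid above amounts to 18 fine-tuning runs per dataset, which can be sketched as a small loop. Here `train_and_eval` is a hypothetical stand-in for fine-tuning the encoder with a linear head under one configuration and returning the validation macro F1; it is not part of the original experiments.

```python
from itertools import product

def grid_search(train_and_eval):
    """Exhaustive search over the BERT-paper fine-tuning grid,
    keeping the configuration with the best validation macro F1."""
    best_f1, best_cfg = -1.0, None
    for epochs, lr, batch_size in product([2, 3, 4], [2e-5, 3e-5, 5e-5], [16, 32]):
        f1 = train_and_eval(epochs=epochs, lr=lr, batch_size=batch_size)
        if f1 > best_f1:
            best_f1, best_cfg = f1, (epochs, lr, batch_size)
    return best_cfg, best_f1
```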
We use the macro F1 score on the validation set to select the best model for each training run. We do three training runs and report the average of the test scores.
§ 5 RESULTS: SENTENCE SIMILARITY

We evaluate the trained models on the semantic textual similarity datasets, both on the Norwegian version of the datasets and on the original English. We report Spearman's correlation for the STS datasets.
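Concretely, STS evaluation measures the rank correlation between the model's cosine similarities and the human similarity ratings. A minimal sketch using SciPy's `spearmanr` (the function and its inputs, pairs of sentence embeddings plus gold ratings, are illustrative assumptions):

```python
import numpy as np
from scipy.stats import spearmanr

def sts_score(emb1: np.ndarray, emb2: np.ndarray, gold: np.ndarray) -> float:
    """Spearman correlation between cosine similarities of sentence
    pairs and gold similarity ratings on the 0-5 scale."""
    cos = np.sum(emb1 * emb2, axis=1) / (
        np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1)
    )
    return spearmanr(cos, gold).correlation
```

Because Spearman's correlation only uses ranks, it rewards models whose similarity ordering matches the human judgments, regardless of the absolute scale of the cosine scores.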
§ 5.1 EVALUATION IN NORWEGIAN

In Table 1 we see the average performance on the Norwegian STS before training, using the average of the last layer to compare embeddings. We also tested using the average of the first and last layers (giving similar numbers) and using the "cls" token (giving worse numbers). Thus we have a baseline to compare how much the models have learned from the training.

${}^{3}$ https://huggingface.co/datasets/NbAiLab/norwegian_parliament

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT | 55.21 | 49.64 | 49.29 | 63.68 | 54.39 | 54.67 | 50.93 | 53.97 |
| RoBERTa | 60.30 | 59.12 | 57.15 | 68.73 | 64.33 | 64.04 | 54.39 | 61.15 |
| mBERT | 60.88 | 62.31 | 55.91 | 70.78 | 66.80 | 61.87 | 57.13 | 62.24 |
| WikiBERT | 63.38 | 70.21 | 62.63 | 74.04 | 70.90 | 70.88 | 62.52 | 67.79 |
| NorBERT | 56.41 | 65.33 | 54.32 | 68.95 | 68.00 | 62.40 | 64.54 | 62.85 |
| NB-BERT-base | 59.40 | 70.70 | 57.93 | 71.87 | 69.94 | 69.25 | 63.98 | 66.15 |
| NB-BERT-large | 70.45 | 80.80 | 72.79 | 81.53 | 78.41 | 79.35 | 69.18 | 76.07 |

(a) Performance of unsupervised models on the Norwegian STS datasets.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT | 73.43 | 69.09 | 70.84 | 81.50 | 73.82 | 76.47 | 72.79 | 73.99 |
| WikiBERT | 73.29 | 64.48 | 69.24 | 80.32 | 74.51 | 75.42 | 69.94 | 72.45 |
| NorBERT | 74.30 | 70.69 | 72.09 | 82.56 | 76.91 | 79.33 | 73.74 | 75.66 |
| NB-BERT-base | 76.31 | 77.20 | 75.43 | 84.47 | 77.69 | 82.14 | 77.97 | 78.75 |
| NB-BERT-large | 77.07 | 83.65 | 80.28 | 86.24 | 81.87 | 84.37 | 78.44 | 81.70 |

(b) Performance on the Norwegian STS datasets of supervised models trained on both Norwegian and English NLI data.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT | 69.28 | 71.50 | 69.44 | 78.12 | 74.38 | 71.12 | 67.70 | 71.65 |
| WikiBERT | 70.14 | 71.18 | 71.79 | 77.56 | 76.20 | 74.20 | 67.32 | 72.63 |
| NorBERT | 70.79 | 74.46 | 72.44 | 80.66 | 77.73 | 76.65 | 71.56 | 74.90 |
| NB-BERT-base | 72.41 | 79.22 | 74.67 | 81.47 | 77.72 | 78.49 | 73.50 | 76.78 |
| NB-BERT-large | 74.67 | 83.65 | 79.47 | 84.15 | 81.82 | 82.25 | 74.75 | 80.11 |

(c) Performance on the Norwegian STS datasets of supervised models trained on Norwegian NLI data.

Table 2: Results of our models tested on the Norwegian STS datasets.
In Table 2a we see the performance of our unsupervised models on the Norwegian STS datasets. These are the results when we test without the MLP, which on average performs slightly better than also using the MLP for testing.

In Table 2b we see the results from training supervised models on the combination of Norwegian and English NLI data, while Table 2c shows the performance when training on only Norwegian NLI data. We see that training with English included improves performance over training only on Norwegian for all models.

We see that the supervised models perform much better than the unsupervised ones. This would usually not be surprising, but it is interesting to note, considering that the supervised data is automatically translated and therefore presumably of lower quality than the unsupervised data.
§ 5.2 EVALUATION IN ENGLISH
|
| 416 |
+
|
| 417 |
+
In Table 3a we show the results from testing our
|
| 418 |
+
|
| 419 |
+
485 unsupervised models on the English dataset. In
|
| 420 |
+
|
| 421 |
+
Table 3b we show the results from testing our su- 514 pervised models trained on the combined English and Norwegian dataset on the English STS data, while Table 3c shows the results for supervised models trained only on Norwegian data.
|
| 422 |
+
|
| 423 |
+
519
|
| 424 |
+
|
| 425 |
+
Since we have automatically translated the STS data, we are unsure how accurate the ground truth
|
| 426 |
+
|
| 427 |
+
labels in Norwegian will be, since there will be 522 examples of sentences where the similarity of the
|
| 428 |
+
|
| 429 |
+
sentences changes because of differing transla- 524 tions. However we think that this should not influence comparisons between different models very much. This is supported by the fact that the internal ranking between models for the Norwegian
|
| 430 |
+
|
| 431 |
+
and the English dataset is the same among the Nor- 529 wegian unsupervised models. (English models unsurprisingly are higher in the rankings when tested on English)
|
| 432 |
+
|
| 433 |
+
One of the more interesting findings in this pa- 534 per is how strong performance our models get on the English STS data. NB-BERT-base was initialized from the mBERT checkpoint which can
|
| 434 |
+
|
| 435 |
+
partly explain this, but not all models was started 538
|
| 436 |
+
|
| 437 |
+
from a model pre-trained in English. The un- 539
|
| 438 |
+
|
| 439 |
+
540 594
|
| 440 |
+
|
| 441 |
+
max width=
|
| 442 |
+
|
| 443 |
+
Model STS12 STS13 STS14 STS15 STS16 STSB SICKR $\mathbf{{Avg}.}$
|
| 444 |
+
|
| 445 |
+
1-9
|
| 446 |
+
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
|---|---|---|---|---|---|---|---|---|
| BERT (English) | 54.76 | 70.77 | 57.39 | 69.32 | 69.19 | 61.66 | 66.29 | 64.20 |
| RoBERTa (English) | 65.26 | 77.06 | 67.09 | 76.88 | 76.71 | 75.32 | 65.60 | 71.99 |
| mBERT | 63.56 | 73.10 | 63.95 | 74.67 | 73.56 | 68.58 | 61.61 | 68.43 |
| WikiBERT | 64.68 | 77.60 | 67.04 | 76.20 | 76.30 | 74.63 | 65.34 | 71.68 |
| NorBERT | 52.96 | 62.30 | 54.99 | 67.45 | 69.83 | 63.68 | 62.40 | 61.94 |
| NB-BERT-base | 56.23 | 72.06 | 57.93 | 68.71 | 71.09 | 67.25 | 61.63 | 64.99 |
| NB-BERT-large | 72.54 | 83.68 | 76.08 | 83.03 | 81.09 | 81.32 | 68.80 | 78.08 |

(a) Performance of unsupervised models on English STS datasets.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
|---|---|---|---|---|---|---|---|---|
| mBERT | 76.88 | 79.69 | 77.58 | 84.99 | 78.52 | 81.36 | 77.30 | 79.47 |
| WikiBERT | 72.45 | 59.56 | 67.08 | 80.87 | 75.21 | 75.31 | 74.01 | 72.07 |
| NorBERT | 73.39 | 69.40 | 72.65 | 83.10 | 77.30 | 80.48 | 76.55 | 76.13 |
| NB-BERT-base | 76.93 | 78.78 | 77.76 | 85.28 | 80.29 | 82.96 | 78.49 | 80.07 |
| NB-BERT-large | 78.30 | 85.92 | 81.78 | 87.11 | 83.24 | 85.72 | 79.56 | 83.09 |

(b) Performance of supervised models on English STS datasets, fine-tuned on both Norwegian and English MNLI.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
|---|---|---|---|---|---|---|---|---|
| mBERT | 72.62 | 79.36 | 75.84 | 81.87 | 79.70 | 77.48 | 70.18 | 76.72 |
| WikiBERT | 65.47 | 65.30 | 67.40 | 76.86 | 73.12 | 68.91 | 60.59 | 68.24 |
| NorBERT | 66.90 | 68.62 | 69.63 | 79.35 | 76.23 | 73.38 | 69.66 | 71.97 |
| NB-BERT-base | 71.57 | 80.30 | 76.30 | 81.55 | 79.23 | 78.09 | 71.12 | 76.88 |
| NB-BERT-large | 76.42 | 85.58 | 81.23 | 85.49 | 83.21 | 83.15 | 75.04 | 81.45 |

(c) Performance of supervised models on English STS datasets, fine-tuned on Norwegian MNLI.

Table 3: Results of our models tested on the English STS datasets.
The unsupervised NB-BERT-large achieves a score of 78.08 on English STS. For comparison, the best unsupervised model in the original SimCSE paper, SimCSE-RoBERTa-large, achieved a score of 78.90. Thus we have a model pre-trained on a Norwegian corpus (containing some English), further trained unsupervised in Norwegian, that scores less than 1% below the best English model trained in English. This model is also better than the best unsupervised English model in the original SentenceBERT paper. The supervised NB-BERT trained only on Norwegian NLI achieved a score of 81.45, while the version trained on Norwegian and English NLI achieved a score of 83.09. Comparably, the supervised original English SimCSE-BERT-base scored 81.57 and SimCSE-RoBERTa-large 83.76. Thus we achieve comparable performance between a supervised Norwegian large BERT and a supervised English base BERT when testing in English. Our best supervised model is less than 1% away from the best English SimCSE model, although this is less surprising than for the unsupervised models, since in this case we also fine-tune our model on English NLI. We also note that our best supervised model trained on only Norwegian is better than the best supervised English model in the SentenceBERT paper. Thus it does seem that the models learn a lot about English sentence similarity even though the pre-training is mostly in Norwegian. The strong performance of the NB-BERT models in English was already noted in (Kummervold et al., 2021).

To better understand the above findings, we tested the English supervised SimCSE-RoBERTa-large on Norwegian STS, where it achieved an average score of only 54.23. Thus a very good English model scores badly in Norwegian, while a very good Norwegian model scores well in English. This might indicate that the reason the Norwegian models all perform so well in English is that there is enough English in the Norwegian training data (probably including many snippets in the Norwegian parts) that the models learn quite a lot of English.
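For context on how the STS scores above are read: an STS score is typically the Spearman rank correlation (scaled by 100) between the model's cosine similarities for sentence pairs and the human similarity judgments. The sketch below illustrates that standard protocol with made-up embeddings and gold scores; it is not code or data from the paper.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(a, b):
    # Spearman correlation = Pearson correlation of the ranks
    # (simple rank method; ties are not averaged in this sketch).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# Toy stand-ins for sentence embeddings of four sentence pairs.
rng = np.random.default_rng(0)
emb_a = rng.standard_normal((4, 8))
emb_b = emb_a + 0.3 * rng.standard_normal((4, 8))   # loosely related sentences

gold = [4.5, 1.0, 3.2, 2.0]                          # human similarity scores (0-5)
pred = [cosine(u, v) for u, v in zip(emb_a, emb_b)]

score = 100 * spearman(pred, gold)                   # STS scores are reported x100
print(round(score, 2))
```

Only the ranking of the pairs matters for the score, which is why models with very different similarity scales can still be compared on STS.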
| Model | Macro F1 |
|---|---|
| BERT | 76.7 |
| RoBERTa | 79.8 |
| mBERT | |
| WikiBERT | |
| NorBERT | |
| NB-BERT-base | 82.7 |
| NB-BERT-large | 89.7 |

(a) Performance of unsupervised models when fine-tuned on the Talk of Norway dataset.

| Model | Macro F1 |
|---|---|
| mBERT | 79.3 |
| WikiBERT | 82.6 |
| NorBERT | 85.7 |
| NB-BERT-base | 83.4 |
| NB-BERT-large | 89.3 |

(b) Performance of supervised models trained on Norwegian NLI when fine-tuned on the Talk of Norway dataset.

| Model | Macro F1 |
|---|---|
| mBERT | 79.2 |
| WikiBERT | 81.1 |
| NorBERT | 84.9 |
| NB-BERT-base | 83.3 |
| NB-BERT-large | 89.3 |

(c) Performance of supervised models trained on Norwegian and English NLI when fine-tuned on the Talk of Norway dataset.

Table 4: Performance of our models on the ToN dataset.
§ 6 RESULTS CLASSIFICATION

We report macro F1 scores for the binary classification tasks.

§ 6.1 TON BINARY CLASSIFICATION

In Table 4a we see the performance of the unsupervised models when fine-tuned on the Talk of Norway dataset. Table 4b shows the performance of the supervised models trained on Norwegian NLI and then fine-tuned on the ToN dataset, while Table 4c shows the performance when training on both Norwegian and English NLI.

We see that training the models to give better sentence embeddings yields some performance gains on this task compared to fine-tuning the base model: in (Kummervold et al., 2021) it is reported that NB-BERT achieves a score of 81.8, while NorBERT scores 78.2 and mBERT 78.4 on this task. All our numbers are slightly higher.

We also see that for this classification task, training the sentence models with English NLI data included did not help: the numbers are very similar with and without it.

| Model | Macro F1 |
|---|---|
| BERT | 63.1 |
| RoBERTa | 64.4 |
| mBERT | 70.3 |
| WikiBERT | 77.0 |
| NorBERT | 82.0 |
| NB-BERT-base | 84.3 |
| NB-BERT-large | 87.6 |

(a) Performance of unsupervised models fine-tuned on the NoReC binary classification dataset.

| Model | Macro F1 |
|---|---|
| mBERT | 72.2 |
| WikiBERT | 77.9 |
| NorBERT | 82.4 |
| NB-BERT-base | 85.9 |
| NB-BERT-large | 87.0 |

(b) Performance of supervised models trained on only Norwegian NLI when fine-tuned on the NoReC binary classification dataset.

| Model | Macro F1 |
|---|---|
| mBERT | 74.4 |
| WikiBERT | 77.6 |
| NorBERT | 81.0 |
| NB-BERT-base | 84.9 |
| NB-BERT-large | 87.3 |

(c) Performance of supervised models trained on Norwegian and English NLI when fine-tuned on the NoReC binary classification dataset.

Table 5: Performance of our models on the NoReC binary classification dataset.
§ 6.2 NOREC BINARY CLASSIFICATION
In Table 5a we see the performance of unsupervised models on the NoReC binary classification task. Table 5b shows the results of supervised models trained on Norwegian NLI, while Table 5c shows the results of supervised models trained on Norwegian and English NLI.

For this task it is less clear that we gain from training sentence embedding models: the highest previously reported number for this task is for NB-BERT-base, reported as 86.4 in (Kummervold et al., 2021) and 83.9 in (Kutuzov et al., 2021). Our best score for NB-BERT-base is 85.9, which does not improve on this. Our best model, NB-BERT-large, also does not score higher than about 87%, which is only slightly better than the smaller models. We do not know why we get improvements for ToN classification but not here. The mBERT model does improve with training, but that is not so surprising, since it is not already as strong in Norwegian as most of the other models.
§ 7 DISCUSSION
We believe that our models perform well on the semantic sentence similarity task, even if we have no strict point of comparison, since this is the first evaluation of Norwegian sentence embedding models on the STS data. The Norwegian dataset corresponds to the English one, so the scores of English models on English STS and of Norwegian models on Norwegian STS should in principle correspond to each other, but given the extra noise added by the automatic translation we are not surprised that the Norwegian numbers are a bit worse. We see that the models improve a lot compared to before training, and because they perform quite well even on the English STS datasets, we are confident that they have indeed learned something useful in Norwegian.
The supervised models perform better than our unsupervised models even though the supervised models are trained on machine-translated data. This shows that machine-translated data can be useful for NLP in smaller languages, at least for some tasks such as ours. The differences between the numbers we get for unsupervised and supervised training are similar to those in the original SimCSE paper. It is somewhat unclear to what extent the specific content and language of the training data matter for performing well on STS tasks. For example, one can improve the performance of English SimCSE by training on unrelated image data (Jian et al., 2022). This might be because the task is a form of clustering, and images and text in other languages are structurally similar enough that the models learn something useful.
From our experiments we also get a comparison of the different Norwegian language models. This is because this method of making sentence embeddings is mostly a way of extracting the knowledge already learned by the models, since the amount of training we do is much smaller than the pre-training the models have already received. An unsurprising conclusion is that model scale is the most important factor in making good language models: NB-BERT-large is the best model by clear margins in all of our evaluations. This conforms to the general tendency in recent NLP that scaling up models is more effective than tailoring data or architecture at a given scale. Next, we find that for binary classification the models NB-BERT-base and NorBERT perform quite similarly, while WikiBERT is generally a bit weaker, and all of them clearly outperform mBERT. For sentence similarity we find different rankings among the models: here unsupervised WikiBERT is the second best model, while the supervised version is the weakest of the Norwegian supervised models. Supervised NB-BERT-base is clearly the second best model, while NorBERT performs worse on the STS task.
We see that training sentence embedding models slightly improves performance on the binary classification tasks, but not by much compared with the base models. There is no clear tendency as to whether supervised or unsupervised training improves classification performance more, since the numbers we get are similar in both cases.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_md/Initial_manuscript.md
# Uncertainty-Aware Natural Language Inference with Stochastic Weight Averaging
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract
This paper introduces Bayesian uncertainty modeling using Stochastic Weight Averaging-Gaussian (SWAG) in Natural Language Understanding (NLU) tasks. We apply the approach to standard tasks in natural language inference (NLI) and demonstrate the effectiveness of the method in terms of prediction accuracy and correlation with human annotation disagreements. We argue that the uncertainty representations in SWAG better reflect subjective interpretation and the natural variation that is also present in human language understanding. The results reveal the importance of uncertainty modeling, an often neglected aspect of neural language modeling, in NLU tasks.
## 1 Introduction
Arguably, human language understanding is neither objective nor deterministic. The same utterance or text can be interpreted in different ways by different people depending on their language standards, background knowledge and world views, the linguistic context, as well as the situation in which the utterance or text appears. This uncertainty about potential readings is typically not modeled in Natural Language Understanding (NLU) research and is often ignored in NLU benchmarks and datasets. Instead, they usually assign a single interpretation as a gold standard to be predicted by an artificial system, ignoring the inherent ambiguity of language and the potential disagreements that humans arrive at.

Some datasets like SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) do, however, contain information about different readings in the form of annotation disagreement. These datasets include the labels from five different rounds of annotation, which in some cases show clear disagreement about the correct label for the sentence pair. Those labeling discrepancies can certainly be a result of annotation mistakes, but more commonly they arise from differences in understanding the task, the given information and how it relates to world knowledge and personal experience.

Moving towards uncertainty-aware neural language models, we present our initial results using Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) and SWA-Gaussian (SWAG) (Maddox et al., 2019) on the task of Natural Language Inference. SWAG provides a scalable approach to calibrate neural networks and to model uncertainty representations, and is straightforward to apply with standard neural architectures. Our study addresses two main questions:

- How does uncertainty modeling using SWAG influence prediction performance and generalization in NLI tasks?

- How well does the calibrated model reflect human disagreement and annotation variance?

In this paper, we first test the performance of SWA and SWAG on the SNLI and MNLI tasks. We then study whether adding weight averaging improves the generalization power of NLI models, as tested through cross-dataset experiments. Finally, we analyse the probability distributions from SWA and SWAG to test how well the model uncertainty corresponds to annotator disagreements.
## 2 Background and Related Work
### 2.1 Uncertainty in human annotations
In a recent position paper, Plank (2022) argues that instead of taking human label variation as a problem, we should embrace it as an opportunity and take it into consideration in all the steps of the ML pipeline: data, modeling and evaluation. The paper provides a comprehensive survey of research on (i) reasons for human label variation, (ii) modeling human label variation, and (iii) evaluating with human label variation.
Pavlick and Kwiatkowski (2019) studied human disagreements in NLI tasks and argue that we should move to an evaluation objective that more closely corresponds to the natural interpretation variance that exists in data. Such a move would require that NLU models be properly calibrated to reflect the distribution we can expect and, hence, move to a more natural inference engine.
Chen et al. (2020) propose Uncertain NLI (UNLI), a task that moves away from categorical labels to probabilistic values. They use a scalar regression model and show that the model predictions correlate with human judgement.

### 2.2 Representing Model Uncertainty

The approach to uncertainty modeling that we consider is related to the well-established technique of model ensembling. Stochastic optimization procedures applied in training deep neural networks are non-deterministic and depend on hyper-parameters and initial seeds. Ensembles have been used as a pragmatic solution to average over several solutions, and the positive impact on model performance pushed ensembling into the standard toolbox of deep learning. Related to ensembling is the technique of checkpoint averaging (see e.g. Gao et al., 2022), which is also known to improve performance.

Intuitively, ensembles and checkpoint averages also reflect the idea of different views and interpretations of the data and, therefore, provide a framework for uncertainty modeling. SWA and SWAG build on that idea, and SWAG provides a generic and efficient approach for approximating Bayesian uncertainty and calibrating the model.

SWA (Izmailov et al., 2018) is a checkpoint averaging method that tracks the optimization trajectory of a model during training, using the average of the encountered values as the eventual parameters:
$$
\theta_{\mathrm{SWA}} = \frac{1}{T}\sum_{i=1}^{T}\theta_i \tag{1}
$$
with $\theta_{\mathrm{SWA}}$ denoting the SWA solution for parameter $\theta$ after $T$ epochs of training.

SWAG (Maddox et al., 2019) extends this method to estimate Gaussian posteriors for the model parameters, by also estimating a covariance matrix for the parameters. For computational feasibility, a low-rank plus diagonal approximation to the covariance matrix is used:

$$
\Sigma_{\text{low-rank}} \approx \frac{1}{T-1}\sum_{i=1}^{T}\left(\theta_i - \widehat{\theta}_i\right)\left(\theta_i - \widehat{\theta}_i\right)^{\top} \tag{2}
$$

$$
\Sigma_{\text{diag}} = \operatorname{diag}\left(\frac{1}{T}\sum_{i=1}^{T}\theta_i^{2} - \theta_{\mathrm{SWA}}^{2}\right) \tag{3}
$$

where $\widehat{\theta}_i$ in (2) is the running estimate of the parameters' mean obtained from the first $i$ samples. The resulting posterior approximation is given by

$$
\theta_{\mathrm{SWAG}} \sim \mathcal{N}\left(\theta_{\mathrm{SWA}},\ \tfrac{1}{2}\left(\Sigma_{\text{diag}} + \Sigma_{\text{low-rank}}\right)\right) \tag{4}
$$
Once the posteriors are thus approximated, at test time the model is used by sampling from the approximated posteriors $N$ times and taking the average of the predicted distributions from these samples as the answer of the model.

One of the advantages of SWAG is the possibility to seamlessly start from any pre-trained solution. Approximating the posterior is then done during fine-tuning, without the need to change the underlying model.
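Equations (1)-(4) can be sketched end-to-end in plain NumPy on a toy parameter vector. The trajectory length, dimensionality, and noise level below are made up for illustration; the real method collects transformer weight snapshots during fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training trajectory": T parameter snapshots theta_i (e.g. one per epoch),
# simulated here as noisy versions of a true parameter vector.
T, d = 20, 5
theta_true = np.arange(d, dtype=float)
snapshots = [theta_true + 0.1 * rng.standard_normal(d) for _ in range(T)]

# SWA solution (Eq. 1): plain average of the snapshots.
theta_swa = np.mean(snapshots, axis=0)

# Diagonal covariance (Eq. 3): second moment minus squared SWA mean.
diag_var = np.mean(np.square(snapshots), axis=0) - theta_swa**2
diag_var = np.clip(diag_var, 1e-12, None)   # guard against negative rounding error

# Low-rank part (Eq. 2): deviations from the running mean estimate theta_hat_i.
running_mean = np.zeros(d)
dev = []
for i, theta in enumerate(snapshots, start=1):
    running_mean += (theta - running_mean) / i
    dev.append(theta - running_mean)
D = np.stack(dev)                            # rows are (theta_i - theta_hat_i)

def sample_swag(n_samples):
    """Draw parameter samples from the SWAG posterior (Eq. 4)."""
    z1 = rng.standard_normal((n_samples, d))
    z2 = rng.standard_normal((n_samples, T))
    return (theta_swa
            + np.sqrt(0.5 * diag_var) * z1
            + (z2 @ D) / np.sqrt(2 * (T - 1)))

# At test time, predictions would be averaged over N such parameter samples.
samples = sample_swag(100)
print(samples.mean(axis=0).round(2))
```

The two noise terms correspond to the diagonal and low-rank halves of the covariance in Eq. (4); averaging predictions over many draws yields the calibrated output distribution.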
## 3 Experiments
We test the performance of SWA and SWAG on the natural language inference task using three NLI datasets, including cross-dataset experiments, and study the effect on both hard and soft labeling.
### 3.1 Datasets
We use the Stanford Natural Language Inference corpus (SNLI) (Bowman et al., 2015) and the Multi-Genre Natural Language Inference (MNLI) corpus (Williams et al., 2018) as the datasets in our experiments. We also study the cross-dataset generalization capability of the model with and without weight averaging. For those experiments we additionally include SICK (Marelli et al., 2014) as a test set. In the cross-dataset generalization experiments we first fine-tune the model on training data from one NLI dataset (e.g. SNLI) and then test on a test set from another NLI dataset (e.g. MNLI-mm).
SNLI is a dataset of 570k sentence pairs which have been manually labeled with entailment, contradiction, and neutral labels. The source for the premise sentences in SNLI were image captions from the Flickr30k corpus (Young et al., 2014).
MNLI consists of 433k sentence pairs labeled with entailment, contradiction and neutral, containing examples from ten genres of written and spoken English. Five of the genres are included in the training set. The development and test sets have been split into matched (MNLI-m) and mismatched (MNLI-mm) sets, where the former includes only sentences from the same genres as the training data, and the latter includes genres not present in the training data.¹
SICK includes 9,840 examples involving logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was constructed automatically by taking pairs of sentences from a random subset of the 8K ImageFlickr dataset (Young et al., 2014) and the SemEval 2012 STS MSR-Video Description dataset (Agirre et al., 2012), using a rule-based approach to construct examples for the different logical inference types.
### 3.2 Methods
In all the experiments we fine-tune a pre-trained RoBERTa-base model (Liu et al., 2019) from the Hugging Face Transformers library (Wolf et al., 2020). As is common practice in NLI tasks, we use the majority-vote gold labels for training even when multiple annotations are available.

We add stochastic weight averaging to the RoBERTa model by using the SWA implementation from PyTorch 1.12² and the SWAG implementation by Maddox et al. (2019).³ To study how well SWA and SWAG perform in NLI compared to a baseline model, we ran the same fine-tuning on the SNLI and MNLI datasets utilizing SWA and SWAG for weight averaging.
| Dataset | Method | Acc (%) | SD | Δ |
|---|---|---|---|---|
| SNLI | base | 90.80 | 0.26 | |
| SNLI | SWA | 91.47 | 0.24 | +0.67 |
| SNLI | SWAG | 91.59 | 0.14 | +0.79 |
| MNLI-m | base | 86.53 | 0.20 | |
| MNLI-m | SWA | 87.60 | 0.19 | +1.07 |
| MNLI-m | SWAG | 87.76 | 0.12 | +1.23 |
| MNLI-mm | base | 86.31 | 0.26 | |
| MNLI-mm | SWA | 87.34 | 0.29 | +1.03 |
| MNLI-mm | SWAG | 87.51 | 0.19 | +1.20 |
Table 1: Comparison of SWA and SWAG performance on NLI benchmarks (mean accuracy and standard deviation over 5 runs). $\Delta$ is the difference to the baseline result (base) with no weight averaging.
### 3.3 Results
The standard evaluation for the NLI task is accuracy on aggregated gold labels. However, as two of the test sets (from SNLI and MNLI) also contain multiple human annotations, we additionally use those for measuring the cross entropy of the predicted distribution against the human label distribution (soft labeling, e.g. Peterson et al., 2019; Pavlick and Kwiatkowski, 2019).
#### 3.3.1 Accuracy
The basic classification results are in Table 1. We report average accuracies and standard deviations over 5 runs with different random seeds.

Both SWA and SWAG provide significant improvements over the baseline without weight averaging. SWAG performs slightly better than SWA across all three experiments.

In order to test whether weight averaging improves the generalization capability of NLI models, we further performed cross-dataset generalization tests following Talman and Chatzikyriakidis (2019). The results are reported in Table 2.
The results of cross-dataset experiments are
|
| 278 |
+
|
| 279 |
+
slightly mixed: We do not notice a clear advan- 313 tage of SWAG over SWA, but with the exception of training with MNLI and testing with SICK, we do notice improvement for weight averaging approaches as compared to the baseline. The performance on SICK drops significantly in all cases and the difference between the approaches is minimal, showing that the NLI training data is not a good fit for that benchmark.
|
| 280 |
+
|
| 281 |
+
The other cross-dataset results highlight the ad-
|
| 282 |
+
|
| 283 |
+
vantage of weight averaging, indicating that the 323
|
| 284 |
+
|
| 285 |
+
---
|
| 286 |
+
|
| 287 |
+
${}^{1}$ As the test data for MNLI have not been made publicly available, we use the development sets when reporting the results for MNLI.
|
| 288 |
+
|
| 289 |
+
2 https://pytorch.org/docs/1.12/optim.html#stochastic-weight-averaging
|
| 290 |
+
|
| 291 |
+
'https://github.com/wjmaddox/swa_gaus sian
|
| 292 |
+
|
| 293 |
+
---
|
| 294 |
+
|
| 295 |
+
324
|
| 296 |
+
|
| 297 |
+
<table><tr><td>Dataset</td><td>Method</td><td>$\mathbf{{Acc}\left( \% \right) }$</td><td>SD</td><td>$\Delta$</td></tr><tr><td>SNLI $\rightarrow$ MNLI-m</td><td>base</td><td>77.31</td><td>0.57</td><td/></tr><tr><td>SNLI $\rightarrow$ MNLI-m</td><td>SWA</td><td>79.67</td><td>0.37</td><td>2.37</td></tr><tr><td>SNLI $\rightarrow$ MNLI-m</td><td>SWAG</td><td>79.33</td><td>0.21</td><td>2.03</td></tr><tr><td>$\mathrm{{SNLI}} \rightarrow$ MNLI-mm</td><td>base</td><td>77.40</td><td>0.78</td><td/></tr><tr><td>SNLI $\rightarrow$ MNLI-mm</td><td>SWA</td><td>79.44</td><td>0.19</td><td>2.04</td></tr><tr><td>SNLI $\rightarrow$ MNLI-mm</td><td>SWAG</td><td>79.24</td><td>0.29</td><td>1.84</td></tr><tr><td>$\mathrm{{SNLI}} \rightarrow \mathrm{{SICK}}$</td><td>base</td><td>57.08</td><td>0.77</td><td/></tr><tr><td>SNLI $\rightarrow$ SICK</td><td>SWA</td><td>57.09</td><td>0.32</td><td>0.01</td></tr><tr><td>SNLI $\rightarrow$ SICK</td><td>SWAG</td><td>57.17</td><td>0.37</td><td>0.08</td></tr><tr><td>$\mathrm{{MNLI}} \rightarrow \mathrm{{SNLI}}$</td><td>base</td><td>82.84</td><td>0.74</td><td/></tr><tr><td>$\mathrm{{MNLI}} \rightarrow \mathrm{{SNLI}}$</td><td>SWA</td><td>84.15</td><td>0.35</td><td>1.31</td></tr><tr><td>$\mathrm{{MNLI}} \rightarrow \mathrm{{SNLI}}$</td><td>SWAG</td><td>84.45</td><td>0.27</td><td>1.61</td></tr><tr><td>$\mathrm{{MNLI}} \rightarrow \mathrm{{SICK}}$</td><td>base</td><td>56.63</td><td>0.94</td><td/></tr><tr><td>$\mathrm{{MNLI}} \rightarrow \mathrm{{SICK}}$</td><td>SWA</td><td>56.17</td><td>0.60</td><td>-0.46</td></tr><tr><td>MNLI $\rightarrow$ SICK</td><td>SWAG</td><td>56.53</td><td>0.91</td><td>-0.10</td></tr></table>
Table 2: Cross-dataset experiments with and without weight averaging (mean accuracy and standard deviation over 5 runs with different random seeds), where the left-hand side of the arrow is the training set and the right-hand side is the test set.

improved modeling of uncertainty can lead to better generalizations.
#### 3.3.2 Cross Entropy
We also test how well weight averaging approaches can model annotator disagreement and annotation uncertainty in the NLI test sets of SNLI and MNLI. These two datasets come with five annotation labels for every data point, often with high disagreement between human annotators, indicating inherently confusing data points with high aleatoric uncertainty (Der Kiureghian and Ditlevsen, 2009). For quantifying the goodness of fit of the model predictions, we calculate the cross entropy between the predicted and annotation distributions.
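As a concrete sketch of this evaluation (our own illustration, not code from the paper), the five annotator labels of an item can be turned into a probability distribution over the three NLI classes and scored against a model's predicted distribution; the model output `q` below is a hypothetical softmax vector:

```python
import math
from collections import Counter

LABELS = ["entailment", "neutral", "contradiction"]

def annotation_distribution(labels):
    """Turn the annotator labels of one item into a probability
    distribution over the three NLI classes."""
    counts = Counter(labels)
    total = sum(counts.values())
    return [counts.get(label, 0) / total for label in LABELS]

def cross_entropy(p_human, q_model, eps=1e-12):
    """H(p, q) = -sum_i p_i * log(q_i); eps guards against log(0)."""
    return -sum(p * math.log(max(q, eps)) for p, q in zip(p_human, q_model))

# 4 of 5 annotators chose entailment, one chose neutral.
p = annotation_distribution(["entailment"] * 4 + ["neutral"])
q = [0.7, 0.25, 0.05]  # hypothetical model output after softmax
score = cross_entropy(p, q)
```

Lower scores indicate predicted distributions closer to the human label distribution.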
Table 3 depicts the resulting cross entropy values, with lower values denoting more faithful predictions. SWA and SWAG consistently yield distributions more similar to the annotation distributions, complementing their overall better accuracy results (Section 3.3). In contrast to the accuracy results, here SWAG outperforms SWA in all cases, indicating that the Gaussian posterior helps to model the data uncertainty more accurately. The results also carry over to the cross-dataset experiments, as shown in the table.
The comparison between system predictions
<table><tr><td>$\mathbf{{Dataset}}$</td><td>$\mathbf{{Method}}$</td><td>Cross Entropy</td><td>$\Delta$</td></tr><tr><td>SNLI</td><td>base</td><td>0.83</td><td/></tr><tr><td>SNLI</td><td>SWA</td><td>0.75</td><td>-0.08</td></tr><tr><td>SNLI</td><td>SWAG</td><td>0.69</td><td>-0.14</td></tr><tr><td>MNLI-m</td><td>base</td><td>0.87</td><td/></tr><tr><td>MNLI-m</td><td>SWA</td><td>0.80</td><td>-0.07</td></tr><tr><td>MNLI-m</td><td>SWAG</td><td>0.73</td><td>-0.14</td></tr><tr><td>MNLI-mm</td><td>base</td><td>0.84</td><td/></tr><tr><td>MNLI-mm</td><td>SWA</td><td>0.77</td><td>-0.07</td></tr><tr><td>MNLI-mm</td><td>SWAG</td><td>0.69</td><td>-0.15</td></tr><tr><td>$\mathrm{{SNLI}} \rightarrow$ MNLI-m</td><td>base</td><td>1.13</td><td/></tr><tr><td>SNLI $\rightarrow$ MNLI-m</td><td>SWA</td><td>0.90</td><td>-0.23</td></tr><tr><td>SNLI $\rightarrow$ MNLI-m</td><td>SWAG</td><td>0.80</td><td>-0.33</td></tr><tr><td>SNLI $\rightarrow$ MNLI-mm</td><td>base</td><td>1.12</td><td/></tr><tr><td>SNLI $\rightarrow$ MNLI-mm</td><td>SWA</td><td>0.88</td><td>-0.24</td></tr><tr><td>SNLI $\rightarrow$ MNLI-mm</td><td>SWAG</td><td>0.79</td><td>-0.33</td></tr><tr><td>$\mathrm{{MNLI}} \rightarrow \mathrm{{SNLI}}$</td><td>base</td><td>1.04</td><td/></tr><tr><td>$\mathrm{{MNLI}} \rightarrow \mathrm{{SNLI}}$</td><td>SWA</td><td>0.97</td><td>-0.07</td></tr><tr><td>$\mathrm{{MNLI}} \rightarrow \mathrm{{SNLI}}$</td><td>SWAG</td><td>0.89</td><td>-0.15</td></tr></table>
Table 3: Cross entropy between the predicted and annotation distributions for the base, SWA and SWAG methods. $\Delta$ is the difference to the baseline cross entropy values.
and annotator variation deserves some further analysis. A preliminary study (see examples in Appendix A) indicates that the prediction uncertainty in SWAG for individual instances follows human annotation confusion very well. Furthermore, we identified cases with a larger mismatch between system predictions and human disagreement, where the latter is mainly caused by erroneous or at least questionable annotation decisions. This points to the use of SWAG in an active learning scenario, where annotation noise can be identified using a well-calibrated prediction model.
## 4 Conclusions
Our results show that weight averaging provides consistent and significant improvements on both the SNLI and MNLI datasets. The cross-dataset results are slightly mixed but also show a trend of improved cross-domain generalization. Finally, we demonstrate a clear increase in the correlation with human annotation variance when comparing SWAG with non-Bayesian approaches.
For future work, we consider making use of multiple annotations also during training, as well as extensions of SWAG such as MultiSWAG (Wilson and Izmailov, 2020). We also plan to test the methods on different NLU datasets, especially those with a high number of annotations (e.g. Nie et al., 2020), and to compare annotation variation and system predictions in more detail.
---
${}^{4}$ Note that for the baseline and SWA models, we consider the output of the final softmax function as the predicted distribution, while for the SWAG model we use the average output distribution from $N = {20}$ sampled models.
---
## References
Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.

Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, and Benjamin Van Durme. 2020. Uncertain natural language inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8772-8779, Online. Association for Computational Linguistics.

Armen Der Kiureghian and Ove Ditlevsen. 2009. Aleatory or epistemic? Does it matter? Structural Safety, 31(2):105-112.

Yingbo Gao, Christian Herold, Zijian Yang, and Hermann Ney. 2022. Revisiting checkpoint averaging for neural machine translation. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 188-196, Online only. Association for Computational Linguistics.

Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018. Averaging weights leads to wider optima and better generalization. Uncertainty in Artificial Intelligence (UAI).

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. 2019. A simple baseline for Bayesian uncertainty in deep learning. Advances in Neural Information Processing Systems, 32.

Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216-223, Reykjavik, Iceland. European Language Resources Association (ELRA).

Yixin Nie, Xiang Zhou, and Mohit Bansal. 2020. What can we learn from collective human opinions on natural language inference data? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9131-9143, Online. Association for Computational Linguistics.

Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transactions of the Association for Computational Linguistics, 7:677-694.

Joshua C. Peterson, Ruairidh M. Battleday, Thomas L. Griffiths, and Olga Russakovsky. 2019. Human uncertainty makes classification more robust. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9616-9625.

Barbara Plank. 2022. The "problem" of human label variation: On ground truth in data, modeling and evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10671-10682. Association for Computational Linguistics.

Aarne Talman and Stergios Chatzikyriakidis. 2019. Testing the generalization power of neural network models across NLI benchmarks. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 85-94, Florence, Italy. Association for Computational Linguistics.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.

Andrew G Wilson and Pavel Izmailov. 2020. Bayesian deep learning and a probabilistic perspective of generalization. In Advances in Neural Information Processing Systems, volume 33, pages 4697-4708. Curran Associates, Inc.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67-78.

Table 4: Comparison of probability distributions of human annotations vs. SWAG model predictions, for three randomly selected data points from the SNLI dataset. (Left and middle) Correctly predicted cases, as indicated by low cross entropy. (Right) A wrongly predicted case, as indicated by high cross entropy. SWAG points indicate the output probability distributions from the $N = {20}$ samples.
## A Appendix
Here we showcase and discuss three randomly selected data points from the SNLI dataset, and compare the predictions of the $N = {20}$ samples from the SWAG model with the annotation distributions for each of these points. Table 4 presents two cases (left and middle) in which the SWAG model makes the correct prediction, and another case (right) in which the model makes the wrong prediction. In the high-agreement cases, indicated by lower cross entropies between the annotations and the prediction, the SWAG model not only selects the correct label for the instance, but also predicts the annotator disagreement correctly both when such a disagreement exists (middle) and when it does not (left).
The third figure presents a case where the predictions of the SWAG samples are more certain than expected: annotators disagree on whether the hypothesis is Entailment or Neutral, whereas the model predictions place all probability mass on the Neutral class. The corresponding cross entropy is high, which reflects this disagreement. It should be noted that this is also a fairly controversial and difficult data point, and concluding Entailment requires making some strong assumptions. Ideally, such disagreements between system predictions and annotator distributions may also be used as cues within the training process itself.
Two potential avenues are (1) using the incongruence between the two distributions as the loss signal to drive the optimization process directly (as opposed to using only the gold label and the predicted class label), and (2) using the incongruence in predictions in an active learning scenario.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,509 @@
§ UNCERTAINTY-AWARE NATURAL LANGUAGE INFERENCE WITH STOCHASTIC WEIGHT AVERAGING
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
This paper introduces Bayesian uncertainty modeling using Stochastic Weight Averaging-Gaussian (SWAG) in Natural Language Understanding (NLU) tasks. We apply the approach to standard tasks in natural language inference (NLI) and demonstrate the effectiveness of the method in terms of prediction accuracy and correlation with human annotation disagreements. We argue that the uncertainty representations in SWAG better reflect the subjective interpretation and natural variation that is also present in human language understanding. The results reveal the importance of uncertainty modeling, an often neglected aspect of neural language modeling, in NLU tasks.
§ 1 INTRODUCTION
Arguably, human language understanding is neither objective nor deterministic. The same utterance or text can be interpreted in different ways by different people, depending on their language standards, background knowledge and world views, the linguistic context, as well as the situation in which the utterance or text appears. This uncertainty about potential readings is typically not modeled in Natural Language Understanding (NLU) research and is often ignored in NLU benchmarks and datasets. Instead, they usually assign a single interpretation as a gold standard to be predicted by an artificial system, ignoring the inherent ambiguity of language and the potential disagreements that humans arrive at.
Some datasets like SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) do, however, contain information about different readings in the form of annotation disagreement. These datasets include the labels from five different rounds of annotation, which in some cases show clear disagreement about the correct label for the sentence pair. Those labeling discrepancies can certainly be a result of annotation mistakes, but more commonly they arise from differences in understanding the task, the given information, and how it relates to world knowledge and personal experience.
Moving towards uncertainty-aware neural language models, we present our initial results using Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) and SWA-Gaussian (SWAG) (Maddox et al., 2019) on the task of Natural Language Inference. SWAG provides a scalable approach to calibrating neural networks and modeling uncertainty representations, and is straightforward to apply with standard neural architectures. Our study addresses two main questions:
* How does uncertainty modeling using SWAG influence prediction performance and generalization in NLI tasks?
* How well does the calibrated model reflect human disagreement and annotation variance?
In this paper, we first test the performance of SWA and SWAG on the SNLI and MNLI tasks. We then study whether adding weight averaging improves the generalization power of NLI models, as tested through cross-dataset experiments. Finally, we analyse the probability distributions from SWA and SWAG to test how well the model uncertainty corresponds to annotator disagreements.
§ 2 BACKGROUND AND RELATED WORK
§ 2.1 UNCERTAINTY IN HUMAN ANNOTATIONS
In a recent position paper, Plank (2022) argues that instead of taking human label variation as a problem, we should embrace it as an opportunity and take it into consideration in all steps of the ML pipeline: data, modeling and evaluation. The paper provides a comprehensive survey of research on (i) reasons for human label variation, (ii) modeling human label variation, and (iii) evaluating with human label variation.
Pavlick and Kwiatkowski (2019) studied human disagreements in NLI tasks and argue that we should move to an evaluation objective that more closely corresponds to the natural interpretation variance that exists in data. Such a move would require that NLU models be properly calibrated to reflect the distribution we can expect and, hence, move to a more natural inference engine.
Chen et al. (2020) propose Uncertain NLI (UNLI), a task that moves away from categorical labels into probabilistic values. They use a scalar regression model and show that the model predictions correlate with human judgement.
§ 2.2 REPRESENTING MODEL UNCERTAINTY
The approach to uncertainty modeling that we consider is related to the well-established technique of model ensembling. Stochastic optimization procedures applied in training deep neural networks are non-deterministic and depend on hyper-parameters and initial seeds. Ensembles have been used as a pragmatic solution to average over several solutions, and their positive impact on model performance pushed ensembling into the standard toolbox of deep learning. Related to ensembling is the technique of checkpoint averaging (see, e.g., Gao et al., 2022), which is also known to improve performance.
Intuitively, ensembles and checkpoint averages also reflect the idea of different views and interpretations of the data and, therefore, provide a framework for uncertainty modeling. SWA and SWAG build on that idea, and SWAG provides a generic and efficient approach for approximating Bayesian uncertainty and model calibration.
SWA (Izmailov et al., 2018) is a checkpoint averaging method that tracks the optimization trajectory of a model during training, using the average of the encountered values as the final parameters:
$$
{\theta }_{\mathrm{{SWA}}} = \frac{1}{T}\mathop{\sum }\limits_{{i = 1}}^{T}{\theta }_{i} \tag{1}
$$
with ${\theta }_{\mathrm{{SWA}}}$ denoting the SWA solution for parameter $\theta$ after $T$ epochs of training.
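Equation (1) amounts to a plain running average over the parameter snapshots collected during training. A minimal sketch (our illustration, with parameters flattened into lists) uses the incremental form so that snapshots need not be stored:

```python
def swa_running_average(snapshots):
    """Equal-weight running average of parameter snapshots (eq. 1).
    Each snapshot is a flat list of parameter values; the update
    mean += (theta_i - mean) / i gives the same result as summing
    all snapshots and dividing by T, without storing them."""
    mean = [0.0] * len(snapshots[0])
    for i, theta in enumerate(snapshots, start=1):
        for j, t in enumerate(theta):
            mean[j] += (t - mean[j]) / i
    return mean
```

In practice the snapshots would be the model weights at the end of each training epoch.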
SWAG (Maddox et al., 2019) extends this method to estimate Gaussian posteriors for the model parameters by also estimating a covariance matrix for the parameters. For computational feasibility, a low-rank plus diagonal approximation to the covariance matrix is used:
$$
{\Sigma }_{\text{low-rank}} \approx \frac{1}{T - 1}\mathop{\sum }\limits_{{i = 1}}^{T}\left( {{\theta }_{i} - {\widehat{\theta }}_{i}}\right) {\left( {\theta }_{i} - {\widehat{\theta }}_{i}\right) }^{\top } \tag{2}
$$

$$
{\Sigma }_{\text{diag}} = \operatorname{diag}\left( {\frac{1}{T}\mathop{\sum }\limits_{{i = 1}}^{T}{\theta }_{i}^{2} - {\theta }_{\mathrm{{SWA}}}^{2}}\right) \tag{3}
$$
where ${\widehat{\theta }}_{i}$ in (2) is the running estimate of the parameters’ mean obtained from the first $i$ samples.
The resulting posterior approximation is given by
$$
{\theta }_{\mathrm{{SWAG}}} \sim \mathcal{N}\left( {{\theta }_{\mathrm{{SWA}}},\frac{1}{2}\left( {{\Sigma }_{\text{diag}} + {\Sigma }_{\text{low-rank}}}\right) }\right) \tag{4}
$$
Once the posterior has been approximated, at test time the model is used by drawing $N$ samples from the approximate posterior and taking the average of the predicted distributions from these samples as the output of the model.
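The moment-tracking and sampling steps can be sketched in a few lines of plain Python (our illustration; the low-rank term of eq. 2 and the actual network are omitted, so this is the diagonal-only variant):

```python
import random

class DiagSWAG:
    """Minimal diagonal-only SWAG sketch: track running first and second
    moments of the weights during training (eqs. 1 and 3) and draw
    test-time samples from N(theta_SWA, Sigma_diag)."""

    def __init__(self, dim):
        self.n = 0
        self.mean = [0.0] * dim      # running estimate of theta_SWA
        self.sq_mean = [0.0] * dim   # running average of theta_i^2

    def collect(self, theta):
        """Call once per collected checkpoint."""
        self.n += 1
        for j, t in enumerate(theta):
            self.mean[j] += (t - self.mean[j]) / self.n
            self.sq_mean[j] += (t * t - self.sq_mean[j]) / self.n

    def sample(self):
        """One draw theta ~ N(theta_SWA, Sigma_diag); the variance
        E[theta^2] - theta_SWA^2 is clamped at zero for stability."""
        return [m + random.gauss(0.0, max(s - m * m, 0.0) ** 0.5)
                for m, s in zip(self.mean, self.sq_mean)]
```

At test time one would draw $N$ such parameter samples, run the network with each, and average the resulting predicted distributions.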
One of the advantages of SWAG is the possibility to seamlessly start from any pre-trained solution. Approximating the posterior is then done during fine-tuning, without the need to change the underlying model.
§ 3 EXPERIMENTS
We test the performance of SWA and SWAG on the natural language inference task using three NLI datasets, including cross-dataset experiments, and study the effect on both hard and soft labeling.
§ 3.1 DATASETS
We use the Stanford Natural Language Inference corpus (SNLI) (Bowman et al., 2015) and the Multi-Genre Natural Language Inference (MNLI) corpus (Williams et al., 2018) as the datasets in our experiments. We also study the cross-dataset generalisation capability of the model with and without weight averaging. For those experiments we additionally include SICK (Marelli et al., 2014) as a test set. In the cross-dataset generalization experiments we first fine-tune the model on the training data of one NLI dataset (e.g. SNLI) and then test on the test set of another NLI dataset (e.g. MNLI-mm).
SNLI is a dataset of ${570}\mathrm{k}$ sentence pairs which have been manually labeled with entailment, contradiction, and neutral labels. The premise sentences in SNLI were sourced from image captions in the Flickr30k corpus (Young et al., 2014).
MNLI consists of ${433}\mathrm{k}$ sentence pairs labeled with entailment, contradiction and neutral, containing examples from ten genres of written and spoken English. Five of the genres are included in the training set. The development and test sets have been split into matched (MNLI-m) and mismatched (MNLI-mm) sets, where the former includes only sentences from the same genres as the training data, and the latter includes genres not present in the training data. ${}^{1}$
SICK includes 9,840 examples covering logical inference phenomena (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was constructed automatically by taking pairs of sentences from a random subset of the 8K ImageFlickr dataset (Young et al., 2014) and the SemEval 2012 STS MSR-Video Description dataset (Agirre et al., 2012), using a rule-based approach to construct examples for the different logical inference types.
§ 3.2 METHODS
In all the experiments we fine-tune a pre-trained RoBERTa-base model (Liu et al., 2019) from the Hugging Face Transformers library (Wolf et al., 2020). As is common practice in NLI tasks, we use the majority-vote gold labels for training even when multiple annotations are available.
We add stochastic weight averaging to the RoBERTa model using the SWA implementation from PyTorch 1.12 and the SWAG implementation by Maddox et al. (2019). To study how well SWA and SWAG perform on NLI compared to a baseline model, we ran the same fine-tuning on the SNLI and MNLI datasets utilizing SWA and SWAG for weight averaging.
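With the PyTorch utilities mentioned above, attaching SWA to an existing fine-tuning loop takes only a few lines. The sketch below is ours, not the paper's code: it uses a toy linear classifier in place of RoBERTa, and the data, learning rates, and warm-up point are placeholders:

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR

model = torch.nn.Linear(4, 3)            # stand-in for RoBERTa + classification head
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
swa_model = AveragedModel(model)         # maintains the running average (eq. 1)
swa_scheduler = SWALR(optimizer, swa_lr=1e-3)

for epoch in range(5):
    x = torch.randn(8, 4)                # placeholder batch
    y = torch.randint(0, 3, (8,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch >= 2:                       # start averaging after a warm-up phase
        swa_model.update_parameters(model)
        swa_scheduler.step()
```

After training, `swa_model` holds $\theta_{\mathrm{SWA}}$ and is used for prediction (models with batch normalization would additionally need `torch.optim.swa_utils.update_bn`).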
<table><tr><td>Dataset</td><td>Method</td><td>Acc (%)</td><td>SD</td><td>$\Delta$</td></tr><tr><td>SNLI</td><td>base</td><td>90.80</td><td>0.26</td><td>-</td></tr><tr><td>SNLI</td><td>SWA</td><td>91.47</td><td>0.24</td><td>+0.67</td></tr><tr><td>SNLI</td><td>SWAG</td><td>91.59</td><td>0.14</td><td>+0.79</td></tr><tr><td>MNLI-m</td><td>base</td><td>86.53</td><td>0.20</td><td>-</td></tr><tr><td>MNLI-m</td><td>SWA</td><td>87.60</td><td>0.19</td><td>+1.07</td></tr><tr><td>MNLI-m</td><td>SWAG</td><td>87.76</td><td>0.12</td><td>+1.23</td></tr><tr><td>MNLI-mm</td><td>base</td><td>86.31</td><td>0.26</td><td>-</td></tr><tr><td>MNLI-mm</td><td>SWA</td><td>87.34</td><td>0.29</td><td>+1.03</td></tr><tr><td>MNLI-mm</td><td>SWAG</td><td>87.51</td><td>0.19</td><td>+1.20</td></tr></table>
Table 1: Comparison of SWA and SWAG performance on NLI benchmarks (mean accuracy and standard deviation over 5 runs). $\Delta$ is the difference to the baseline result (base) with no weight averaging.
§ 3.3 RESULTS
The standard evaluation for the NLI task is the accuracy on aggregated gold labels. However, as two of the test data sets (from SNLI and MNLI) also contain multiple human annotations, we also use those for measuring the cross entropy of the predicted distribution against the human label distribution (soft labeling, e.g. Peterson et al., 2019; Pavlick and Kwiatkowski, 2019).
§ 3.3.1 ACCURACY
The basic classification results are in Table 1. We report average accuracies and standard deviations over 5 runs with different random seeds.

Both SWA and SWAG provide significant improvements over the baseline without weight averaging. SWAG performs slightly better than SWA across all three experiments.

In order to test whether weight averaging improves the generalization capability of NLI models, we further performed cross-dataset generalization tests following Talman and Chatzikyriakidis (2019). The results are reported in Table 2.
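The cross-dataset protocol evaluates a model fine-tuned on one dataset directly, without further adaptation, on the test set of another. A toy sketch of the accuracy computation, with made-up label lists standing in for real model output:

```python
# Sketch of the cross-dataset evaluation: a model fine-tuned on one NLI
# dataset is scored, unchanged, on another dataset's test labels.

def accuracy(gold, predicted):
    """Fraction of examples where the predicted label matches the gold label."""
    assert len(gold) == len(predicted)
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)

# Hypothetical predictions of an SNLI-trained model on 5 SICK test examples.
gold = ["entailment", "neutral", "contradiction", "neutral", "entailment"]
pred = ["entailment", "neutral", "neutral", "neutral", "contradiction"]
print(accuracy(gold, pred))  # 3 of 5 correct: 0.6
```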
The results of the cross-dataset experiments are slightly mixed: we do not observe a clear advantage of SWAG over SWA, but, with the exception of training on MNLI and testing on SICK, we do see an improvement for the weight averaging approaches compared to the baseline. The performance on SICK drops considerably in all cases and the difference between the approaches is minimal, showing that the NLI training data is not a good fit for that benchmark.
|
| 311 |
+
|
| 312 |
+
The other cross-dataset results highlight the advantage of weight averaging, indicating that improved modeling of uncertainty can lead to better generalization.
¹ As the test data for MNLI have not been made publicly available, we use the development sets when reporting the results for MNLI.
² https://pytorch.org/docs/1.12/optim.html#stochastic-weight-averaging
³ https://github.com/wjmaddox/swa_gaussian
| Dataset | Method | Acc (%) | SD | Δ |
|---|---|---|---|---|
| SNLI → MNLI-m | base | 77.31 | 0.57 | – |
| SNLI → MNLI-m | SWA | 79.67 | 0.37 | +2.37 |
| SNLI → MNLI-m | SWAG | 79.33 | 0.21 | +2.03 |
| SNLI → MNLI-mm | base | 77.40 | 0.78 | – |
| SNLI → MNLI-mm | SWA | 79.44 | 0.19 | +2.04 |
| SNLI → MNLI-mm | SWAG | 79.24 | 0.29 | +1.84 |
| SNLI → SICK | base | 57.08 | 0.77 | – |
| SNLI → SICK | SWA | 57.09 | 0.32 | +0.01 |
| SNLI → SICK | SWAG | 57.17 | 0.37 | +0.08 |
| MNLI → SNLI | base | 82.84 | 0.74 | – |
| MNLI → SNLI | SWA | 84.15 | 0.35 | +1.31 |
| MNLI → SNLI | SWAG | 84.45 | 0.27 | +1.61 |
| MNLI → SICK | base | 56.63 | 0.94 | – |
| MNLI → SICK | SWA | 56.17 | 0.60 | −0.46 |
| MNLI → SICK | SWAG | 56.53 | 0.91 | −0.10 |

Table 2: Cross-dataset experiments with and without weight averaging (mean accuracy and standard deviation over 5 runs with different random seeds); the left-hand side of the arrow is the training set and the right-hand side is the test set.
§ 3.3.2 CROSS ENTROPY
We also test how well the weight averaging approaches can be used to model annotator disagreement and annotation uncertainty in the NLI test sets of SNLI and MNLI. These two datasets come with five annotator labels for every data point, often with high disagreement between the human annotators, indicating inherently confusing data points with high aleatoric uncertainty (Der Kiureghian and Ditlevsen, 2009). For quantifying the goodness of fit of the model predictions, we calculate the cross entropy between the predicted and annotation distributions.
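The soft-label scoring can be sketched as follows: the five annotator votes per example define an empirical label distribution, and the model's predicted distribution is scored against it with cross entropy. The vote counts and model probabilities below are invented for illustration:

```python
import math

# Sketch: cross entropy between a human annotation distribution (built from
# 5 annotator votes) and a model's predicted distribution over the three
# NLI classes. Votes and model probabilities are made up.

LABELS = ["entailment", "neutral", "contradiction"]

def annotation_distribution(votes):
    # Empirical distribution over the annotator votes.
    return [votes.count(c) / len(votes) for c in LABELS]

def cross_entropy(p_human, p_model):
    # H(p_human, p_model) = -sum_i p_human[i] * log p_model[i]
    return -sum(p * math.log(q) for p, q in zip(p_human, p_model) if p > 0)

votes = ["entailment", "entailment", "entailment", "neutral", "entailment"]
p_human = annotation_distribution(votes)   # [0.8, 0.2, 0.0]
p_model = [0.7, 0.2, 0.1]                  # hypothetical model output
print(round(cross_entropy(p_human, p_model), 3))
```

A model whose predicted distribution matches the annotator distribution exactly attains the minimum possible cross entropy (the entropy of the annotation distribution itself).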
Table 3 depicts the resulting cross entropy values, with lower values denoting more faithful predictions. SWA and SWAG result in distributions consistently more similar to those of the annotations, complementing their overall better accuracy results (Section 3.3.1). In contrast to the accuracy results, here SWAG outperforms SWA in all cases, indicating that the Gaussian posterior helps to model the data uncertainty more accurately. The results also carry over to the cross-dataset experiments, as shown in the table.
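As noted in footnote 4, the SWAG predictive distribution is the average of the per-sample softmax outputs over N = 20 sampled models. A toy sketch with invented per-model distributions (real sampling draws weights from the fitted Gaussian posterior and runs the model):

```python
# Sketch of the SWAG prediction step: sample N models from the fitted
# Gaussian weight posterior, run each on the input, and average the
# resulting softmax distributions. The distributions below are made up.

def average_prediction(distributions):
    """Average per-model class distributions into one predictive distribution."""
    n = len(distributions)
    k = len(distributions[0])
    return [sum(d[i] for d in distributions) / n for i in range(k)]

# Three hypothetical sampled models' softmax outputs over 3 NLI classes.
sampled = [
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.80, 0.10, 0.10],
]
print(average_prediction(sampled))  # approximately [0.7, 0.2, 0.1]
```

Averaging over sampled models is what spreads probability mass onto plausible alternative labels, which is why SWAG tracks annotator disagreement better than a single softmax output.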
| Dataset | Method | Cross Entropy | Δ |
|---|---|---|---|
| SNLI | base | 0.83 | – |
| SNLI | SWA | 0.75 | −0.08 |
| SNLI | SWAG | 0.69 | −0.14 |
| MNLI-m | base | 0.87 | – |
| MNLI-m | SWA | 0.80 | −0.07 |
| MNLI-m | SWAG | 0.73 | −0.14 |
| MNLI-mm | base | 0.84 | – |
| MNLI-mm | SWA | 0.77 | −0.07 |
| MNLI-mm | SWAG | 0.69 | −0.15 |
| SNLI → MNLI-m | base | 1.13 | – |
| SNLI → MNLI-m | SWA | 0.90 | −0.23 |
| SNLI → MNLI-m | SWAG | 0.80 | −0.33 |
| SNLI → MNLI-mm | base | 1.12 | – |
| SNLI → MNLI-mm | SWA | 0.88 | −0.24 |
| SNLI → MNLI-mm | SWAG | 0.79 | −0.33 |
| MNLI → SNLI | base | 1.04 | – |
| MNLI → SNLI | SWA | 0.97 | −0.07 |
| MNLI → SNLI | SWAG | 0.89 | −0.15 |

Table 3: Comparison of cross entropies between the predicted and data annotation distributions for the base, SWA, and SWAG methods. Δ is the difference to the baseline cross entropy values.
The comparison between system predictions and annotator variation deserves some further analysis. A preliminary study (see examples in Appendix A) indicates that the prediction uncertainty of SWAG for individual instances follows the human annotation confusion very well. Furthermore, we identified cases with a larger mismatch between system predictions and human disagreement where the latter is mainly caused by erroneous, or at least questionable, annotation decisions. This points to the use of SWAG in an active learning scenario, where annotation noise can be identified using a well-calibrated prediction model.
§ 4 CONCLUSIONS
Our results show that weight averaging provides consistent and significant improvements on both the SNLI and MNLI datasets. The cross-dataset results are slightly mixed but also show a trend of improved cross-domain generalization. Finally, we demonstrate a clear increase in the correlation with human annotation variance when comparing SWAG with non-Bayesian approaches.

For future work we consider making use of multiple annotations also during training, as well as extensions of SWAG such as MultiSWAG (Wilson and Izmailov, 2020). We also plan to test the methods on different NLU datasets, especially those with a high number of annotations (e.g. Nie et al., 2020), and to compare annotation variation and system predictions in more detail.
⁴ Note that for the baseline and SWA models, we consider the output of the final softmax function as the predicted distribution, while for the SWAG model we use the average output distribution from N = 20 sampled models.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wEJaCIkgLG/Initial_manuscript_md/Initial_manuscript.md
# Danish Clinical Named Entity Recognition and Relation Extraction
## Abstract
Electronic health records contain important information regarding the patients' medical history, but much of this information is stored in unstructured narrative text. This paper presents the first Danish clinical named entity recognition and relation extraction dataset for extraction of six types of clinical events, six types of attributes, and three types of relations. The dataset contains 11,607 paragraphs from Danish electronic health records containing 54,631 clinical events, 41,954 attributes, and 14,604 relations. We detail the methodology of developing the annotation scheme, and train a transformer-based architecture on the developed dataset with macro F1 performance of 60.05%, 44.85%, and 70.64% for clinical events, attributes, and relations, respectively.
## 1 Introduction
Electronic health records (EHRs) contain important information regarding the patients' medical history, including diagnoses, medications, treatment plans, allergies, and test results. However, much of this information is stored in unstructured narrative text. While this information could be used to guide diagnostic decision making and treatment plans, the unstructured format makes it infeasible to fully exploit in clinical practice and research.

Natural language processing (NLP) algorithms could be used to transform the unstructured narrative text of the EHR into structured information and give medical doctors (MDs) a fast overview of even a medical history spanning multiple years.

NLP models' ability to process and extract information from written text keeps improving, with benchmark-breaking models being published on a regular basis. For example, transformer-based models such as GPT-3 (Brown et al., 2020), BERT (Devlin et al., 2019), and ELECTRA (Clark et al., 2020) have recently shown promising results on many NLP tasks, e.g. named entity recognition (NER) and relation extraction. In NER, models are trained to tag words with predefined entities and to find the relations between them. In clinical NER, entities such as diseases, treatments, drugs, and tests have been extracted automatically from EHRs. However, many of the developed datasets are only in English and cover specific clinical specialities or note types (Uzuner et al., 2007, 2010; Bethard et al., 2016).

This paper describes the methodology for developing the first Danish clinical NER dataset. The dataset consists of text paragraphs from Danish EHRs spanning multiple departments and note types.

First, the paper describes the clinical dataset, the strategy for choosing entities tailored to extract important information from EHRs, and the annotation scheme. Next, we train a transformer-based architecture on the developed NER dataset.
## 2 Methods
This section describes the data, the annotation scheme, and the model used for Danish clinical NER.
### 2.1 Data
We extracted 11,607 paragraphs with a length between 11 and 75 words from EHRs from Odense University Hospital in Denmark. Paragraphs were sampled randomly from different EHR note types across every department of the hospital to ensure that the data distribution would resemble that of the EHRs: 46% were from clinical contacts, 13% primary journals, 10% care data, 3% epicrises, 3% ambulatory care contacts, 2% surgical notes, 2% emergency room journals, and 20% were from 55 different minor EHR note types. Paragraphs were lowercased and anonymised by two of the authors.
<table><tr><td>Clinical event</td><td>Description</td></tr><tr><td>Disease</td><td>A disorder of structure or function, especially one that has a known cause and a distinctive group of symptoms, signs, or anatomical changes. Examples include cancer, influenza, and narcolepsy.</td></tr><tr><td>Symptom</td><td>A symptom is a physical or mental feature which is regarded as indicating a condition of disease, particularly such a feature that is apparent to the patient. We include abnormal findings, which the MD makes when examining the patient objectively, as these are sometimes coinciding with symptoms, e.g. bruises. Examples include headache, stomach ache, and pain.</td></tr><tr><td>Diagnostic</td><td>Any tool or method concerned with the diagnosis of illnesses or other problems. Includes measurements and tests. Examples include CT scans, blood samples, and temperatures.</td></tr><tr><td>Treatment</td><td>A treatment is any medical care given to a patient for an illness or injury. Examples include medication, plaster, and rehabilitation.</td></tr><tr><td>Anatomy</td><td>Any part of human anatomy. Includes body fluids and excrements. Examples include arms, organs, and blood.</td></tr><tr><td>Result</td><td>All results of diagnostics that do not carry any meaning without being coupled to the diagnostic. Examples include numbers that indicate length, temperature, or volumes. Diseases or symptoms found by diagnostics are annotated as such, e.g. a tumour found by a CT scan.</td></tr></table>
Table 1: Description of clinical events. Descriptions were inspired by the Oxford English Dictionary.
### 2.2 Annotation
#### 2.2.1 Annotation scheme
Two MDs with expert clinical domain knowledge developed the annotation scheme through an iterative process of making annotation rules and testing them.
Annotation rules were made to extract clinically relevant information from the medical history. The focus was for the rules to be as complete as possible, capturing all important information about the medical history, while still being simple for the annotators to use.
We extracted three types of information: clinical events, the attributes of the clinical events, and relations between the clinical events.
Clinical events were: diseases; symptoms, including abnormal findings; diagnostics; treatments; anatomies, including body fluids and excrements; and results. Symptoms and abnormal findings were joined into one category as they sometimes coincided. Normal findings were not included as there were so many that they would cloud the visualisation of the history. Table 1 shows all clinical events and their descriptions as defined by the medical experts.
<table><tr><td>Attributes</td><td>Description</td></tr><tr><td>Prior</td><td>Entities that occurred in prior admissions or in the distant past. Includes treatments that are being stopped at that point in time.</td></tr><tr><td>Current</td><td>Entities that occur in the present. Includes prescribed medicine.</td></tr><tr><td>Future</td><td>Entities that occur or might occur in the future, e.g. the risk of skin cancer, or ordering diagnostics for a later day.</td></tr><tr><td>Doubt</td><td>Any entity that is not confirmed. Includes any treatments that might need to be started in the future.</td></tr><tr><td>Negation</td><td>Entities such as diseases or symptoms that are mentioned as not being present.</td></tr><tr><td>Non-patient</td><td>Entities that are not related to the patient in question. One example is the disease history of the patient's relatives.</td></tr></table>
Table 2: Description of attributes.
Clinical events were further described by their attributes. Attributes were: prior; current; future; doubt; negation; and non-patient. All clinical events could take one of the six attributes except anatomies and results. Anatomies did not take any attributes while results could only take a prior or current attribute. Table 2 shows all attributes and their descriptions.
Clinical events could connect to each other in limited ways through one-way relations. Diseases, diagnostics, and symptoms could connect to anatomies through a "has location" relation. Diseases, symptoms, and anatomies could connect to treatments through an "is treated with" relation. Diagnostics could connect to results through a "has result" relation.
Figure 1 shows an overview of the clinical events, attributes, and relations. Appendix A shows the full annotation guidelines with further details and explanations for the annotators.
#### 2.2.2 Annotation process
Six annotators were recruited for the task. Five were Master of Science in Medicine students and one was an MD.

Figure 2 shows the process of annotator training. It included reading the annotation guide and an iterative process of annotating a learning set of 55 paragraphs (not included in the dataset) followed by error analysis, until a final test was made on a set of 98 gold paragraphs annotated by an expert MD. Paragraphs were annotated using the CLAMP software (Soysal et al., 2017). We report the micro F1 of each annotator on the gold set.

Figure 3 shows an example of an annotated paragraph.
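Annotator performance against the gold set is scored with micro F1, i.e. precision and recall computed over all entity mentions pooled together. A minimal sketch with made-up (span, label) annotations, assuming exact-match scoring:

```python
# Sketch of micro F1 between an annotator's entity mentions and the gold set.
# Entities are (span, label) tuples; the examples are invented.

def micro_f1(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)      # exact span+label matches
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {((0, 2), "disease"), ((5, 6), "treatment"), ((9, 9), "anatomy")}
pred = {((0, 2), "disease"), ((5, 6), "diagnostic")}
print(round(micro_f1(gold, pred), 2))  # tp=1, P=0.5, R=1/3, F1=0.4
```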
### 2.3 Entity and relation extraction model
This section describes the architecture of the Princeton University Relation Extraction (PURE) system (Zhong and Chen, 2021), which we used and adapted for Danish clinical NER. It further describes the dataset used and the training of the models.
"is treated with". Orange: "has location". Grey: "has result". (B) Attributes. Anatomy (dashed lines) takes no attributes. Other clinical events must take one attribute. Results only take prior or current attributes.
|
| 170 |
+
|
| 171 |
+

|
| 172 |
+
|
| 173 |
+
Figure 1: (A) Clinical events and relations between them. Symptoms include abnormal findings. Anatomies include body fluids and excrements. Diagnostics include measurements and tests. Blue:
|
| 174 |
+
|
| 175 |
+

Figure 2: Annotator training process. Figure inspired by Sun et al. (2013).

Figure 3: Example of annotated paragraph. % signifies that no attribute could be assigned to the clinical event per the annotation scheme.
#### 2.3.1 Model architecture
PURE is a deep learning NER model based on a transformer structure. The model has separate entity and relation extraction parts.

For entity extraction, the model takes as input all possible text spans up to a maximum length. A transformer extracts contextual word embeddings for the start and end token of each span, which are concatenated with a learned span width embedding and classified by a feedforward network.

For relation extraction, the text for each candidate pair of entities is passed through a transformer with inserted entity start and end marker tokens for the subject and object entity, also indicating the entity type. The concatenation of the start marker tokens for the candidate subject and object entities is classified by a feedforward neural network.
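The span representation used for entity extraction can be sketched as below: start- and end-token embeddings are concatenated with a span-width embedding before classification. The embedding dimensions and values here are invented toy stand-ins for the transformer's contextual embeddings:

```python
import random

# Sketch of a PURE-style span representation. Assumptions: toy 4-dim
# "contextual embeddings" per token and a fixed width-embedding table
# stand in for the learned components described above.

random.seed(0)
DIM = 4
tokens = ["patient", "has", "severe", "headache"]
embeddings = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in tokens]
width_embedding = {w: [0.1 * w] * 2 for w in range(1, 9)}  # widths 1..8

def span_representation(start, end):
    # Concatenate start-token and end-token embeddings with the (here:
    # fixed) span-width embedding; a feedforward net would classify this.
    width = end - start + 1
    return embeddings[start] + embeddings[end] + width_embedding[width]

rep = span_representation(2, 3)  # span "severe headache"
print(len(rep))  # 4 + 4 + 2 = 10
```

With a maximum span length of 8 (as used in Section 2.3.3), a sentence of n tokens yields on the order of 8n candidate spans, each classified independently.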

Figure 4: (A) Classification of clinical events from start and end tokens of span. Span width embedding not depicted. (B) Classification of attribute using clinical event marker tokens. (C) Classification of relation using subject/object and clinical event marker tokens. Figure inspired by Zhong and Chen (2021).
We used PURE's entity extraction approach for clinical events and its relation extraction approach for relations between clinical events.

For attributes, we used our own approach, adapted from the PURE relation extraction approach. We inserted clinical event start and end marker tokens, passed all tokens through a transformer, concatenated the start and end marker tokens, and classified the attribute using a feedforward network. The marker tokens were used for classification instead of the word(s) forming the clinical event to guide the model to attend to the context rather than the specific word, the context being the important factor in attribute classification. Additionally, enriching the input with the type of the clinical event could guide the model if attributes were described differently for different clinical events.

Figure 4 shows the three types of extraction tasks.
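The marker-token insertion described above can be sketched in a few lines (the bracketed marker strings are hypothetical; in practice typed marker tokens are added to the transformer vocabulary):

```python
def insert_event_markers(tokens, start, end, event_type):
    """Wrap the clinical event span tokens[start:end + 1] in typed start/end
    marker tokens; the transformer outputs at the two marker positions are
    concatenated and fed to the attribute classification head."""
    open_marker = f"[{event_type}]"      # hypothetical marker format
    close_marker = f"[/{event_type}]"
    return (tokens[:start] + [open_marker] + tokens[start:end + 1]
            + [close_marker] + tokens[end + 1:])
```

Because the markers carry the clinical event type, the classifier sees both the span boundaries and the type without depending on the surface form of the event itself.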
#### 2.3.2 Datasets
Table 3 shows the number of clinical events, attributes, and relations by type in the train, validation, and test sets. The dataset had a total of 11,607 paragraphs, each containing a varying number of clinical events, attributes, and relations. On average, each paragraph contained 4.7 clinical events, 3.6 attributes, and 1.3 relations. We split the paragraphs into train, validation, and test sets with an approximate 80%-10%-10% ratio for each type of clinical event, attribute, and relation. The sets were unbalanced with respect to entity and relation types; e.g., the attribute training set contained 23,217 current attributes but only 480 non-patient attributes. All datasets were in the JSON format used by PURE (see Zhong and Chen (2021)).
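A simplified sketch of such a paragraph-level split is shown below. Note this is an assumption-laden simplification: the paper's actual split also balances the 80/10/10 ratio per type of clinical event, attribute, and relation, which a plain random split does not guarantee; the function name and seed are illustrative.

```python
import random

def split_80_10_10(paragraphs, seed=42):
    """Shuffle paragraphs and cut them into ~80%/10%/10%
    train/validation/test sets (random split; does not balance per type)."""
    idx = list(range(len(paragraphs)))
    random.Random(seed).shuffle(idx)
    n = len(idx)
    n_test = n // 10
    n_val = n // 10
    test = [paragraphs[i] for i in idx[:n_test]]
    val = [paragraphs[i] for i in idx[n_test:n_test + n_val]]
    train = [paragraphs[i] for i in idx[n_test + n_val:]]
    return train, val, test
```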
#### 2.3.3 Training

When training the clinical event extraction model, we used a Danish Clinical ELECTRA pretrained on the narrative text from 299,718 EHRs from Odense University Hospital as the transformer base (Pedersen et al., 2022). The model had ~13M parameters and consisted of 12 transformer layers with 4 attention heads. We used a dropout of 0.1 after the last ELECTRA hidden-layer output. We tested classification heads with two hidden layers of varying size, each followed by a dropout of 0.2 and a ReLU activation function. We used a maximum span length of 8 and a train batch size of 32. We trained for 100 epochs using the AdamW optimizer with a learning rate of 1e-5 for the transformer layers and 1e-4 for the classification head, and a warm-up proportion of 0.1.
When training each of the models for extracting attributes and relations, we used the same transformer base with a normalisation layer and a dropout of 0.1 after the concatenation of tokens. We tested classification heads with two hidden layers of varying size, each followed by a dropout of 0.2 and a ReLU activation function. We further tested a classification head consisting of only a single classification layer. We used a train batch size of 32 and a maximum sequence length of 128. We trained for 20 epochs using the AdamW optimizer with a learning rate of 2e-5 and a warm-up proportion of 0.1.
We modified the training method of PURE to guide the models towards equal performance on all classes. We used a weighted loss function to counteract the unbalanced dataset (experiment in Appendix B). Class weights were calculated for the training of each model using the default formula in Scikit-learn (Pedregosa et al., 2011):

$$
w_{x} = \frac{n_{\text{samples}}}{n_{\text{classes}} \cdot n_{x}} \tag{1}
$$

where $x$ is the class, $n_{\text{samples}}$ is the total number of samples, and $n_{\text{classes}}$ is the number of classes. The negative class, i.e. samples not to be given any label by the model, was given a weight of 1.
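Equation 1 combined with the fixed negative-class weight can be sketched as follows (the negative label name "O" is an assumption for illustration):

```python
from collections import Counter

def class_weights(labels, negative_label="O"):
    """w_x = n_samples / (n_classes * n_x), matching Scikit-learn's
    'balanced' heuristic; the negative class is then pinned to weight 1."""
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    weights = {c: n_samples / (n_classes * n_c) for c, n_c in counts.items()}
    if negative_label in weights:
        weights[negative_label] = 1.0
    return weights
```

Rare classes receive weights above 1 and frequent classes weights below 1, so each class contributes roughly equally to the weighted loss.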
To further enforce equal performance on all classes, we chose the best model for each of the clinical event, attribute, and relation extraction tasks as the model iteration with the best macro F1 on the validation set, rather than the micro F1 standard of PURE (experiment in Appendix B). The negative class was excluded when calculating the F1. We only trained the attribute and relation models to make classifications that were allowed for the connected clinical events according to the annotation scheme. Appendix C shows the results of the hyperparameter search. We report the micro and macro recall, precision, and F1 for the best models on the test set.
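Macro F1 with the negative class excluded, as used above for model selection, can be sketched as follows (label names are illustrative):

```python
def macro_f1(gold, pred, negative="O"):
    """Unweighted mean of per-class F1 over all labels except the negative
    class, so rare classes count as much as frequent ones."""
    classes = sorted((set(gold) | set(pred)) - {negative})
    f1s = []
    for c in classes:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0
```

Unlike micro F1, which pools all decisions, this average lets a rare class such as non-patient pull the selection criterion down as much as a frequent class such as current.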
## 3 Results

This section presents the agreement of the annotators on the gold set and the results of the Danish clinical NER models.

### 3.1 Annotation
Table 4 shows the annotators' micro F1 performance on the gold set. For clinical events, it ranged 83.71%-91.24% (average 85.62%) for overlapping matches, and 74.12%-85.15% (average 77.67%) for exact matches. For attributes, it ranged 79.21%-86.19% (average 81.71%), and for relations 71.28%-90.06% (average 77.79%).
### 3.2 Entity and relation extraction model

The models that had the best validation performance in the hyperparameter search were:

- A clinical event extraction model with two hidden layers of size 450 in the classification head.
- An attribute extraction model with a single classification layer.
- A relation extraction model with two hidden layers of size 150 in the classification head.
<table><tr><td/><td>Train (% of row total)</td><td>Validation (% of row total)</td><td>Test (% of row total)</td><td>Total (% of column total)</td></tr><tr><td>Paragraphs</td><td>9,687 (83%)</td><td>960 (8%)</td><td>960 (8%)</td><td>11,607 (100%)</td></tr><tr><td colspan="5">Clinical events</td></tr><tr><td>Diseases</td><td>2,033 (78%)</td><td>295 (11%)</td><td>272 (10%)</td><td>2,600 (5%)</td></tr><tr><td>Symptoms</td><td>11,937 (80%)</td><td>1,455 (10%)</td><td>1,571 (10%)</td><td>14,963 (27%)</td></tr><tr><td>Diagnostics</td><td>8,921 (80%)</td><td>1,095 (10%)</td><td>1,194 (11%)</td><td>11,210 (21%)</td></tr><tr><td>Treatments</td><td>6,918 (79%)</td><td>911 (10%)</td><td>882 (10%)</td><td>8,711 (16%)</td></tr><tr><td>Anatomies</td><td>10,172 (80%)</td><td>1,227 (10%)</td><td>1,278 (10%)</td><td>12,677 (23%)</td></tr><tr><td>Results</td><td>3,522 (79%)</td><td>473 (11%)</td><td>475 (11%)</td><td>4,470 (8%)</td></tr><tr><td>TOTAL</td><td>43,503 (80%)</td><td>5,456 (10%)</td><td>5,672 (10%)</td><td>54,631 (100%)</td></tr><tr><td colspan="5">Attributes</td></tr><tr><td>Prior</td><td>2,028 (80%)</td><td>237 (9%)</td><td>283 (11%)</td><td>2,548 (6%)</td></tr><tr><td>Current</td><td>23,217 (79%)</td><td>3,021 (10%)</td><td>3,109 (11%)</td><td>29,347 (70%)</td></tr><tr><td>Future</td><td>1,237 (79%)</td><td>161 (10%)</td><td>160 (10%)</td><td>1,558 (4%)</td></tr><tr><td>Doubt</td><td>2,479 (82%)</td><td>263 (9%)</td><td>289 (10%)</td><td>3,031 (7%)</td></tr><tr><td>Negation</td><td>3,890 (80%)</td><td>496 (10%)</td><td>500 (10%)</td><td>4,886 (12%)</td></tr><tr><td>Non-patient</td><td>480 (82%)</td><td>51 (9%)</td><td>53 (9%)</td><td>584 (1%)</td></tr><tr><td>TOTAL</td><td>33,331 (79%)</td><td>4,229 (10%)</td><td>4,394 (10%)</td><td>41,954 (100%)</td></tr><tr><td colspan="5">Relations</td></tr><tr><td>is treated with</td><td>1,485 (80%)</td><td>175 (9%)</td><td>197 (11%)</td><td>1,857 (13%)</td></tr><tr><td>has location</td><td>6,501 (80%)</td><td>779 (10%)</td><td>823 (10%)</td><td>8,103 (55%)</td></tr><tr><td>has result</td><td>3,652 (79%)</td><td>499 (11%)</td><td>493 (11%)</td><td>4,644 (32%)</td></tr><tr><td>TOTAL</td><td>11,638 (80%)</td><td>1,453 (10%)</td><td>1,513 (10%)</td><td>14,604 (100%)</td></tr></table>
Table 3: Composition of the train, validation and test sets by type of clinical event, attribute, and relation.
<table><tr><td>Annotator</td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td></tr><tr><td/><td colspan="6">Overlap match, micro F1%</td></tr><tr><td>Clinical event</td><td>91.24</td><td>84.22</td><td>84.41</td><td>85.71</td><td>84.43</td><td>83.71</td></tr><tr><td>Attribute</td><td>86.19</td><td>83.06</td><td>79.21</td><td>81.29</td><td>79.75</td><td>80.75</td></tr><tr><td>Relation</td><td>90.06</td><td>76.97</td><td>75.60</td><td>77.01</td><td>71.28</td><td>75.84</td></tr><tr><td/><td colspan="6">Exact match, micro F1%</td></tr><tr><td>Clinical event</td><td>85.15</td><td>76.08</td><td>76.29</td><td>78.69</td><td>74.12</td><td>75.71</td></tr></table>
Table 4: The anonymised annotators' performance on the gold set. Exact match: a match is defined as the exact tokens annotated in the gold set with the same label. Overlap match: a match is defined as a minimum of one token overlapping with the gold set annotation of the same label. Only an overlap match F1 is calculated for attributes and relations, as evaluating an exact match would propagate the potential error in the span of the clinical event to which the attribute or relation is connected.
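The two match criteria can be sketched on (start, end, label) spans with inclusive token indices (the tuple representation is illustrative):

```python
def exact_match(gold, pred):
    """Same label and exactly the same token boundaries."""
    return gold == pred

def overlap_match(gold, pred):
    """Same label and at least one shared token; spans are
    (start, end, label) tuples with inclusive token indices."""
    g_start, g_end, g_label = gold
    p_start, p_end, p_label = pred
    return g_label == p_label and max(g_start, p_start) <= min(g_end, p_end)
```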
<table><tr><td/><td colspan="3">Micro</td><td colspan="3">Macro</td></tr><tr><td/><td>R%</td><td>P%</td><td>F1%</td><td>R%</td><td>P%</td><td>F1%</td></tr><tr><td/><td colspan="6">Overlap match</td></tr><tr><td>Clinical events</td><td>66.29</td><td>77.31</td><td>71.38</td><td>64.88</td><td>72.60</td><td>68.20</td></tr><tr><td/><td colspan="6">Exact match</td></tr><tr><td>Clinical events</td><td>60.97</td><td>65.64</td><td>63.22</td><td>59.84</td><td>61.30</td><td>60.05</td></tr><tr><td>Attributes</td><td>66.04</td><td>66.04</td><td>66.04</td><td>51.60</td><td>42.64</td><td>44.85</td></tr><tr><td>Relations</td><td>75.88</td><td>72.66</td><td>74.23</td><td>74.74</td><td>67.85</td><td>70.64</td></tr></table>
Table 5: Performance of the best clinical event, attribute, and relation extraction models on the test set. Attributes and relations are only reported with an exact match as the models do not consider the span of the clinical event from which the attribute or relation is classified. R: Recall. P: Precision.
Table 5 shows the performance of the best models on the test set. Clinical events were extracted with an exact micro F1 of 63.22% and macro F1 of 60.05%, attributes with a micro F1 of 66.04% and macro F1 of 44.85%, and relations with a micro F1 of 74.23% and macro F1 of 70.64%. The negative class was excluded when calculating the recall, precision, and F1 scores.
Figure 5 shows the confusion matrices of performance on clinical events, attributes, and relations. The confusion matrices include the clinical events and relations that were not extracted or were falsely extracted by the model ('O').

The model for clinical event extraction performed best on anatomies (69%) and worst on results (53%). 1,568 spans were falsely extracted as a clinical event, with symptoms being the most frequent (21%). The model for attribute extraction performed best on negations (84%) and worst on non-patient (23%). The model for relation extraction performed best on "has result" (93%) and worst on "is treated with" (62%). 432 false relations were extracted, of which "has location" was the most frequent misclassification (45%).
Figure 5: Confusion matrices of performance on (A) clinical events, (B) attributes, and (C) relations. 'O' counts the clinical events and relations that were not extracted or were falsely extracted by the model.

## 4 Discussion and limitations

This paper presented a methodology for developing a dataset for Danish clinical NER. It presented an annotation scheme for annotating all clinical events, their attributes, and relations that are relevant for the medical history. The dataset included text paragraphs from Danish EHRs spanning multiple departments and note types.
We trained and adapted PURE NER deep learning models to extract clinical events (overlap match macro F1 68.20%; exact match macro F1 60.05%), attributes of clinical events (macro F1 44.85%), and relations between clinical events (macro F1 70.64%). The results are promising for Danish clinical NER but need improvement. A discussion of possible improvements to the methodology, limitations, and future work is provided below.
The clinical event extraction model had similar performance on all classes, with accuracies between 53% (results) and 69% (anatomies). There was little contamination between classes, as most errors were caused by failure to extract or false extraction of a clinical event. There was some contamination between symptoms and diseases, with 12% of diseases being classified as symptoms and 5% of symptoms being classified as diseases. This supports claims by annotators that diseases and symptoms are in some cases difficult to differentiate, and that extra attention must be given to differentiating these in the annotation guidelines.

The attribute extraction model had large differences in performance, with accuracies between 23% (non-patient) and 84% (negation). There were more misclassifications of the non-patient attribute as doubt (40%) than correct classifications. The future and doubt attributes had significant contamination between them, with 25% and 11% misclassifications as the other class, respectively. The many misclassifications between non-patient and doubt attributes, and especially between future and doubt attributes, could indicate that the model would improve if the non-patient, doubt, and future attributes were merged into a single class of uncertain attributes. This would most likely not significantly harm the usefulness of the model to MDs.
The fact that more prior attributes were misclassified as current (41%) than correctly classified (36%) likewise indicates that these two attributes could be merged into a single class of clinical events that occurred. This would, however, decrease the usefulness of the model, as it is important for MDs reviewing the medical history to know whether a clinical event is prior or current.
The relation model extracted 93% of the "has result" relations, and 62% and 69% of the "is treated with" and "has location" relations, respectively. The differences are likely caused by the fact that the "has result" relation only connects diagnostics to results, while the two other relations each cover three different one-way relationships.
In this paper, we only explored one type of NER model and tested a limited set of architectures and hyperparameters. Future work could include testing other architectures and enriching the model input with more information, e.g. the output of a text parser, which could help differentiate attributes dealing with the time aspect. The six annotators had an average micro F1 (overlap match) of 85.62%, 81.71%, and 77.79% for clinical events, attributes, and relations, respectively. Merging certain attributes and putting more emphasis on the differences between symptoms and diseases could increase these scores.
The Danish clinical NER dataset is not made publicly available because it contains sensitive information. We advise interested researchers to contact us about sharing possibilities.
## 5 Conclusions

This paper presented a methodology and annotation scheme for developing the first Danish clinical NER dataset. The corpus consists of 11,607 paragraphs annotated for six entity types, six attributes, and three relations. The corpus was used to fine-tune language models, which showed promising results for classifying the entities, attributes, and relations of the dataset.
## References
Steven Bethard, Guergana Savova, Wei-Te Chen, Leon Derczynski, James Pustejovsky, and Marc Verhagen. 2016. SemEval-2016 task 12: Clinical TempEval. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1052-1062.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations. https://openreview.net/forum?id=r1xMH1BtvB
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423
Jannik S Pedersen, Martin S Laursen, Cristina Soguero-Ruiz, Thiusius R Savarimuthu, Rasmus Søgaard Hansen, and Pernille J Vinholt. 2022. Domain over size: Clinical ELECTRA surpasses general BERT for bleeding site classification in the free text of electronic health records. In 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), pages 1-4. IEEE.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12:2825-2830.
Ergin Soysal, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2017. CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines. Journal of the American Medical Informatics Association, 25(3):331-336. https://doi.org/10.1093/jamia/ocx132
Weiyi Sun, Anna Rumshisky, and Ozlem Uzuner. 2013. Annotating temporal information in clinical narratives. Journal of Biomedical Informatics, 46:S5-S12. Supplement: 2012 i2b2 NLP Challenge on Temporal Relations in Clinical Data. https://doi.org/10.1016/j.jbi.2013.07.004
Özlem Uzuner, Yuan Luo, and Peter Szolovits. 2007. Evaluating the state-of-the-art in automatic de-identification. Journal of the American Medical Informatics Association, 14(5):550-563.
Özlem Uzuner, Imre Solti, and Eithon Cadag. 2010. Extracting medication information from clinical text. Journal of the American Medical Informatics Association, 17(5):514-518.
Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 50-61.
## Appendices
## A Annotation guidelines

### A.1 Clinical events

#### A.1.1 Disease

Contains all diseases, including diseases that could be considered a result of a Diagnostic.
#### A.1.2 Symptom

Includes all symptoms and abnormal findings. Findings that are not abnormal should not be annotated. However, a negation of an abnormal finding should be annotated because the abnormal finding is mentioned even though it is not present. For example, "fracture" should be annotated in the sentence "there is no sign of fracture."

If there is a negation of a non-abnormal finding, it should be annotated in the entity. For example, "cannot hear" is annotated in the sentence "patient cannot hear anything."

In the sentence "no symptoms," the word "symptoms" should not be annotated as a Symptom, as it does not contain any information.

In case a symptom or abnormal finding is found by a Diagnostic, there may be a coincidence with the Result entity. Here, it is annotated as Symptom if the entity can provide sufficient meaning alone, for example "cyst" or "tumour."

If the Symptom cannot stand alone and one needs to know which Diagnostic was carried out in order to understand the result, the entity should instead be annotated as Result and have a "has result" relation from the Diagnostic entity. For example, this applies to "Temp: 24 C" and "Stix: 3+": "Temp" and "Stix" are annotated as Diagnostic with a "has result" relation to the Results "24 C" and "3+."
#### A.1.3 Result

Includes all results of a Diagnostic, e.g. values and blood test results.

A Result cannot stand on its own; a relation from the Diagnostic is needed for it to make sense. These can be entities like "stable", "positive", "negative", "24 C" or "3+".

Typically, this entity will appear in sentence structures with a colon: "Diagnostic: Result". Note that the two entities are mentioned very close to each other in the text, in this case only with a colon in between. An example could be "Temp: 24 C" or "Stix: 3+": "Temp" and "Stix" are annotated as Diagnostics with a "has result" relation to the Results "24 C" and "3+".

Entities that can instead be annotated as Symptom will typically be mentioned further away from, or completely lack, a Diagnostic, as a Symptom can stand alone and make sense.

See also the description for Symptom.
#### A.1.4 Diagnostic

Includes all diagnostics, measurements, and tests. This can include CT scans, blood tests, MR scans, and recordings of a newborn's length, temperature, etc.

Note that "blood sample results" and "radiology description" are not a Diagnostic and should not be annotated.

If KAD is mentioned along with a volume, e.g. "KAD emptied of 200 mL," it is marked as Diagnostic - Result. If there is no volume specified, KAD is annotated as Treatment.
#### A.1.5 Treatment

Includes all forms of treatment, including medication.

To annotate entities as concisely as possible, for example in the sentence "good effect of 2.5 mg morphine IV," only "morphine" should be annotated as Treatment.

In the sentence "treated for xxx," the word "treatment" should not be annotated as Treatment, as it does not contain any information.

If KAD is mentioned without a volume indication, it should be annotated as Treatment. If KAD is mentioned with a volume, for example "KAD emptied of 200 mL," it should be annotated as "Diagnostic - Result."
#### A.1.6 Anatomy

Includes all mentions of anatomies and things from the body (blood, feces, urine, sweat, etc.). Typically used to indicate the location of a Disease or Symptom, a Diagnostic, or a Treatment. Examples: "brain", "left foot" or "duodenum".

When an Anatomy is described by an adjacent word, for example "left", this should be included in the entity.

Remember to also annotate Anatomy entities that are not linked to other entities.
### A.2 Attributes

#### A.2.1 Current

The entity is either present, carried out, or current. If medication is prescribed to the patient, this should also be marked as "Treatment - Current", as it can be assumed that the treatment will start and it may be the last time it is mentioned in the journal. On the other hand, "Scheduling a CT for Tuesday." should be marked as "Future", as it will be described in a future medical note, for example with the result.
#### A.2.2 Negation

The entity is not present. For example, if it is mentioned that the patient does not have a fracture, the fracture should be marked as Symptom - Negation. Note that the word "not" should not be part of the marked entity. However, if there is a negation of a normal finding, it should be annotated as such. For example, "cannot hear" in the sentence "patient cannot hear anything" is annotated as Symptom - Current.
#### A.2.3 Prior

If the entity refers to a previous case, i.e., a previous hospitalisation, or if it happened a long time ago. For example, it should be annotated as a prior Treatment when a cast or drain is removed, as the treatment is finished. However, if a CT scan from the previous day is mentioned, it should be annotated as Current.
#### A.2.4 Future

Everything that takes place in the future. For example, cancer is annotated as Disease - Future if it is mentioned that "there is a risk of cancer if you use tanning beds too often."

It is marked as Diagnostic - Future if an MRI scan is planned for the next day. However, if it is written "the treatment with xxx starts" or "rp. xxx", it should be marked as Treatment - Current, as it is assumed that the treatment will certainly happen.

Also includes references to possible future treatments.
#### A.2.5 Doubt

If the patient might have a disease that has not yet been confirmed, or if a Treatment should be given provided that certain things change.

The difference between Doubt and Future is that Future is more certain (it is going to happen) while Doubt is more uncertain or conditional.
#### A.2.6 Non-patient

If an entity does not have a direct connection to the patient. This can occur when a general letter is sent out regarding cancer screening; cancer should then be annotated as Disease - Non-patient. If it is mentioned that the patient's mother had a certain disease, it should also be annotated in this way.
### A.3 Relations

When entities are annotated, the relationships between entities can be annotated. This is done by pulling the "From entity" over to the "To entity". The direction of the relationship is important. Therefore, pay attention to the name of the relationship and, if necessary, read "Entity - Relation - Entity" out loud and listen for whether it makes sense or the arrow needs to be reversed. CLAMP will show which relationships can be annotated for the pair being drawn between.
#### has location

From entities: Disease, Symptom, Diagnostic.

To entities: Anatomy.
#### has result

From entities: Diagnostic.

To entities: Result.
#### is treated with

From entities: Disease, Symptom, Anatomy.

To entities: Treatment.

The "is treated with" relation links the entities Disease, Symptom, and Anatomy to a Treatment. In some cases, sentences describing a required treatment could be linked to both an Anatomy and a Treatment entity. In this case, the Treatment should be linked to the Symptom instead of the Anatomy. You should only link the Anatomy to the Treatment using the "is treated with" relation if the Treatment cannot be linked to anything else. Example: "Left knee skin scraping is treated with plaster." Annotation: skin scraping - "is treated with" - plaster.
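The allowed relation directions above can be captured in a small lookup table. The sketch below is our own illustration of the guideline's schema, not part of the CLAMP tooling:

```python
# Allowed (from, to) entity types per relation, per the guideline above.
ALLOWED_RELATIONS = {
    "has location": ({"Disease", "Symptom", "Diagnostic"}, {"Anatomy"}),
    "has result": ({"Diagnostic"}, {"Result"}),
    "is treated with": ({"Disease", "Symptom", "Anatomy"}, {"Treatment"}),
}

def relation_is_valid(relation, from_entity, to_entity):
    """Return True if `relation` may be drawn from `from_entity` to `to_entity`."""
    sources, targets = ALLOWED_RELATIONS[relation]
    return from_entity in sources and to_entity in targets

# "skin scraping" (Symptom) - "is treated with" - "plaster" (Treatment) is valid;
# reversing the arrow is not.
```

Reading the relation "out loud" as Entity - Relation - Entity, as the guideline suggests, corresponds exactly to this directional check.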
### A.4 General notes
It is important not to annotate periods, commas, etc. unless they are part of an abbreviation. For example, in "Patient has cancer," only "cancer" and not "cancer." should be marked. If you double-click a word, CLAMP will only mark the word and not any punctuation next to the word. This can make it a bit troublesome to include periods in abbreviations.
Entities should be annotated as concisely as possible without losing meaning. This means that in the sentence "there are signs of cancer," only "cancer" and not "signs of cancer" should be marked as an entity. If an entity has some describing words next to it, the following rule can be used to decide how much should be annotated. In the sentence "pain in the front of the arm," only "arm" is marked as Anatomy since "front" and "arm" are connected through the word "of." In the sentence "pain in the left arm," "left arm" is marked as Anatomy since there are no words between "left" and "arm". In sentences describing a prescription of medication, only the name is marked as Treatment, and not, for example, the quantity indication or the number of days.
Entities may not overlap with each other.
<table><tr><td rowspan="2">Evaluation metric</td><td rowspan="2">Loss</td><td colspan="3">Micro</td><td colspan="3">Macro</td></tr><tr><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td></tr><tr><td rowspan="2">Micro F1</td><td>Unweighted</td><td>0.79</td><td>0.79</td><td>0.79</td><td>0.38</td><td>0.41</td><td>0.39</td></tr><tr><td>Weighted</td><td>0.62</td><td>0.62</td><td>0.62</td><td>0.45</td><td>0.33</td><td>0.34</td></tr><tr><td rowspan="2">Macro F1</td><td>Unweighted</td><td>0.77</td><td>0.77</td><td>0.77</td><td>0.42</td><td>0.42</td><td>0.41</td></tr><tr><td>Weighted</td><td>0.60</td><td>0.60</td><td>0.60</td><td>0.51</td><td>0.42</td><td>0.44</td></tr></table>
Table 6: Micro and macro recall, precision, and F1 score on the validation set when selecting the best iteration of the model based on micro and macro F1 score with unweighted and weighted loss. R: Recall. P: Precision.
## B Selection of loss and evaluation metric
This appendix details experiments performed to test whether to use unweighted or weighted loss and whether to select the best model iteration using micro or macro F1.
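As a reminder of why the two metrics can disagree on unbalanced data, here is a minimal pure-Python computation of both (the labels are an invented toy example, not the paper's data):

```python
from collections import Counter

def micro_macro_f1(y_true, y_pred):
    # Per-class true positives, false positives, and false negatives.
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    classes = set(y_true) | set(y_pred)
    # Micro F1 pools the counts over classes, so frequent classes dominate.
    micro = 2 * sum(tp.values()) / (
        2 * sum(tp.values()) + sum(fp.values()) + sum(fn.values())
    )
    # Macro F1 averages per-class F1, so rare classes weigh as much as common ones.
    def class_f1(c):
        denom = 2 * tp[c] + fp[c] + fn[c]
        return 2 * tp[c] / denom if denom else 0.0
    macro = sum(class_f1(c) for c in classes) / len(classes)
    return micro, macro

# Skewed toy data: the majority class is mostly right, a rare class is missed
# entirely, so micro F1 looks much better than macro F1.
micro, macro = micro_macro_f1([0] * 6 + [1, 1, 2], [0, 0, 0, 0, 0, 1, 1, 0, 0])
```

Selecting model iterations by micro F1 can therefore favour models that ignore rare classes, which motivates the comparison in this appendix.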
The attribute extraction task was selected for testing the loss and evaluation metric because it was the most unbalanced. We ran the test with a Danish clinical ELECTRA transformer base with normalisation and a dropout of 0.1 after the concatenation of tokens, and a classification head with two hidden layers of size 75, each followed by a dropout of 0.2 and a ReLU activation function. We used a train batch size of 32 and a maximum sequence length of 128. We trained for 20 epochs using the AdamW optimizer with learning rate 2e-5 and a warm-up proportion of 0.1.
Class weights were calculated for the training of each model using the default formula in Scikit-learn (Pedregosa et al., 2011):
$$
w_{x} = \frac{n_{\text{samples}}}{n_{\text{classes}} \cdot n_{x}} \tag{2}
$$
where $x$ is the class, $n_{\text{samples}}$ is the total number of samples, $n_{\text{classes}}$ is the number of classes, and $n_{x}$ is the number of samples in class $x$. The negative class, i.e. samples not to be given any label by the model, was given a weight of 1.
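Equation 2 can be reproduced in a few lines. This is an illustration of Scikit-learn's "balanced" class-weight heuristic, not the paper's training code:

```python
from collections import Counter

def class_weights(labels):
    # w_x = n_samples / (n_classes * n_x), Scikit-learn's "balanced" heuristic.
    counts = Counter(labels)
    n_samples, n_classes = len(labels), len(counts)
    return {c: n_samples / (n_classes * n_c) for c, n_c in counts.items()}

# Skewed toy labels: the rare class receives the larger weight.
weights = class_weights(["current"] * 6 + ["doubt"] * 2)
# weights["current"] = 8 / (2 * 6), weights["doubt"] = 8 / (2 * 2) = 2.0
```

The resulting weights scale each class's loss contribution inversely to its frequency, which is what counteracts the imbalance reported in Table 6.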
Table 6 shows the micro and macro recall, precision, and F1 score on the validation set when selecting the best iteration of the model based on micro and macro F1 score with unweighted and weighted loss.
Figure 6 shows that using the micro F1 to select the best iteration of the model resulted in some classes being practically excluded during classification. Using the macro F1 to select the best model iteration and training with a weighted loss gave the most equal performance on all classes.
<table><tr><td/><td>Classification head hidden layers</td><td>Validation Exact F1 %</td></tr><tr><td rowspan="5">Clinical event</td><td>2x75</td><td>58.49</td></tr><tr><td>2x150</td><td>59.82</td></tr><tr><td>2x300</td><td>60.68</td></tr><tr><td>2x450</td><td>61.34</td></tr><tr><td>2x600</td><td>60.91</td></tr><tr><td rowspan="5">Attribute</td><td>None</td><td>48.01</td></tr><tr><td>2x50</td><td>43.20</td></tr><tr><td>2x75</td><td>43.85</td></tr><tr><td>2x150</td><td>44.10</td></tr><tr><td>2x300</td><td>44.32</td></tr><tr><td rowspan="4">Relation</td><td>None</td><td>66.15</td></tr><tr><td>2x75</td><td>68.39</td></tr><tr><td>2x150</td><td>68.85</td></tr><tr><td>2x300</td><td>67.39</td></tr></table>
Table 7: Results of the hyperparameter search.
## C Hyperparameter search
Table 7 shows the results of the hyperparameter search.
Figure 6: Confusion matrices showing the performance of the models chosen based on (A) micro F1, (B) macro F1, (C) micro F1 trained with weighted loss, and (D) macro F1 trained with weighted loss.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wEJaCIkgLG/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,748 @@
§ DANISH CLINICAL NAMED ENTITY RECOGNITION AND RELATION EXTRACTION
§ ABSTRACT
Electronic health records contain important information regarding the patients' medical history but much of this information is stored in unstructured narrative text. This paper presents the first Danish clinical named entity recognition and relation extraction dataset for extraction of six types of clinical events, six types of attributes, and three types of relations. The dataset contains 11,607 paragraphs from Danish electronic health records containing 54,631 clinical events, 41,954 attributes, and 14,604 relations. We detail the methodology of developing the annotation scheme, and train a transformer-based architecture on the developed dataset with macro F1 performance of 60.05%, 44.85%, and 70.64% for clinical events, attributes, and relations, respectively.
§ 1 INTRODUCTION
Electronic health records (EHR) contain important information regarding the patients' medical history including diagnoses, medications, treatment plans, allergies, and test results. However, much of this information is stored in unstructured narrative text. While this information could be used to guide diagnostic decision making and treatment plans, the unstructured format makes it infeasible to fully exploit in clinical practice and research.
Natural language processing (NLP) algorithms could be used to transform the unstructured narrative text of the EHR into structured information and give medical doctors (MD) a fast overview of even a medical history spanning multiple years. NLP models' ability to process and extract information from written text keeps improving with benchmark-breaking models being published on a regular basis. For example, transformer-based models such as GPT-3 (Brown et al., 2020), BERT (Devlin et al., 2019), and ELECTRA (Clark et al., 2020) have recently shown promising results for many NLP tasks, e.g. named entity recognition and relation extraction (NER). In NER, models are trained to tag words with predefined entities and find the relations between them. In clinical NER, entities such as diseases, treatments, drugs, and tests have been extracted automatically from EHRs. However, many of the developed datasets are only in English and for specific clinical specialities or note types (Uzuner et al., 2007, 2010; Bethard et al., 2016).

This paper describes the methodology for developing the first Danish clinical NER dataset. The dataset consists of text paragraphs from Danish EHRs spanning multiple departments and note types.

First, the paper describes the clinical dataset, the strategy for choosing entities tailored to extract important information from EHRs, and the annotation scheme. Next, we train a transformer-based architecture on the developed NER dataset.
§ 2 METHODS
This section describes the data, annotation scheme, and model used for Danish clinical NER.
§ 2.1 DATA
We extracted 11,607 paragraphs with a length between 11 and 75 words from EHRs from Odense University Hospital in Denmark. Paragraphs were sampled randomly from different EHR note types across every department of the hospital to ensure the data distribution would resemble that of EHRs: 46% were from clinical contacts, 13% primary journals, 10% care data, 3% epicrises, 3% ambulatory care contacts, 2% surgical notes, 2% emergency room journals, and 20% were from 55 different minor EHR note types. Paragraphs were lowercased and anonymised by two of the authors.
<table><tr><td>Clinical event</td><td>Description</td></tr><tr><td>Disease</td><td>A disorder of structure or function, especially one that has a known cause and a distinctive group of symptoms, signs, or anatomical changes. Examples include cancer, influenza, and narcolepsy.</td></tr><tr><td>Symptom</td><td>A symptom is a physical or mental feature which is regarded as indicating a condition of disease, particularly such a feature that is apparent to the patient. We include abnormal findings, which the MD makes when examining the patient objectively, as these sometimes coincide with symptoms, e.g. bruises. Examples include headache, stomach ache, and pain.</td></tr><tr><td>Diagnostic</td><td>Any tool or method concerned with the diagnosis of illnesses or other problems. Includes measurements and tests. Examples include CT scans, blood samples, and temperatures.</td></tr><tr><td>Treatment</td><td>A treatment is any medical care given to a patient for an illness or injury. Examples include medication, plaster, and rehabilitation.</td></tr><tr><td>Anatomy</td><td>Any part of human anatomy. Includes body fluids and excrements. Examples include arms, organs, and blood.</td></tr><tr><td>Result</td><td>All results of diagnostics that do not carry any meaning without being coupled to the diagnostic. Examples include numbers that indicate length, temperature, or volumes. Diseases or symptoms found by diagnostics are annotated as such, e.g. a tumour found by a CT scan.</td></tr></table>

Table 1: Description of clinical events. Descriptions were inspired by the Oxford English Dictionary.
§ 2.2 ANNOTATION
§ 2.2.1 ANNOTATION SCHEME
Two MDs with expert clinical domain knowledge developed the annotation scheme through an iterative process of making annotation rules and testing them.
Annotation rules were made to extract clinically relevant information from the medical history. The focus was for the rules to be as complete as possible, capturing all important information about the medical history, while still being simple for the annotators to use.
We extracted three types of information: clinical events, the attributes of the clinical events, and relations between the clinical events.
Clinical events were: diseases; symptoms, including abnormal findings; diagnostics; treatments; anatomies, including body fluids and excrements; and results. Symptoms and abnormal findings were joined in one as they sometimes coincided. Normal findings were not included as there were so many that they would cloud the visualisation of the history. Table 1 shows all clinical events and their descriptions as defined by the medical experts.

<table><tr><td>Attribute</td><td>Description</td></tr><tr><td>Prior</td><td>Entities that occurred in prior admissions or in the distant past. Includes treatments that are being stopped at that point in time.</td></tr><tr><td>Current</td><td>Entities that occur in the present. Includes prescribed medicine.</td></tr><tr><td>Future</td><td>Entities that occur or might occur in the future, e.g. the risk of skin cancer, or ordering diagnostics for a later day.</td></tr><tr><td>Doubt</td><td>Any entity that is not confirmed. Includes any treatments that might need to be started in the future.</td></tr><tr><td>Negation</td><td>Entities such as diseases or symptoms that are mentioned as not being present.</td></tr><tr><td>Non-patient</td><td>Entities that are not related to the patient in question. One example is the disease history of the patient's relatives.</td></tr></table>

Table 2: Description of attributes.
Clinical events were further described by their attributes. Attributes were: prior; current; future; doubt; negation; and non-patient. All clinical events could take one of the six attributes except anatomies and results. Anatomies did not take any attributes while results could only take a prior or current attribute. Table 2 shows all attributes and their descriptions.
Clinical events could connect to each other in limited ways through one-way relations. Diseases, diagnostics, and symptoms could connect to anatomies through a "has location" relation. Diseases, symptoms, and anatomies could connect to treatments through an "is treated with" relation. Diagnostics could connect to results through a "has result" relation.
Figure 1 shows an overview of the clinical events, attributes, and relations. Appendix A shows the full annotation guidelines with further details and explanations to the annotators.
§ 2.2.2 ANNOTATION PROCESS
Six annotators were recruited for the task. Five were Master of Science in Medicine students and one was an MD.

Figure 2 shows the process of annotator training. It included reading the annotation guide and an iterative process of annotating a learning set of 55 paragraphs (not included in the dataset) followed by error analysis until a final test was made on a set of 98 gold paragraphs annotated by an expert MD. Paragraphs were annotated using the CLAMP software (Soysal et al., 2017). We report the micro F1 of each annotator on the gold set.
Figure 3 shows an example of an annotated paragraph.
§ 2.3 ENTITY AND RELATION EXTRACTION MODEL
This section describes the architecture of the Princeton University Relation Extraction system (PURE) (Zhong and Chen, 2021), which we used and adapted for Danish clinical NER. It further describes the dataset used and the training of the models.

Figure 1: (A) Clinical events and relations between them. Symptoms include abnormal findings. Anatomies include body fluids and excrements. Diagnostics include measurements and tests. Blue: "is treated with". Orange: "has location". Grey: "has result". (B) Attributes. Anatomy (dashed lines) takes no attributes. Other clinical events must take one attribute. Results only take prior or current attributes.

Figure 2: Annotator training process. Figure inspired by Sun et al. (2013).

Figure 3: Example of annotated paragraph. % signifies that no attribute could be assigned to the clinical event per the annotation scheme.
§ 2.3.1 MODEL ARCHITECTURE
PURE is a NER deep learning model based on a transformer structure. The model has separate entity and relation extraction parts. For entity extraction, the model takes as input all possible text spans up to a maximum length. A transformer extracts contextual word embeddings for the start and end token of each span. These are concatenated with a learned span width embedding and classified by a feedforward network.
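The span enumeration step can be sketched as follows. This is a simplified illustration of PURE-style candidate generation, not the released code:

```python
def candidate_spans(n_tokens, max_width=8):
    # Enumerate every contiguous (start, end) token span of up to max_width
    # tokens; these are the candidates the entity classifier scores
    # (the paper uses a maximum span of 8).
    return [
        (start, end)
        for start in range(n_tokens)
        for end in range(start, min(start + max_width, n_tokens))
    ]

# A 5-token paragraph yields 5 + 4 + 3 + 2 + 1 = 15 candidate spans
# when max_width >= 5.
```

Each candidate span is then represented by its start and end token embeddings plus the learned width embedding before classification.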
When extracting relations, for each candidate pair of entities, the text is passed through a transformer with inserted entity start and end marker tokens for the subject and object entity, also indicating the type. The concatenation of the start marker tokens for the candidate subject and object entity is classified by a feedforward neural network.

Figure 4: (A) Classification of clinical events from start and end tokens of span. Span width embedding not depicted. (B) Classification of attribute using clinical event marker tokens. (C) Classification of relation using subject/object and clinical event marker tokens. Figure inspired by Zhong and Chen (2021).
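The marker-token input can be sketched like this (the marker spellings follow the notation in Figure 4; the function itself is our illustration, not PURE's implementation):

```python
def insert_markers(tokens, subject, obj):
    # subject and obj are (start, end, type) token spans with inclusive ends.
    # Typed start/end markers are inserted around each span, e.g. [S:Sy]...[/S:Sy]
    # around the subject and [O:An]...[/O:An] around the object.
    (ss, se, s_type), (os_, oe, o_type) = subject, obj
    out = []
    for i, tok in enumerate(tokens):
        if i == ss:
            out.append(f"[S:{s_type}]")
        if i == os_:
            out.append(f"[O:{o_type}]")
        out.append(tok)
        if i == se:
            out.append(f"[/S:{s_type}]")
        if i == oe:
            out.append(f"[/O:{o_type}]")
    return out

tokens = ["slight", "redness", "in", "the", "left", "breast"]
marked = insert_markers(tokens, (0, 1, "Sy"), (3, 5, "An"))
# -> ['[S:Sy]', 'slight', 'redness', '[/S:Sy]', 'in',
#     '[O:An]', 'the', 'left', 'breast', '[/O:An]']
```

The transformer then contextualises the marked sequence, and the start markers' embeddings are what the relation classifier sees.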
We used PURE's entity extraction approach for clinical events and the relation extraction approach for relations between clinical events.
We used our own approach, adapted from the PURE relation extraction approach, for attributes. We inserted clinical event start and end marker tokens, passed all tokens through a transformer, concatenated the start and end marker tokens, and classified the attribute using a feedforward network. The marker tokens were used for classification instead of the word(s) forming the clinical event to guide the model to look more at the context rather than the specific word, the context being the important factor in attribute classification. Additionally, enriching the input with the type of the clinical event could guide the model if attributes were described differently for different clinical events.

Figure 4 shows the three types of extraction tasks.
§ 2.3.2 DATASETS
Table 3 shows the number of clinical events, attributes, and relations by type in the train, validation, and test sets. The dataset had a total of 11,607 paragraphs, each containing a varying number of clinical events, attributes, and relations. On average, each paragraph contained 4.7 clinical events, 3.6 attributes, and 1.3 relations. We split the paragraphs into train, validation, and test sets for an approximate 80%-10%-10% ratio between each type of clinical event, attribute, and relation. The sets were unbalanced on type of entity or relation, e.g. for the attributes training set, there were 23,217 current and only 480 non-patient attributes. All datasets were in the json format used by PURE (see Zhong and Chen (2021)).
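For concreteness, one paragraph in a PURE-style json line looks roughly like the sketch below. The field layout follows Zhong and Chen's released format; the sentence, spans, and `doc_key` are invented for illustration:

```python
import json

doc = {
    "doc_key": "paragraph-0001",
    "sentences": [["slight", "redness", "in", "the", "left", "breast"]],
    # Each entity is [start, end, label] with inclusive token indices.
    "ner": [[[0, 1, "Symptom"], [3, 5, "Anatomy"]]],
    # Each relation is [subj_start, subj_end, obj_start, obj_end, label].
    "relations": [[[0, 1, 3, 5, "has location"]]],
}
line = json.dumps(doc)  # one json object per line in the dataset files
```

Keeping token indices inclusive and per-sentence matches how the spans are consumed by the span-based entity model.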
§ 2.3.3 TRAINING
When training the clinical event extraction model, we used a Danish Clinical ELECTRA pretrained on the narrative text from 299,718 EHRs from Odense University Hospital as the transformer base (Pedersen et al., 2022). The model had ~13M parameters and consisted of 12 transformer layers with 4 attention heads. We used a dropout of 0.1 after the last ELECTRA hidden layer output. We tested classification heads with two hidden layers of varying size, each followed by a dropout of 0.2 and a ReLU activation function. We used a maximum span of 8 and a train batch size of 32. We trained for 100 epochs using the AdamW optimizer with learning rate 1e-5 for the transformer layers and 1e-4 for the classification head, and a warm-up proportion of 0.1.
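The two-learning-rate setup above (1e-5 for the transformer layers, 1e-4 for the classification head) is commonly realized with optimizer parameter groups. The sketch below mirrors the parameter-group dictionaries an AdamW-style optimizer accepts; the parameter names are placeholders, not the actual model's.

```python
# Hypothetical named parameters; in practice these would come from the
# model, e.g. model.named_parameters() in a deep-learning framework.
named_params = [
    ("electra.layer.0.weight", "..."),
    ("electra.layer.11.weight", "..."),
    ("classifier.hidden.weight", "..."),
]

# One group per learning rate: transformer base vs. classification head.
param_groups = [
    {"params": [p for n, p in named_params if n.startswith("electra.")],
     "lr": 1e-5},
    {"params": [p for n, p in named_params if n.startswith("classifier.")],
     "lr": 1e-4},
]
```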
When training each of the models for extracting attributes and relations, we used the same transformer base with a normalisation layer and a dropout of 0.1 after the concatenation of tokens. We tested classification heads with two hidden layers of varying size, each followed by a dropout of 0.2 and a ReLU activation function. We further tested a classification head consisting only of a single classification layer. We used a train batch size of 32 and a maximum sequence length of 128. We trained for 20 epochs using the AdamW optimizer with learning rate 2e-5 and a warm-up proportion of 0.1.
We modified the training method of PURE to guide the models towards equal performance on all classes. We used a weighted loss function to counteract the unbalanced dataset (experiment in Appendix B). Class weights were calculated for the training of each model using the default formula in Scikit-learn (Pedregosa et al., 2011):
$$
w_{x} = \frac{n_{\text{samples}}}{n_{\text{classes}} \cdot n_{x}} \tag{1}
$$
where $x$ is the class, $n_{\text{samples}}$ is the total number of samples, $n_{\text{classes}}$ is the number of classes, and $n_{x}$ is the number of samples in class $x$. The negative class, i.e. samples not to be given any label by the model, was given a weight of 1.
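Equation 1 can be illustrated with the attribute training counts from Table 3. The exact counts fed to the formula during training are the authors' own; here we simply plug in the published per-class totals for illustration.

```python
# Equation 1: w_x = n_samples / (n_classes * n_x)
def class_weight(n_samples, n_classes, n_x):
    return n_samples / (n_classes * n_x)

n_samples, n_classes = 33_331, 6  # attribute training set, 6 attribute types

w_current = class_weight(n_samples, n_classes, 23_217)   # ~0.24
w_nonpatient = class_weight(n_samples, n_classes, 480)   # ~11.6

# Rare classes receive proportionally larger weights in the loss,
# counteracting the imbalance between e.g. current and non-patient.
```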
To further enforce equal performance on all classes, we chose the best model for each of the clinical event, attribute, and relation extraction tasks as the model iteration with the best macro F1 on the validation set, rather than the micro F1 standard of PURE (experiment in Appendix B). The negative class was excluded when calculating the F1. We only trained the attribute and relation models to make classifications that were allowed for the connected clinical events according to the annotation scheme. Appendix C shows the results of the hyperparameter search. We report the micro and macro recall, precision, and F1 for the best models on the test set.
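The difference between micro and macro F1, which motivates the model selection above, can be sketched with made-up per-class counts (the counts below are purely illustrative, and the negative class is excluded as described):

```python
# Micro vs. macro F1 from per-class (tp, fp, fn) counts.
def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

counts = {  # class -> (tp, fp, fn); no entry for the negative class
    "frequent": (900, 100, 100),
    "rare": (5, 10, 45),
}

# Macro: average the per-class F1 scores.
macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)

# Micro: pool the counts, then compute one F1.
tp, fp, fn = (sum(x) for x in zip(*counts.values()))
micro_f1 = f1(tp, fp, fn)

# Micro F1 is dominated by the frequent class; macro F1 penalizes poor
# rare-class performance, which is why it was used for model selection.
```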
§ 3 RESULTS
This section presents the agreement of the annotators on the gold set and the results of the Danish clinical NER models.
§ 3.1 ANNOTATION
Table 4 shows the annotators' micro F1 performance on the gold set. For clinical events, it ranged from 83.71% to 91.24% (average 85.62%) for overlapping matches, and from 74.12% to 85.15% (average 77.67%) for exact matches. For attributes, it ranged from 79.21% to 86.19% (average 81.71%), and for relations from 71.28% to 90.06% (average 77.79%).
§ 3.2 ENTITY AND RELATION EXTRACTION MODEL
The models that had the best validation performance in the hyperparameter search were:

* A clinical event extraction model with two hidden layers of size 450 in the classification head.

* An attribute extraction model with a single classification layer.

* A relation extraction model with two hidden layers of size 150 in the classification head.
| | Train (% of row total) | Validation (% of row total) | Test (% of row total) | Total (% of column total) |
|---|---|---|---|---|
| Paragraphs | 9,687 (83%) | 960 (8%) | 960 (8%) | 11,607 (100%) |
| **Clinical events** | | | | |
| Diseases | 2,033 (78%) | 295 (11%) | 272 (10%) | 2,600 (5%) |
| Symptoms | 11,937 (80%) | 1,455 (10%) | 1,571 (10%) | 14,963 (27%) |
| Diagnostics | 8,921 (80%) | 1,095 (10%) | 1,194 (11%) | 11,210 (21%) |
| Treatments | 6,918 (79%) | 911 (10%) | 882 (10%) | 8,711 (16%) |
| Anatomies | 10,172 (80%) | 1,227 (10%) | 1,278 (10%) | 12,677 (23%) |
| Results | 3,522 (79%) | 473 (11%) | 475 (11%) | 4,470 (8%) |
| TOTAL | 43,503 (80%) | 5,456 (10%) | 5,672 (10%) | 54,631 (100%) |
| **Attributes** | | | | |
| Prior | 2,028 (80%) | 237 (9%) | 283 (11%) | 2,548 (6%) |
| Current | 23,217 (79%) | 3,021 (10%) | 3,109 (11%) | 29,347 (70%) |
| Future | 1,237 (79%) | 161 (10%) | 160 (10%) | 1,558 (4%) |
| Doubt | 2,479 (82%) | 263 (9%) | 289 (10%) | 3,031 (7%) |
| Negation | 3,890 (80%) | 496 (10%) | 500 (10%) | 4,886 (12%) |
| Non-patient | 480 (82%) | 51 (9%) | 53 (9%) | 584 (1%) |
| TOTAL | 33,331 (79%) | 4,229 (10%) | 4,394 (10%) | 41,954 (100%) |
| **Relations** | | | | |
| is treated with | 1,485 (80%) | 175 (9%) | 197 (11%) | 1,857 (13%) |
| has location | 6,501 (80%) | 779 (10%) | 823 (10%) | 8,103 (55%) |
| has result | 3,652 (79%) | 499 (11%) | 493 (11%) | 4,644 (32%) |
| TOTAL | 11,638 (80%) | 1,453 (10%) | 1,513 (10%) | 14,604 (100%) |

Table 3: Composition of the train, validation and test sets by type of clinical event, attribute, and relation.
| Annotator | A | B | C | D | E | F |
|---|---|---|---|---|---|---|
| **Overlap match, micro F1%** | | | | | | |
| Clinical event | 91.24 | 84.22 | 84.41 | 85.71 | 84.43 | 83.71 |
| Attribute | 86.19 | 83.06 | 79.21 | 81.29 | 79.75 | 80.75 |
| Relation | 90.06 | 76.97 | 75.60 | 77.01 | 71.28 | 75.84 |
| **Exact match, micro F1%** | | | | | | |
| Clinical event | 85.15 | 76.08 | 76.29 | 78.69 | 74.12 | 75.71 |

Table 4: The anonymised annotators' performance on the gold set. Exact match: a match is defined as the exact tokens annotated in the gold set with the same label. Overlap match: a match is defined as minimum one token overlapping with the gold set annotation of the same label. Only an overlap match F1 is calculated for attributes and relations, as evaluating an exact match would propagate the potential error in the span of the clinical event to which the attribute or relation is connected.
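The two matching criteria of Table 4 can be sketched as follows. This is a minimal reading of the definitions in the caption; representing spans as [start, end) token intervals is our own assumption.

```python
# Exact match: same span and same label.
def exact_match(pred, gold):
    return pred == gold

# Overlap match: same label and at least one shared token.
def overlap_match(pred, gold):
    (ps, pe, pl), (gs, ge, gl) = pred, gold
    return pl == gl and max(ps, gs) < min(pe, ge)

gold = (2, 5, "Symptom")  # e.g. a three-token symptom span
assert overlap_match((3, 6, "Symptom"), gold)      # partial span, same label
assert not exact_match((3, 6, "Symptom"), gold)    # span differs
assert not overlap_match((3, 6, "Disease"), gold)  # label must agree
```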
| | Micro R% | Micro P% | Micro F1% | Macro R% | Macro P% | Macro F1% |
|---|---|---|---|---|---|---|
| **Overlap match** | | | | | | |
| Clinical events | 66.29 | 77.31 | 71.38 | 64.88 | 72.60 | 68.20 |
| **Exact match** | | | | | | |
| Clinical events | 60.97 | 65.64 | 63.22 | 59.84 | 61.30 | 60.05 |
| Attributes | 66.04 | 66.04 | 66.04 | 51.60 | 42.64 | 44.85 |
| Relations | 75.88 | 72.66 | 74.23 | 74.74 | 67.85 | 70.64 |

Table 5: Performance of the best clinical event, attribute, and relation extraction models on the test set. Attributes and relations are only reported with an exact match as the models do not consider the span of the clinical event from which the attribute or relation is classified. R: Recall. P: Precision.
Table 5 shows the performance of the best models on the test set. Clinical events were extracted with exact micro F1 63.22% and macro F1 60.05%, attributes with micro F1 66.04% and macro F1 44.85%, and relations with micro F1 74.23% and macro F1 70.64%. The negative class was excluded when calculating the recall, precision, and F1 scores.

Figure 5 shows the confusion matrices of performance on clinical events, attributes, and relations. The confusion matrices include the clinical events and relations that were not extracted and falsely extracted by the model ('O').

The model for clinical event extraction performed best on anatomies (69%) and worst on results (53%). 1,568 spans were falsely extracted as a clinical event, with symptoms being the most frequent (21%). The model for attribute extraction performed best on negations (84%) and worst on non-patient (23%). The model for relation extraction performed best on "has result" (93%) and worst on "is treated with" (62%). 432 false relations were extracted, of which "has location" was the most frequent misclassification (45%).
§ 4 DISCUSSION AND LIMITATIONS
This paper presented a methodology for developing a dataset for Danish clinical NER. It presented an annotation scheme for the annotation of all clinical events, their attributes, and relations that are relevant for the medical history. The dataset included text paragraphs from Danish EHRs spanning multiple departments and note types.

Figure 5: Confusion matrices of performance on (A) clinical events, (B) attributes, and (C) relations. 'O' counts the clinical events and relations that were not extracted and falsely extracted by the model.
We trained and adapted PURE NER deep learning models to extract clinical events (overlap match macro F1 68.20%; exact match macro F1 60.05%), attributes of clinical events (macro F1 44.85%), and relations between clinical events (macro F1 70.64%). The results are promising for Danish clinical NER but need improvement. A discussion of possible improvements to the methodology, limitations, and future work is provided below.

The clinical event extraction model had similar performance on all classes, with accuracies between 53% (results) and 69% (anatomies). There was little contamination between classes, as most errors were caused by failure to extract or false extraction of a clinical event. There was some contamination between symptoms and diseases, with 12% of diseases being classified as symptoms and 5% of symptoms being classified as diseases. This supports claims by annotators that diseases and symptoms are in some cases difficult to differentiate and that extra attention must be given to differentiating these in the annotation guidelines.

The attribute extraction model had large differences in performance, with accuracies between 23% (non-patient) and 84% (negation). There were more misclassifications of the non-patient attribute as doubt (40%) than correct classifications. The future and doubt attributes had significant contamination between them, with 25% and 11% misclassifications as the other class, respectively. The many misclassifications between non-patient and doubt attributes, and especially between future and doubt attributes, could indicate that the model would improve if the non-patient, doubt, and future attributes were merged into a single class of uncertain attributes. This would most likely not significantly harm the usefulness of the model to MDs.

The fact that more prior attributes were misclassified as current (41%) than correctly classified (36%) likewise indicates that these two attributes could be merged into a single class of clinical events that occurred. This would, however, decrease the usefulness of the model, as it is important for MDs reviewing the medical history to know if a clinical event is prior or current.

The relation model extracted 93% of the "has result" relations, and 62% and 69% of the "is treated with" and "has location" relations, respectively. The differences are likely caused by the fact that the "has result" relation only connects diagnostics to results, while the two other relations have three different one-way relationships.

In this paper, we only explored one type of NER model and tested a limited set of architectures and hyperparameters. Future work could include testing other architectures and enriching the model input with more information, e.g. the output of a text parser, which could help differentiate attributes dealing with the time aspect. The six annotators had an average micro F1 (overlap match) of 85.62%, 81.71%, and 77.79% for clinical events, attributes, and relations, respectively. Merging certain attributes and placing more emphasis on the differences between symptoms and diseases could increase these scores.

The Danish clinical NER dataset is not made publicly available as it contains sensitive information. We advise interested researchers to contact us regarding sharing possibilities.

§ 5 CONCLUSIONS

This paper presented a methodology and annotation scheme for developing the first Danish clinical NER dataset. The corpus consists of 11,607 paragraphs annotated for six entity types, six attributes, and three relations. The corpus was used to fine-tune language models, which showed promising results for classifying the entities, attributes, and relations of the dataset.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,923 @@
|
# Slaapte or Sliep? Extending Neural-Network Simulations of English Past Tense Learning to Dutch and German
## Abstract
This work studies the plausibility of sequence-to-sequence neural networks as models of morphological acquisition by humans. We replicate the findings of Kirov and Cotterell (2018) on the well-known challenge of the English past tense and examine their generalizability to two related but morphologically richer languages, namely Dutch and German. Using a new dataset of English/Dutch/German (ir)regular verb forms, we show that the major findings of Kirov and Cotterell (2018) hold for all three languages, including the observation of over-regularization errors and micro U-shape learning trajectories. At the same time, we observe troublesome cases of non-human-like errors similar to those reported by recent follow-up studies with different languages or neural architectures. Finally, we study the possibility of switching to orthographic input in the absence of pronunciation information and show this can have a non-negligible impact on the simulation results, with possibly misleading findings.
## 1 Introduction
The plausibility of neural network-based or connectionist models in simulating psycholinguistic behaviours has been attracting considerable attention since Rumelhart and McClelland (1986) first modeled past-tense acquisition with an early example of a sequence-to-sequence network. Their experiment received harsh criticism (e.g., Pinker and Prince, 1988) but also inspired cognitive scientists with alternatives (e.g., Kirov and Cotterell, 2018; Plunkett and Juola, 1999; Taatgen and Anderson, 2002). Much more recently, Kirov and Cotterell (2018) replicated Rumelhart and McClelland (1986)'s simulations using a modern encoder-decoder neural architecture developed for the task of morphological paradigm completion. Their improved results resolved much of the original criticisms by Pinker and Prince (1988).
The main purpose of this paper is to study the generalizability of Kirov and Cotterell (2018)'s findings beyond the case of English. Specifically, we consider two languages that are genetically related to English but morphologically richer, namely Dutch and German. In these languages too, past tense inflection is divided into regular and irregular verbs, but with different proportions and different inflectional patterns than in English. Moreover, German and Dutch are characterized by a much more transparent orthography than English (Van den Bosch et al., 1994; Marjou, 2021), which allows us to study the usability of grapheme-based input for simulating past tense acquisition patterns when pronunciation information may not be available. Concretely, we aim to answer the following research questions:

1. Can the model applied by Kirov and Cotterell (2018) to English also simulate the past tense acquisition process in languages with more complex morphological inflection, such as Dutch and German?

2. Given the more predictable grapheme-to-phoneme correspondence, i.e., orthographic transparency (Marjou, 2021), in these two languages, will the model perform similarly if the written forms of verbs are used for training instead of the phonetic ones?

To answer these two questions, we build and release a new past-tense inflection dataset of English, Dutch, and German, covering both grapheme and phoneme features (Section 3). ${}^{1}$ We then replicate the single-task learning experiments of Kirov and Cotterell (2018) (Section 4) and extend them to our multilingual dataset, using both phoneme- and grapheme-based input for comparison (Section 5).

---

${}^{1}$ All code and data are available at https:// anonynmous

---
Our findings reconfirm the potential and limitations of using neural networks for the simulation of human language learning patterns. Our model shows human-like behavior in learning the past tenses of verbs, such as the micro U-shape coined by Plunkett et al. (1991) and over-regularization errors, in all the examined languages; however, non-human-like errors are also reported. We also find that learning irregular past tense forms is considerably easier in Dutch and German than in English. Finally, we observe that higher orthographic transparency indeed leads to more consistent learning results when a model is trained with grapheme vs. phoneme input.
## 2 Background
Past tense debate The acquisition of the verbal past tense in English, particularly the over-regularization of irregular verbs in the process of learning (Marcus et al., 1992), has been serving as a testing ground for different hypotheses in language modelling for decades. A much debated question is whether the past tense of (ir)regular verbs is learnt by rules and memories (e.g., Plaut and Gonnerman, 2000; Seidenberg and Gonnerman, 2000; Marcus et al., 1995; Albright and Hayes, 2003; Pinker and Ullman, 2002), by analogy (e.g., Ramscar, 2002; Albright and Hayes, 2003) or by a dual mechanism (Pinker and Prince, 1988; Taatgen and Anderson, 2002).
Marcus et al. (1995) posited the necessity of mental rules in learning German irregular verbs. By contrast, Ernestus and Baayen's (2004) and Hahn and Nakisa's (2000) studies on Dutch and German respectively provided evidence in favour of connectionist and analogical approaches: they showed that humans tend to choose wrong past tense suffixes for regular verbs whose phonological structure is similar to that of irregular ones.
Recent connectionist revival The recent development of deep learning methods in computational linguistics has led to a renewed interest in connectionist approaches to modelling language acquisition and processing by humans (e.g., Blything et al., 2018; Kádár et al., 2017; Pater, 2019; Corkery et al., 2019; McCurdy et al., 2020). Last year, modelling morphological acquisition trajectories was adopted as one of the shared tasks of SIGMORPHON-UniMorph (Kodner and Khalifa, 2022). The three submitted neural systems (Pimentel et al., 2021; Kakolu Ramarao et al., 2022; Elsner and Court, 2022) exhibited over-regularization and developmental regression, but non-human-like behaviours were also observed.
Some recent studies have revealed a poor alignment between the way humans and neural encoder-decoder models generalize to new words (wug test) in the case of English verb past tense (Corkery et al., 2019) and German plural nouns (McCurdy et al., 2020). Dankers et al. (2021) observed cognitively plausible representations in a recurrent neural network (RNN) trained to inflect German plural nouns but also found evidence of problematic 'shortcut' learning. Wiemerslage et al. (2022) observed that Transformers resemble humans in learning the morphological inflection of English and German in wug tests, but they also pointed out the divergence of the model in German production. However, computational simulations have succeeded in replicating the U-shaped learning curve during the acquisition of past tense (Kirov and Cotterell, 2018; Plunkett and Marchman, 2020). Additionally, further probing experiments have suggested that neural models do learn linguistic representations (Goodwin et al., 2020; Hupkes et al., 2018; Ravichander et al., 2020). Our research continues exploring the cognitive plausibility of neural networks in modeling language inflection learning.
|
| 130 |
+
|
| 131 |
+
**Recurrent encoder-decoder inflection model** In this work, we adopt the model of Kirov and Cotterell (2018), henceforth referred to as K&C. This model is based on the encoder-decoder architecture proposed by Bahdanau et al. (2014), with input representation and hyper-parameters taken from Kann and Schütze (2016). The architecture consists of a bidirectional LSTM (BiLSTM) encoder augmented with an attention mechanism and a unidirectional LSTM decoder. The encoder maps each phonetic (or orthographic) symbol of the input string to a unique embedding and then processes that embedding into a context-sensitive representation of the symbol. The decoder reads the context vector from the final cell of the encoder and generates the output phoneme/grapheme sequence; the model is trained with two hidden layers. For more details on the model, see Bahdanau et al. (2014), Kann and Schütze (2016), and Kirov and Cotterell (2018).
## 3 Datasets
To replicate the results published by K&C, we employ their dataset based on CELEX (Baayen et al., 1993).${}^{2}$ To extend the experiments to Dutch and German and compare the results to English, we build a new dataset containing past tense forms in all three languages.
### 3.1 K&C English Dataset
K&C's CELEX-based dataset contains 4,039 English verb types: 3,871 regular verbs and 168 irregular verbs. Each verb is associated with an infinitive form and a past tense form, both in the International Phonetic Alphabet (IPA). Moreover, each verb is marked as regular or irregular (Albright and Hayes, 2003).
Note that there are label errors in their dataset. For example, dive-dived, dream-dreamed, light-lighted are marked as irregular. This is possibly because those verbs have two past tense forms and the other form does not follow the regular inflection (dive-dove, dream-dreamt, light-lit). However, as the past tense of those verbs in the original dataset aligns with the regular inflection rule of English, we take those verbs as regular ones and manually correct their labels.
### 3.2 Multilingual Unimorph-based Dataset
We use the morphological annotation dataset Unimorph (McCarthy et al., 2020) as a source of English, Dutch, and German word forms to enable a fair comparison in our multilingual experiments. In this lexicon, each entry consists of the infinitive of the verb, the conjugated form, and a tag containing the part-of-speech and inflectional information. An important adjustment has to be made here because English has only two forms for the present tense (one for the third person singular, one for I/you/we/they) and only one for the past. By contrast, Dutch and German distinguish more persons in both present and past tense. To address this, we include for each lemma the first/second/third singular present forms and the plural form together with their respective past forms, each as a separate entry (see examples in Figure 1).
<table><tr><td>present(g)</td><td>past(g)</td><td>present(p)</td><td>past(p)</td><td>reg</td></tr><tr><td>accounts</td><td>accounted</td><td>@k6nts</td><td>@k6ntId</td><td>reg</td></tr><tr><td>account</td><td>accounted</td><td>@k6nt</td><td>@k6ntId</td><td>reg</td></tr><tr><td>feels</td><td>felt</td><td>filz</td><td>fElt</td><td>irreg</td></tr><tr><td>feel</td><td>felt</td><td>fil</td><td>fElt</td><td>irreg</td></tr></table>
(a) English
<table><tr><td>slaap</td><td>sliep</td><td>slap</td><td>slip</td><td>irreg</td></tr><tr><td>slaapt</td><td>sliep</td><td>slapt</td><td>slip</td><td>irreg</td></tr><tr><td>slapen</td><td>sliepen</td><td>slap@</td><td>slip@</td><td>irreg</td></tr><tr><td>behoef</td><td>behoefde</td><td>b@huf</td><td>b@huvd@</td><td>reg</td></tr><tr><td>behoeft</td><td>behoefde</td><td>b@huft</td><td>b@huvd@</td><td>reg</td></tr><tr><td>behoeven</td><td>behoefden</td><td>b@huv@</td><td>b@huvd@</td><td>reg</td></tr></table>
(b) Dutch

<table><tr><td>berechne</td><td>berechnete</td><td>b@rExn@</td><td>b@rExn@t@</td><td>reg</td></tr><tr><td>berechnest</td><td>berechnetest</td><td>b@rExn@st</td><td>b@rExn@t@st</td><td>reg</td></tr><tr><td>berechnet</td><td>berechnete</td><td>b@rExn@t</td><td>b@rExn@t@</td><td>reg</td></tr><tr><td>berechnen</td><td>berechneten</td><td>b@rExn@n</td><td>b@rExn@t@n</td><td>reg</td></tr><tr><td>fliehe</td><td>floh</td><td>flia</td><td>flo</td><td>irreg</td></tr><tr><td>fliehst</td><td>flohst</td><td>flist</td><td>flost</td><td>irreg</td></tr><tr><td>flieht</td><td>floh</td><td>flit</td><td>flo</td><td>irreg</td></tr><tr><td>fliehen</td><td>flohen</td><td>flian</td><td>flo@n</td><td>irreg</td></tr></table>
(c) German
Figure 1: Excerpt of the newly introduced dataset of English, Dutch and German past tense forms. Dutch verbs: slapen (to sleep); behoeven (to need). German verbs: berechnen (to calculate); fliehen (to flee).
Specifically, we start by extracting from Unimorph a list of verb lemmas and their corresponding present and past tense forms. A different extraction script is used for each language because of the different number of forms and slightly different POS tags:
- English only has two present tense forms: one for the third person singular and one for the rest. There is usually only one past tense form.

- Most verbs in Dutch have three present tense forms and two past tense forms.

- Most verbs in German have five present tense forms and four past tense forms.
Next, we tag each form as regular or irregular based on a simple rule-based strategy:
- English: if the past tense ends with '-ed', it is considered a regular verb.

- Dutch: if the singular past tense ends with '-de' or '-te', it is considered regular.

- German: if the singular past tense of the first or third person ends with '-te', it is considered regular.
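As a sketch, these heuristics can be implemented in a few lines. The function name `tag_regularity` and its interface are our illustration, not the authors' released extraction scripts, and exception lists (such as the '-ed' irregulars discussed in Section 3.3) would need to be layered on top.

```python
def tag_regularity(language, past_forms):
    """Tag a verb as 'reg' or 'irreg' from its (orthographic) past tense forms.

    Illustrative sketch of the rules above; `past_forms` holds the singular
    past tense form(s) relevant for the given language code.
    """
    if language == "en":
        # English: regular past tenses end in '-ed'.
        regular = past_forms[0].endswith("ed")
    elif language == "nl":
        # Dutch: regular singular past tenses end in '-de' or '-te'.
        regular = past_forms[0].endswith(("de", "te"))
    elif language == "de":
        # German: regular 1st/3rd person singular past tenses end in '-te'.
        regular = all(f.endswith("te") for f in past_forms)
    else:
        raise ValueError(f"unsupported language: {language}")
    return "reg" if regular else "irreg"
```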
---
${}^{2}$ Dataset, code and other experimental details are taken from https://github.com/ckirov/RevisitPinkerAndPrince
---
<table><tr><td rowspan="3">Language</td><td rowspan="3">Type</td><td colspan="6">Number of verbs</td><td rowspan="3">Count</td><td rowspan="3">Total verbs (%)</td></tr><tr><td colspan="2">train</td><td colspan="2">dev</td><td colspan="2">test</td></tr><tr><td>Count</td><td>(%)</td><td>Count</td><td>(%)</td><td>Count</td><td>(%)</td></tr><tr><td rowspan="3">English</td><td>all</td><td>4,879</td><td>79.9</td><td>611</td><td>10.0</td><td>614</td><td>10.1</td><td>6,104</td><td>100.0</td></tr><tr><td>regular</td><td>4,601</td><td>75.4</td><td>529</td><td>8.7</td><td>520</td><td>8.5</td><td>5,650</td><td>92.6</td></tr><tr><td>irregular</td><td>278</td><td>4.6</td><td>82</td><td>1.3</td><td>94</td><td>1.5</td><td>454</td><td>7.4</td></tr><tr><td rowspan="3">Dutch</td><td>all</td><td>4,896</td><td>80.1</td><td>612</td><td>10.0</td><td>607</td><td>9.9</td><td>6,115</td><td>100.0</td></tr><tr><td>regular</td><td>4,383</td><td>71.7</td><td>550</td><td>9.0</td><td>542</td><td>8.9</td><td>5,475</td><td>89.6</td></tr><tr><td>irregular</td><td>513</td><td>8.4</td><td>62</td><td>1.0</td><td>65</td><td>1.0</td><td>640</td><td>10.4</td></tr><tr><td rowspan="3">German</td><td>all</td><td>4,865</td><td>79.7</td><td>616</td><td>10.1</td><td>620</td><td>10.2</td><td>6,101</td><td>100.0</td></tr><tr><td>regular</td><td>4,299</td><td>70.5</td><td>535</td><td>8.8</td><td>578</td><td>9.5</td><td>5,412</td><td>88.8</td></tr><tr><td>irregular</td><td>566</td><td>9.2</td><td>81</td><td>1.3</td><td>42</td><td>0.7</td><td>689</td><td>11.2</td></tr></table>
Table 1: Dataset distributed into train, dev and test sets in each of the three languages. The number of regular and irregular verbs is also reported. The percentage is calculated over the total number of verbs per language.
Finally, the IPA transcriptions of all word forms are retrieved from CELEX for all languages and added to the final dataset. As shown in Figure 1, the resulting dataset is in the same format as K&C's CELEX-based dataset.
**Data selection** The generated Dutch data contains only 6,106 verb forms, versus 11,489 in English and 6,975 in German. Therefore, to enable a fair comparison among languages, we need to downsample the larger datasets. However, randomly choosing 6K verb forms from the English and German lists may lead to a poor selection given the long tail of infrequent words. As a solution, we use the word form frequencies provided in the CELEX data: we select all words with a frequency of more than 1 per million and complement them with a random selection of less frequent words to obtain approximately 6,106 verb forms.
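The frequency-aware selection can be sketched as follows; `downsample` and its parameters are illustrative names (the actual scripts read their frequencies from CELEX), not the released code.

```python
import random


def downsample(forms, freqs, target, threshold=1.0, seed=0):
    """Select roughly `target` word forms: keep every form whose frequency
    (per million) exceeds `threshold`, then top up with a random sample of
    the less frequent forms. A sketch with illustrative names."""
    frequent = [f for f in forms if freqs[f] > threshold]
    rare = [f for f in forms if freqs[f] <= threshold]
    rng = random.Random(seed)
    need = max(0, target - len(frequent))
    return frequent + rng.sample(rare, min(need, len(rare)))
```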
After shuffling, the word forms are split into a train set (80%), a development (dev) set (10%) and a test set (10%). The data distribution over the three sets and over regular/irregular verbs for each language is reported in Table 1.
### 3.3 Notable Problems
A few problems occurred during data preparation. First, rule-based tagging of lemmas is not as trivial as it seems at first sight. For example, in English, not all past tenses ending with '-ed' are regular. Using the data of K&C, we added a few exceptions, all irregular verbs whose past tense ends with '-ed': bled, bred, led, misled, fled, and forms of fed (including breast-fed, force-fed and bottle-fed).
Also, in the original K&C experiment, the model should predict the past tense based on what it learned from other verbs, not from other forms of the same verb. In morphologically richer languages, a lemma has more word forms and data splitting becomes problematic. For instance, a model might have learned that work $\rightarrow$ worked and walks $\rightarrow$ walked, and might then predict works $\rightarrow$ worked. In such a case, it is not possible to know whether the model made the right prediction based on similarities to other lemmas (walks) or to other forms of the same verb (work). To stay as comparable as possible to the original setup of K&C, we put all forms of the same verb in the same data split (that is, either training, dev or test). As a result, if the model scores well, we know for sure that it cannot have made predictions based on other forms of the same verb.
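A lemma-grouped split of this kind can be sketched as below; the entry structure and the function name `split_by_lemma` are our own illustration, under the assumption that each entry records its lemma.

```python
import random


def split_by_lemma(entries, ratios=(0.8, 0.1, 0.1), seed=123):
    """Split inflection entries into train/dev/test so that all forms of a
    lemma land in the same split. `entries` is a list of dicts with at least
    a 'lemma' key (an illustrative sketch, not the released code)."""
    lemmas = sorted({e["lemma"] for e in entries})
    rng = random.Random(seed)
    rng.shuffle(lemmas)
    n = len(lemmas)
    cut1 = int(ratios[0] * n)
    cut2 = cut1 + int(ratios[1] * n)
    # Assign each lemma (not each form) to a split.
    bucket = {l: "train" for l in lemmas[:cut1]}
    bucket.update({l: "dev" for l in lemmas[cut1:cut2]})
    bucket.update({l: "test" for l in lemmas[cut2:]})
    splits = {"train": [], "dev": [], "test": []}
    for e in entries:
        splits[bucket[e["lemma"]]].append(e)
    return splits
```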
Another issue is that one present tense form normally corresponds to one past tense form. However, German poses two notable exceptions to this:
- The second person singular verb form ends with '-st' and the third person singular ends with '-t'. These forms coincide if a verb stem already ends with an 's', but they remain distinct in the past tense. For example, bremst is the present tense form of the verb bremsen (to brake) for the pronouns du (you, singular), er (he) and even ihr (you, plural).

- Verbs ending in '-t' can be the third person singular or the second person plural informal. For example, wundert is the present tense form of the verb wundern (to wonder) for the pronouns ihr (you, plural) and er (he).
In the former case, the model should be able to output multiple solutions, since only context can make clear whether the form is second or third person. However, this complicates the evaluation. As a solution, we exclude the third person form when it collides with the second person. As for the latter issue, we remove all second person plural informal forms, since these are far less frequent than the third person singular forms.
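A minimal sketch of this filtering, assuming each paradigm is a dict mapping person/number slots to present forms (the slot names and the function name are our own, not the paper's code):

```python
def filter_german_collisions(paradigm):
    """Drop ambiguous German present forms as described above (a sketch).

    `paradigm` maps person/number slots to present forms, e.g.
    {'2sg': 'bremst', '3sg': 'bremst', '2pl': 'bremst', ...}."""
    filtered = dict(paradigm)
    # Remove the 2nd person plural informal outright: it collides with
    # the far more frequent 3sg '-t' form.
    filtered.pop("2pl", None)
    # If 3sg coincides with 2sg (stems ending in 's'), keep only the 2sg entry.
    if "3sg" in filtered and filtered.get("3sg") == filtered.get("2sg"):
        filtered.pop("3sg")
    return filtered
```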
## 4 Replication of K&C
Before moving to the main multilingual experiments, we replicate the original K&C experiments (single-task only).
### 4.1 Experimental Setup
For the replication, we employ K&C's CELEX-based dataset and keep the model architecture and hyper-parameters unchanged, using OpenNMT (Klein et al., 2017).${}^{3}$ See Appendix A for more details. Following K&C, the model is trained on the IPA transcriptions.
We use word form-level accuracy to evaluate model performance. An important remark concerns data splitting: K&C did not release their specific data split, which makes it impossible to replicate their exact results. We therefore create our own splits following K&C's proportions (80/10/10% for training/dev/test). To obtain more reliable results, we train the model three times using different random seeds for initialization and report the averaged accuracies.
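Word form-level accuracy and the averaging over seeds amount to exact-match counting; a minimal sketch (the function names are ours):

```python
def form_accuracy(predictions, gold):
    """Exact-match accuracy at the level of whole word forms, in percent."""
    assert len(predictions) == len(gold)
    hits = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * hits / len(gold)


def mean_accuracy(runs, gold):
    """Average form-level accuracy over several runs (e.g., random seeds)."""
    return sum(form_accuracy(r, gold) for r in runs) / len(runs)
```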
To study the micro U-shape learning curve of irregular verbs, we save the model every 10 epochs, use those partially trained models to predict the test set, and compare their predictions.
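Given the per-checkpoint predictions for a single verb, an oscillating (micro U-shape) trajectory can be detected mechanically. The helper below is our own illustration of that check, not part of the experimental code:

```python
def oscillates(trajectory, gold):
    """Return True if the prediction for a verb was correct at some
    checkpoint, regressed to a wrong form at a later one, and was correct
    again at the final checkpoint (a micro U-shape)."""
    correct = [p == gold for p in trajectory]
    if not correct or not correct[-1]:
        return False  # empty trajectory, or never recovered by the end
    first_hit = correct.index(True)  # safe: the final entry is True
    # Any wrong prediction after the first correct one is a regression.
    return any(not c for c in correct[first_hit:])
```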
### 4.2 Results
As shown in Table 2, the results on the training set are almost the same as reported in the original paper, which means our replication is largely successful.${}^{4}$ We note that the accuracy for irregular verbs in the dev and test sets differs considerably from K&C's (dev: 21.1% vs. 53.3%; test: 35.3% vs. 28.6%). Since K&C did not release their specific data split, replicating their exact results on the small portion of irregular verbs is not possible. Given that our results are averaged over three random seeds and reported on all three splits, we consider them more reliable, which suggests the model may perform worse at learning the past tense of irregular verbs than K&C reported.
<table><tr><td rowspan="2"/><td colspan="3">all</td><td colspan="3">regular</td><td colspan="3">irregular</td></tr><tr><td>train</td><td>dev</td><td>test</td><td>train</td><td>dev</td><td>test</td><td>train</td><td>dev</td><td>test</td></tr><tr><td>K&C</td><td>99.8</td><td>97.4</td><td>95.1</td><td>99.9</td><td>99.2</td><td>98.9</td><td>97.6</td><td>53.3</td><td>28.6</td></tr><tr><td>Ours</td><td>99.9</td><td>95.3</td><td>96.5</td><td>99.9</td><td>98.4</td><td>99.2</td><td>98.4</td><td>21.1</td><td>35.3</td></tr></table>
Table 2: Mean accuracy of our replication of K&C with 3 random seeds.
### 4.3 Discussion
We assume the gap between our results and K&C's has two causes: (i) the number of irregular verbs is much lower than that of regular ones, so the accuracy changes dramatically even if only a few more or fewer verbs are predicted correctly than in the original experiments; (ii) we corrected the label errors mentioned above, which further reduced the number of irregular verbs. This small difference can have a large impact on the accuracy calculation given that the dev and test sets only contain about 20 irregular verbs each. To test this hypothesis, we conduct 9-fold cross-validation${}^{5}$ and find that the accuracy for irregular verbs varies widely across dev splits, ranging between 9% and 42%.
## 5 Multilingual Experiments
This section presents the results of our main experiments, which compare Dutch and German past tense learning patterns to the English ones. It also presents the results of grapheme vs. phoneme sequence learning in all three languages. Because Dutch and German pronunciation is more predictable than English pronunciation, we expect the difference between grapheme and phoneme learning to be smaller in these languages.
---
${}^{3}$ However, as epoch-based training has been deprecated in the latest version of OpenNMT, we converted the number of epochs to the corresponding train_steps.
${}^{4}$ Our results are also very close to those of Corkery et al. (2019), who did a similar replication and reported the averaged accuracy over ten runs initialized with different random seeds, but only on the training set.
${}^{5}$ We keep the test set unchanged and cross-validate across the train and dev sets. To ensure the dev set has a comparable number of verbs to the original set, we adopt 9-fold instead of 10-fold cross-validation.
---
<table><tr><td/><td colspan="3">all</td><td colspan="3">regular</td><td colspan="3">irregular</td></tr><tr><td/><td>train</td><td>dev</td><td>test</td><td>train</td><td>dev</td><td>test</td><td>train</td><td>dev</td><td>test</td></tr><tr><td>EN</td><td>99.5</td><td>93.1</td><td>92.1</td><td>99.8</td><td>96.1</td><td>95.0</td><td>98.1</td><td>27.8</td><td>40.5</td></tr><tr><td>NL</td><td>98.9</td><td>88.4</td><td>88.4</td><td>99.2</td><td>91.4</td><td>92.2</td><td>96.5</td><td>62.4</td><td>57.9</td></tr><tr><td>DE</td><td>98.9</td><td>85.0</td><td>92.5</td><td>99.4</td><td>92.0</td><td>95.1</td><td>96.7</td><td>38.7</td><td>57.9</td></tr></table>

(a) Phoneme input
<table><tr><td rowspan="2"/><td colspan="3">all</td><td colspan="3">regular</td><td colspan="3">irregular</td></tr><tr><td>train</td><td>dev</td><td>test</td><td>train</td><td>dev</td><td>test</td><td>train</td><td>dev</td><td>test</td></tr><tr><td>EN</td><td>99.1</td><td>93.6</td><td>93.8</td><td>99.8</td><td>98.2</td><td>98.1</td><td>89.0</td><td>11.1</td><td>28.1</td></tr><tr><td>NL</td><td>99.4</td><td>88.0</td><td>89.6</td><td>99.8</td><td>91.2</td><td>93.0</td><td>97.9</td><td>58.6</td><td>61.0</td></tr><tr><td>DE</td><td>98.4</td><td>86.4</td><td>93.6</td><td>99.1</td><td>93.5</td><td>95.7</td><td>93.9</td><td>39.5</td><td>65.9</td></tr></table>
(b) Grapheme input
Table 3: Past tense inflection accuracy in English, Dutch, and German; all averaged over 3 random seeds.
<table><tr><td rowspan="2">epoch</td><td colspan="2">English</td><td colspan="2">Dutch</td><td colspan="2">German</td></tr><tr><td colspan="2">hits</td><td colspan="2">bestijgt (mounts)</td><td colspan="2">gilt (applies)</td></tr><tr><td>10</td><td>hItId</td><td>hitted</td><td>b@stKGd@</td><td>besteeg</td><td>gIlt@</td><td>galte</td></tr><tr><td>20</td><td>hItst</td><td>hit</td><td>b@stex</td><td>besteeg</td><td>gIlt@</td><td>galt</td></tr><tr><td>30</td><td>hItId</td><td>hitted</td><td>b@stKGd@</td><td>besteeg</td><td>g<</td><td>galt</td></tr><tr><td>40</td><td>hItId</td><td>hitted</td><td>b@stKGd@</td><td>besteeg</td><td>g<</td><td>galt</td></tr><tr><td>50</td><td>hIt</td><td>hitted</td><td>b@stKGd@</td><td>besteeg</td><td>g<</td><td>galt</td></tr><tr><td>60</td><td>hItst</td><td>hit</td><td>b@stex</td><td>besteeg</td><td>gIIt@</td><td>gilte</td></tr><tr><td>70</td><td>hIt</td><td>hit</td><td>b@stex</td><td>bestijgde</td><td>g<</td><td>galt</td></tr><tr><td>80</td><td>hItId</td><td>hitted</td><td>b@stex</td><td>besteeg</td><td>g<</td><td>galt</td></tr><tr><td>90</td><td>hItId</td><td>hitted</td><td>b@stex</td><td>besteeg</td><td>g<</td><td>galt</td></tr><tr><td>100</td><td>hIt</td><td>hit</td><td>b@stex</td><td>besteeg</td><td>g<</td><td>galt</td></tr></table>
Table 4: The oscillating development (micro U-shape) of single verbs in the three languages: with phoneme or grapheme inputs, the predicted past phonetic (left) or orthographic (right) forms change as training proceeds, but the final predictions are correct at the last epoch.
For comparability, all experiments in this section use the newly introduced Unimorph-based dataset, which includes a similar number of training forms in all languages (cf. Table 1). The model architecture and the hyperparameter settings are the same as in the previous experiments. We again run each experiment three times with different random seeds and report the averaged results.
**Result overview** For the forms seen in training, the model is able to learn both regular and irregular past tense inflection with more than 95% accuracy (Table 3a), and with similar learning curves (Figure 2), which confirms and strengthens the main findings of K&C on two other languages.
Comparing Table 3a to Table 3b, we find that the overall trends are maintained when the model is trained on graphemes instead of phonemes (the original setup of K&C). However, there is a notable exception: grapheme learning results in a much lower accuracy on English irregular verbs.
In the following sections, we discuss these results in more detail.
### 5.1 Past Tense Learning Results in English, Dutch, and German
**Accuracy** Looking closer at the results across languages (Table 3a), we notice that inflecting unseen Dutch regular verbs is slightly harder than in German and English. This might be explained by the fact that in Dutch all voiced consonants are devoiced at the end of a word. To predict whether the past tense suffix is '-de' (after voiced consonants) or '-te' (after unvoiced consonants), we still need the final consonant of the stem, which can be found in the lemma and usually in the spelling of the word form; unfortunately, this information is absent from the pronunciation. For example, given the pair lAnt-lAndd@, one cannot know whether the past tense should be lAnd@ or lAnt@ before seeing the orthographic form land. We find that such errors account for about 50% (18/38) of all Dutch regular verb errors. This difference in voiced/unvoiced regular past tense endings only occurs in Dutch.
As for irregular verbs, we find a large difference across languages in the ability to generalize to new forms. This is especially true in English: while the model has almost perfectly learned to inflect seen verbs, it has a hard time predicting the forms of new irregular verbs (dev: 27.8%, test: 40.5%). This effect is smaller in Dutch and German, suggesting the irregular inflection patterns in these languages are more predictable. Surprisingly, the model made more mistakes when predicting the inflections of the irregular verbs in the German dev set than in the test set (dev: 38.7%, test: 57.9%). By inspecting the mistakes, we found that the model incorrectly treated many irregular verbs as regular ones because of their resemblance (high character overlap) to regular verbs. For instance, reitest-*reitetest/rittest (ride) is influenced by the regular conjugation of bereitest-bereitetest (prepare). We found that 23/81 irregular verbs in the dev set are very similar to regular verbs in the training set. Of these, 8 irregular verbs are identical to regular ones except for a prefix (e.g., reitet (rides) vs. bereitet (prepares), and reitest (ride) vs. verbreitest (spread)), which could be highly confusing for a model that relies only on form, regardless of meaning. By contrast, no such overlap is found between the irregular verbs in the test set and the regular ones in the training set. This distributional discrepancy might explain the lower accuracy on the dev set. It echoes our finding, discussed in the next section, that irregular verbs might be misled by regular verbs if they share representational similarity.



Figure 2: Learning curves of the model on the German, English, and Dutch training set (with random seed 123).
**Errors and learning trajectories** Going beyond overall accuracy, we inspect the learning trajectories of individual verbs in our dataset. We find that human-like overregularization patterns similar to those observed by K&C in English also occur in Dutch and German. For example, in Dutch, after 40 epochs of training, the model changes verscheent to verscheen as the past tense of verschijnt (appears). However, after 50 epochs, the model again generates the wrong form verscheent; after 70 epochs, the correct result is obtained once more. Similar patterns are observed for sink in English and streitet (argues) in German. All wrong predictions for irregular verbs are over-regularizations; in other words, no patterns like ated in English or lookte in Dutch are found, which is consistent with humans' learning behaviour (Pinker and Prince, 1988). More examples from English, Dutch and German are listed in Table 4.
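A wrong prediction can be classified as an over-regularization by checking whether it equals the present form plus a regular suffix. The helper below is our own simplified sketch (the suffix sets and stem handling are illustrative, not the paper's analysis code):

```python
def is_overregularization(language, present_form, prediction):
    """Rough check of whether a wrong past tense looks like an
    over-regularization: the regular suffix attached to the present form.
    Suffix sets are simplified illustrations per language code."""
    suffixes = {"en": ("ed", "d"), "nl": ("de", "te"), "de": ("te", "ete")}
    return any(prediction == present_form + s for s in suffixes.get(language, ()))
```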
Additionally, we find cases where the model generates an irregular form for a regular verb because of its resemblance to other (irregular) verbs. In Dutch, for example, the regular verb versier-versierde (decorate-decorated) gets incorrectly inflected as *versoor by resemblance to verbs like verlies-verloor (lose-lost). Similar errors also occur in German. For instance, the wrong prediction verfehle-*verfahl/verfehlte (miss-missed) might be misled by the pair befehlen-befahlen (order-ordered), and schweben-*schwoben/schwebten (float-floated) is possibly due to its resemblance to schieben-schoben (push-pushed). Interestingly, this type of error aligns with Ernestus and Baayen's (2004) experiments with Dutch speakers: phonological similarity, rather than rule-based regularity, influences participants' judgments about the inflection of verbs.
That said, the model also displays error patterns that are not human-like, such as copying the present form or randomly removing phonemes (or letters) from it. Similar cases of non-plausible predictions were also observed at the SIGMORPHON shared task (Kodner and Khalifa, 2022), for instance forgive-*forgaved/forgave or seek-*sougk/sought. As also observed by Wiemerslage et al. (2022), this kind of model prediction contrasts with the behaviour of human speakers, who mostly resort to generating a regular past tense when a verb is unknown.
### 5.2 Phoneme vs. Grapheme Input
Undoubtedly, using phoneme input is more principled than grapheme input when simulating human acquisition patterns. However, pronunciation information is not always available, which makes it harder to extend such simulations beyond a small set of widely studied languages. Here, we investigate the usability of grapheme-based input for modelling past tense inflection. We expect German and Dutch to be a good use case for this, given their more transparent orthography compared to English (Marjou, 2021).
The results in Table 3 clearly show that switching to grapheme input for the English simulations is not principled: it results in a slight increase in regular inflection accuracy (from 99.8/96.1/95.0% to 99.8/98.2/98.1% train/dev/test) but a large decrease in irregular inflection accuracy (from 98.1/27.8/40.5% to 89.0/11.1/28.1%). The latter effect is particularly marked, suggesting that non-transparent orthography may not be a uniform property of a language but may correlate with its less regular word forms. We leave this investigation to future work.
Using grapheme input in Dutch and German seems much safer (differences are overall small, with only a slight increase in almost all cases). Our observations seem to reflect the figures of Marjou (2021), who gives a much higher transparency score to Dutch and German than to English.
In sum, using graphemes to simulate human patterns of morphological acquisition is possible but should be done with caution and only in some languages. A good practice could be to first verify that the orthographic transparency of a language is high (Marjou (2021) presents results for 17 languages). When that is not possible, grapheme-based results should at least be validated against a small-scale pronunciation dataset.
## 6 Conclusions
In this work, we study the plausibility of using sequence-to-sequence neural networks for simulating human patterns of past tense acquisition. More specifically, we replicate findings by Kirov and Cotterell (2018) and examine their generalizability beyond the specific case of English, using a new dataset of English/Dutch/German (ir)regular verb forms based on Unimorph (McCarthy et al., 2020).
We show that the main findings of K&C also largely hold for Dutch and German, including over-regularization errors and the oscillating (or micro U-shape) learning trajectory of individual verb forms across training epochs. At the same time, we also observe cases of non human-like errors, for instance when the model just keeps the present form unchanged or randomly removes phonemes from it. A notable difference among our studied languages concerns unseen English irregular verbs, which appeared to be much harder to inflect than the Dutch and German ones. We also observe that the orthographic transparency of a language influences and possibly confounds the model's learning performance: a more transparent orthography contributes to more reliable and consistent simulation results, but in general this aspect should be seriously considered when setting up new benchmarks of morphological acquisition.
Future work could include the construction of a nonce word benchmark in Dutch and German to enable a multi-lingual evaluation of this task (Corkery et al., 2019), as well as an in-depth investigation of the differing levels of irregular past inflection difficulty in our three languages.
Kirov and Cotterell (2018) provided very promising evidence for the use of modern neural networks to model human language acquisition patterns. Our work confirms the potential of this research direction, but also raises important issues and joins recent follow-up studies (Corkery et al., 2019; Dankers et al., 2021; Kodner and Khalifa, 2022; Wiemerslage et al., 2022) that have warned against over-optimistic conclusions.
## References
Adam Albright and Bruce Hayes. 2003. Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2):119-161.
R. Harald Baayen, Richard Piepenbrock, and H. Van Rijn. 1993. The CELEX lexical database (CD-ROM). Linguistic Data Consortium. Philadelphia, PA: University of Pennsylvania.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Ryan P. Blything, Ben Ambridge, and Elena V. M. Lieven. 2018. Children's acquisition of the English past-tense: Evidence for a single-route account from novel verb production data. Cognitive Science, 42:621-639.
A. Van den Bosch, Alain Content, W. Daelemans, and Béatrice De Gelder. 1994. Analysing orthographic depth of different languages using data-oriented algorithms. In Proceedings of the 2nd International Conference on Quantitative Linguistics (QUALICO 94), pages 26-31.
Maria Corkery, Yevgen Matusevych, and Sharon Goldwater. 2019. Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3868-3877, Florence, Italy. Association for Computational Linguistics.
Verna Dankers, Anna Langedijk, Kate McCurdy, Adina Williams, and Dieuwke Hupkes. 2021. Generalising to German plural noun classes, from the perspective of a recurrent neural network. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 94-108, Online. Association for Computational Linguistics.
Micha Elsner and Sara Court. 2022. OSU at SIGMORPHON 2022: Analogical inflection with rule features. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 220-225, Seattle, Washington. Association for Computational Linguistics.
Mirjam Ernestus and Harald Baayen. 2004. Analogical effects in regular past tense production in Dutch.
Emily Goodwin, Koustuv Sinha, and Timothy J. O'Donnell. 2020. Probing linguistic systematicity. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1958-1969, Online. Association for Computational Linguistics.
Ulrike Hahn and Ramin Charles Nakisa. 2000. German inflection: Single route or dual route? Cognitive Psychology, 41(4):313-360.
Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.
Ákos Kádár, Grzegorz Chrupała, and Afra Alishahi. 2017. Representation of linguistic form and function in recurrent neural networks. Computational Linguistics, 43(4):761-780.
Akhilesh Kakolu Ramarao, Yulia Zinova, Kevin Tang, and Ruben van de Vijver. 2022. HeiMorph at SIGMORPHON 2022 shared task on morphological acquisition trajectories. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 236-239, Seattle, Washington. Association for Computational Linguistics.
Katharina Kann and Hinrich Schütze. 2016. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 62-70.
Christo Kirov and Ryan Cotterell. 2018. Recurrent neural networks in linguistic theory: Revisiting Pinker and Prince (1988) and the past tense debate. Transactions of the Association for Computational Linguistics, 6:651-665.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.
Jordan Kodner and Salam Khalifa. 2022. SIGMORPHON-UniMorph 2022 shared task 0: Modeling inflection in language acquisition. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 157-175, Seattle, Washington. Association for Computational Linguistics.
Gary F. Marcus, Ursula Brinkmann, Harald Clahsen, Richard Wiese, and Steven Pinker. 1995. German inflection: The exception that proves the rule. Cognitive Psychology, 29(3):189-256.
Gary F. Marcus, Steven Pinker, Michael Ullman, Michelle Hollander, T. John Rosen, Fei Xu, and Harald Clahsen. 1992. Overregularization in language acquisition. Monographs of the Society for Research in Child Development, pages i-178.
Xavier Marjou. 2021. OTEANN: Estimating the transparency of orthographies with an artificial neural network. In Proceedings of the Third Workshop on Computational Typology and Multilingual NLP, pages 1-9, Online. Association for Computational Linguistics.
Arya D. McCarthy, Christo Kirov, Matteo Grella, Amrit Nidhi, Patrick Xia, Kyle Gorman, Ekaterina Vylomova, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, Timofey Arkhangelskiy, Nataly Krizhanovsky, Andrew Krizhanovsky, Elena Klyachko, Alexey Sorokin, John Mansfield, Valts Ernštreits, Yuval Pinter, Cassandra L. Jacobs, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2020. UniMorph 3.0: Universal Morphology. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3922-3931, Marseille, France. European Language Resources Association.
Kate McCurdy, Sharon Goldwater, and Adam Lopez. 2020. Inflecting when there's no majority: Limitations of encoder-decoder neural networks as cognitive models for German plurals. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1745-1756, Online. Association for Computational Linguistics.
Joe Pater. 2019. Generative linguistics and neural networks at 60: Foundation, friction, and fusion. Language, 95(1):e41-e74.
Tiago Pimentel, Maria Ryskina, Sabrina J. Mielke, Shijie Wu, Eleanor Chodroff, Brian Leonard, Garrett Nicolai, Yustinus Ghanggo Ate, Salam Khalifa, Nizar Habash, Charbel El-Khaissi, Omer Goldman, Michael Gasser, William Lane, Matt Coler, Arturo Oncevay, Jaime Rafael Montoya Samame, Gema Celeste Silva Villegas, Adam Ek, Jean-Philippe Bernardy, Andrey Shcherbakov, Aziyana Bayyr-ool, Karina Sheifer, Sofya Ganieva, Matvey Plugaryov, Elena Klyachko, Ali Salehi, Andrew Krizhanovsky, Natalia Krizhanovsky, Clara Vania, Sardana Ivanova, Aelita Salchak, Christopher Straughn, Zoey Liu, Jonathan North Washington, Duygu Ataman, Witold Kieraś, Marcin Woliński, Totok Suhardijanto, Niklas Stoehr, Zahroh Nuriah, Shyam Ratan, Francis M. Tyers, Edoardo M. Ponti, Grant Aiton, Richard J. Hatcher, Emily Prud'hommeaux, Ritesh Kumar, Mans Hulden, Botond Barta, Dorina Lakatos, Gábor Szolnok, Judit Ács, Mohit Raj, David Yarowsky, Ryan Cotterell, Ben Ambridge, and Ekaterina Vylomova. 2021. SIGMORPHON 2021 shared task on morphological reinflection: Generalization across languages. In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229-259, Online. Association for Computational Linguistics.
Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28(1-2):73-193.
Steven Pinker and Michael T. Ullman. 2002. The past and future of the past tense. Trends in Cognitive Sciences, 6(11):456-463.
David C Plaut and Laura M Gonnerman. 2000. Are non-semantic morphological effects incompatible with a distributed connectionist approach to lexical processing? Language and Cognitive Processes, 15(4-5):445-485.
Kim Plunkett and Patrick Juola. 1999. A connectionist model of English past tense and plural morphology. Cognitive Science, 23(4):463-490.
Kim Plunkett and Virginia Marchman. 2020. U-shaped learning and frequency effects in a multilayered perceptron: Implications for child language acquisition. Connectionist Psychology: A Text with Readings, pages 487-526.
Kim Plunkett, Virginia Marchman, and Steen Ladegaard Knudsen. 1991. From rote learning to system building: acquiring verb morphology in children and connectionist nets. In Connectionist Models, pages 201-219. Elsevier.
Michael Ramscar. 2002. The role of meaning in inflection: Why the past tense does not require a rule. Cognitive Psychology, 45(1):45-94.
Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, and Jackie Chi Kit Cheung. 2020. On the systematicity of probing contextualized word representations: The case of hypernymy in BERT. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, pages 88-102.
David E. Rumelhart and James L. McClelland. 1986. On learning the past tenses of English verbs.
Mark S. Seidenberg and Laura M. Gonnerman. 2000. Explaining derivational morphology as the convergence of codes. Trends in Cognitive Sciences, 4(9):353-361.
Niels A. Taatgen and John R. Anderson. 2002. Why do children learn to say "broke"? A model of learning the past tense without feedback. Cognition, 86(2):123-155.
Adam Wiemerslage, Shiran Dudy, and Katharina Kann. 2022. A comprehensive comparison of neural networks as cognitive models of inflection. arXiv preprint arXiv:2210.12321.
| Parameter | Value |
| --- | --- |
| seed | 123 |
| feat_vec_size | 300 |
| feat_merge | concat |
| rnn_type | LSTM |
| encoder_type | brnn |
| encoder_layers | 2 |
| encoder_rnn_size | 100 |
| decoder_type | rnn |
| decoder_layers | 2 |
| decoder_rnn_size | 100 |
| dropout | 0.3 |
| learning_rate_decay | 1.0 |
| learning_rate | 1.0 |
| batch_size | 20 |
| train_steps | (training sample size / batch size) * number of epochs |
| beam_size | 12 |
| optim | adadelta |
| verbose | True |
| tensorboard | True |
| tensorboard_log_dir | logs |
| report_every | steps / 100 |
| log_file | directory of the log file |
| log_file_level | 20 |
Appendix A: the table above displays the hyperparameter settings used in both the replication and the extension experiments.
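The train_steps entry in the table is given as a formula rather than a fixed number. Under OpenNMT-style step-based training, it can be derived from the sample count as sketched below; this is our reading of the table, not code from the paper, and the 100-epoch figure in the example is hypothetical.

```python
def train_steps(num_training_samples: int, batch_size: int, num_epochs: int) -> int:
    """One optimizer step per batch, so steps = batches per epoch * epochs,
    matching the table's (training sample size / batch size) * epochs formula."""
    batches_per_epoch = num_training_samples // batch_size
    return batches_per_epoch * num_epochs

# e.g. the 4,039-verb K&C English set, batch size 20, hypothetical 100 epochs:
print(train_steps(4039, 20, 100))  # 20100
```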
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_tex/Initial_manuscript.tex
§ SLAAPTE OR SLIEP? EXTENDING NEURAL-NETWORK SIMULATIONS OF ENGLISH PAST TENSE LEARNING TO DUTCH AND GERMAN
§ ABSTRACT
This work studies the plausibility of sequence-to-sequence neural networks as models of morphological acquisition by humans. We replicate the findings of Kirov and Cotterell (2018) on the well-known challenge of the English past tense and examine their generalizability to two related but morphologically richer languages, namely Dutch and German. Using a new dataset of English/Dutch/German (ir)regular verb forms, we show that the major findings of Kirov and Cotterell (2018) hold for all three languages, including the observation of over-regularization errors and micro U-shape learning trajectories. At the same time, we observe troublesome cases of non human-like errors similar to those reported by recent follow-up studies with different languages or neural architectures. Finally, we study the possibility of switching to orthographic input in the absence of pronunciation information and show this can have a non-negligible impact on the simulation results, with possibly misleading findings.
§ 1 INTRODUCTION
The plausibility of neural network-based or connectionist models in simulating psycholinguistic behaviours has been attracting considerable attention since Rumelhart and McClelland (1986) first modeled past-tense acquisition with an early example of a sequence-to-sequence network. Their experiment received harsh criticism (e.g., Pinker and Prince, 1988) but also inspired cognitive scientists with alternatives (e.g., Kirov and Cotterell, 2018; Plunkett and Juola, 1999; Taatgen and Anderson, 2002). Much more recently, Kirov and Cotterell (2018) replicated Rumelhart and McClelland (1986)'s simulations using a modern encoder-decoder neural architecture developed for the task of morphological paradigm completion. Their improved results resolved much of the original criticism by Pinker and Prince (1988).
The main purpose of this paper is to study the generalizability of Kirov and Cotterell (2018)'s findings beyond the case of English. Specifically, we consider two languages that are genetically related to English, but morphologically richer, namely Dutch and German. In these languages too, past tense inflection is divided into regular and irregular verbs, but with different proportions and different inflectional patterns than in English. Moreover, German and Dutch are characterized by a much more transparent orthography than English (Van den Bosch et al., 1994; Marjou, 2021), which allows us to study the usability of grapheme-based input for simulating past tense acquisition patterns when pronunciation information may not be available. Concretely, we aim to answer the following research questions:
1. Can the model applied by Kirov and Cotterell (2018) to English also simulate the past tense acquisition process in languages with more complex morphological inflection, such as Dutch and German?

2. Given the more predictable grapheme-to-phoneme correspondence, i.e., orthographic transparency (Marjou, 2021), in these two languages, will the model perform similarly if the written forms of verbs are used for training instead of the phonetic ones?
To answer these two questions, we build and release a new past-tense inflection dataset of English, Dutch, and German, covering both grapheme and phoneme features (Section 3). ${}^{1}$ We then replicate the single-task learning experiments of Kirov and Cotterell (2018) (Section 4) and extend them to our multilingual dataset, using both phoneme- and grapheme-based input for comparison (Section 5).

${}^{1}$ All code and data are available at https://anonynmous
Our findings reconfirm the potential and limitations of using neural networks for the simulation of human language learning patterns. Our model shows human-like behavior in learning the past tense of verbs, such as the micro U-shape coined by Plunkett et al. (1991) and over-regularization errors in all the examined languages; however, non human-like errors are also reported. We also find that learning irregular past tense forms is considerably easier in Dutch and German than in English. Finally, we observe that higher orthographic transparency indeed leads to more consistent learning results when a model is trained with grapheme vs. phoneme input.
§ 2 BACKGROUND
Past tense debate The acquisition of verbal past tense in English, particularly the over-regularization of irregular verbs in the process of learning (Marcus et al., 1992), has served as a testing ground for different hypotheses in language modelling for decades. A much debated question is whether the past tense of (ir)regular verbs is learnt by rules and memory (e.g., Plaut and Gonnerman, 2000; Seidenberg and Gonnerman, 2000; Marcus et al., 1995; Albright and Hayes, 2003; Pinker and Ullman, 2002), by analogy (e.g., Ramscar, 2002; Albright and Hayes, 2003), or by a dual mechanism (Pinker and Prince, 1988; Taatgen and Anderson, 2002).
Marcus et al. (1995) posited the necessity of mental rules in learning German irregular verbs. By contrast, Ernestus and Baayen's (2004) and Hahn and Nakisa's (2000) studies on Dutch and German respectively provided evidence in favour of connectionist and analogical approaches: they showed that humans tend to choose wrong past tense suffixes for regular verbs whose phonological structure is similar to that of irregular ones.
Recent connectionist revival The recent development of deep learning methods in computational linguistics has led to a renewed interest in connectionist approaches to modelling language acquisition and processing by humans (e.g., Blything et al., 2018; Kádár et al., 2017; Pater, 2019; Corkery et al., 2019; McCurdy et al., 2020). Last year, modelling morphological acquisition trajectories was adopted as one of the shared tasks of SIGMORPHON-UniMorph (Kodner and Khalifa, 2022). The three submitted neural systems (Pimentel et al., 2021; Kakolu Ramarao et al., 2022; Elsner and Court, 2022) exhibited over-regularization and developmental regression, but non-human-like behaviours were also observed.
Some recent studies have revealed a poor alignment between the way humans and neural encoder-decoder models generalize to new words (wug test) in the case of English verb past tense (Corkery et al., 2019) and German plural nouns (McCurdy et al., 2020). Dankers et al. (2021) observed cognitively plausible representations in a recurrent neural network (RNN) trained to inflect German plural nouns, but also found evidence of problematic 'shortcut' learning. Wiemerslage et al. (2022) observed that Transformers resemble humans in learning the morphological inflection of English and German in wug tests, but they also pointed out the model's divergence in German production. However, computational simulations have succeeded in replicating the U-shaped learning curve during the acquisition of the past tense (Kirov and Cotterell, 2018; Plunkett and Marchman, 2020). Additionally, further probing experiments have suggested that neural models do learn linguistic representations (Goodwin et al., 2020; Hupkes et al., 2018; Ravichander et al., 2020). Our research continues to explore the cognitive plausibility of neural networks in modelling language inflection learning.
Recurrent encoder-decoder inflection model In this work, we adopt the model of Kirov and Cotterell (2018), henceforth referred to as K&C. This model is based on the encoder-decoder architecture proposed by Bahdanau et al. (2014), with input representation and hyper-parameters taken from Kann and Schütze (2016). The architecture consists of a bidirectional LSTM (BiLSTM) encoder augmented with an attention mechanism and a unidirectional LSTM decoder. The encoder maps each phonetic (or orthographic) symbol of the input string to a unique embedding and then processes that embedding to obtain a context-sensitive representation of the symbol. The decoder reads the context vector from the final cell of the encoder and generates the output phoneme/grapheme sequence; the network is trained with two hidden layers. For more details on the model, see Bahdanau et al. (2014); Kann and Schütze (2016); Kirov and Cotterell (2018).
§ 3 DATASETS
To replicate the results published by K&C, we employ their dataset based on CELEX (Baayen et al., 1993). ${}^{2}$ To extend the experiments to Dutch and German and compare the results to English, we build a new dataset containing past tense forms in all three languages.
§ 3.1 K&C ENGLISH DATASET
K&C's CELEX-based dataset contains 4,039 English verb types: 3,871 regular verbs and 168 irregular verbs. Each verb is associated with an infinitive form and a past tense form, both in the International Phonetic Alphabet (IPA). Moreover, each verb is marked as regular or irregular (Albright and Hayes, 2003).
Note that there are label errors in their dataset. For example, dive-dived, dream-dreamed and light-lighted are marked as irregular. This is possibly because those verbs have two past tense forms and the other form does not follow the regular inflection (dive-dove, dream-dreamt, light-lit). However, as the past tenses of those verbs in the original dataset align with the regular inflection rule of English, we take those verbs as regular and manually correct their labels.
§ 3.2 MULTILINGUAL UNIMORPH-BASED DATASET
We use the morphological annotation dataset Unimorph (McCarthy et al., 2020) as the source of English, Dutch, and German word forms to enable a fair comparison in our multilingual experiments. In this lexicon, each entry consists of the infinitive of the verb, the conjugation, and a tag containing the part-of-speech and inflectional information. An important adjustment has to be made here because English has only two forms for the present tense (one for the third person singular, one for I/you/we/they) and only one for the past. By contrast, Dutch and German distinguish more persons in both present and past tense. To address this, we include for each lemma the first/second/third singular present forms and the plural form together with their respective past forms, each as a separate entry (see examples in Figure 1).
(a) English

| present (g) | past (g) | present (p) | past (p) | type |
|---|---|---|---|---|
| accounts | accounted | @k6nts | @k6ntId | reg |
| account | accounted | @k6nt | @k6ntId | reg |
| feels | felt | filz | fElt | irreg |
| feel | felt | fil | fElt | irreg |

(b) Dutch

| present (g) | past (g) | present (p) | past (p) | type |
|---|---|---|---|---|
| slaap | sliep | slap | slip | irreg |
| slaapt | sliep | slapt | slip | irreg |
| slapen | sliepen | slap@ | slip@ | irreg |
| behoef | behoefde | b@huf | b@huvd@ | reg |
| behoeft | behoefde | b@huft | b@huvd@ | reg |
| behoeven | behoefden | b@huv@ | b@huvd@ | reg |

(c) German

| present (g) | past (g) | present (p) | past (p) | type |
|---|---|---|---|---|
| berechne | berechnete | b@rExn@ | b@rExn@t@ | reg |
| berechnest | berechnetest | b@rExn@st | b@rExn@t@st | reg |
| berechnet | berechnete | b@rExn@t | b@rExn@t@ | reg |
| berechnen | berechneten | b@rExn@n | b@rExn@t@n | reg |
| fliehe | floh | flia | flo | irreg |
| fliehst | flohst | flist | flost | irreg |
| flieht | floh | flit | flo | irreg |
| fliehen | flohen | flian | flo@n | irreg |

Figure 1: Excerpt of the newly introduced dataset of English, Dutch and German past tense, with orthographic (g) and phonetic (p) present and past forms and the regularity label. Dutch verbs: slapen (to sleep); behoeven (to need). German: berechnen (to calculate); fliehen (to flee).
Specifically, we start by extracting from Unimorph a list of verb lemmas and their corresponding present and past tense forms. A different extraction script is used for each language because of the different number of forms and slightly different POS tags:

* English only has two present tense forms: one for the third person singular and one for the rest. Mostly, there is only one past tense form.

* Most verbs in Dutch have three present tense forms and two past tense forms.

* Most verbs in German have five present tense forms and four past tense forms.

Next, we tag each form as regular or irregular based on a simple rule-based strategy:
* English: if the past tense ends with '-ed', it is considered a regular verb.

* Dutch: if the singular past tense ends with '-de' or '-te', it is considered regular.

* German: if the first or third person singular past tense ends with '-te', it is considered regular.
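The three tagging rules can be sketched as follows. This is a minimal illustration (function names are ours, not part of the released code), incorporating the English '-ed' exceptions discussed in Section 3.3:

```python
# Hedged sketch of the rule-based regular/irregular tagging described above.
# Suffixes and exception list follow the paper; names are our own.

# Irregular English past tenses that nevertheless end in '-ed' (Section 3.3).
EN_EXCEPTIONS = {"bled", "bred", "led", "misled", "fled",
                 "fed", "breast-fed", "force-fed", "bottle-fed"}

def tag_english(past):
    """Regular iff the past tense ends in '-ed' and is not a known exception."""
    if past in EN_EXCEPTIONS:
        return "irreg"
    return "reg" if past.endswith("ed") else "irreg"

def tag_dutch(past_singular):
    """Regular iff the singular past tense ends in '-de' or '-te'."""
    return "reg" if past_singular.endswith(("de", "te")) else "irreg"

def tag_german(past_1sg_or_3sg):
    """Regular iff the 1st/3rd person singular past tense ends in '-te'."""
    return "reg" if past_1sg_or_3sg.endswith("te") else "irreg"
```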
${}^{2}$ Dataset, code and other experimental details are taken from https://github.com/ckirov/RevisitPinkerAndPrince
| Language | Type | train (count / %) | dev (count / %) | test (count / %) | total (count / %) |
|---|---|---|---|---|---|
| English | all | 4,879 / 79.9 | 611 / 10.0 | 614 / 10.1 | 6,104 / 100.0 |
| | regular | 4,601 / 75.4 | 529 / 8.7 | 520 / 8.5 | 5,650 / 92.6 |
| | irregular | 278 / 4.6 | 82 / 1.3 | 94 / 1.5 | 454 / 7.4 |
| Dutch | all | 4,896 / 80.1 | 612 / 10.0 | 607 / 9.9 | 6,115 / 100.0 |
| | regular | 4,383 / 71.7 | 550 / 9.0 | 542 / 8.9 | 5,475 / 89.6 |
| | irregular | 513 / 8.4 | 62 / 1.0 | 65 / 1.0 | 640 / 10.4 |
| German | all | 4,865 / 79.7 | 616 / 10.1 | 620 / 10.2 | 6,101 / 100.0 |
| | regular | 4,299 / 70.5 | 535 / 8.8 | 578 / 9.5 | 5,412 / 88.8 |
| | irregular | 566 / 9.2 | 81 / 1.3 | 42 / 0.7 | 689 / 11.2 |

Table 1: Dataset distributed into train, dev and test sets in each of the three languages, with the number of regular and irregular verbs. Percentages are calculated over the total number of verbs per language.
Finally, the IPA transcriptions of all word forms are retrieved from CELEX for all languages and added to the final dataset. As shown in Figure 1, the resulting dataset is in the same format as K&C's CELEX-based dataset.
Data selection The generated Dutch data only contains 6,106 verb forms, versus 11,489 and 6,975 in English and German respectively. Therefore, to enable a fair comparison among languages, we need to downsample the larger datasets. However, randomly choosing 6K verb forms from the English and German lists may lead to a poor selection given the long tail of infrequent words. As a solution, we use the word form frequencies provided in the CELEX data and choose all words with a frequency of more than 1 per million, complemented with a random selection of less frequent words in order to obtain approximately 6,106 verb forms.
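The frequency-based selection can be sketched as follows, assuming a mapping from word forms to their CELEX frequency per million tokens (a minimal illustration; the function name and data layout are our assumptions, not the released code):

```python
import random

def downsample(forms, target_size, seed=0):
    """Keep every form with frequency > 1 per million; pad with a random
    selection of rarer forms until roughly target_size forms are chosen."""
    frequent = [w for w, freq in forms.items() if freq > 1]
    rare = [w for w, freq in forms.items() if freq <= 1]
    random.Random(seed).shuffle(rare)  # reproducible random padding
    return frequent + rare[:max(0, target_size - len(frequent))]
```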
After shuffling, the word forms are split into a train set (80%), a development (dev) set (10%) and a test set (10%). The distribution of the data over the three sets and over regular/irregular verbs for each language is reported in Table 1.
§ 3.3 NOTABLE PROBLEMS
A few problems occurred during data preparation. First, rule-based tagging of lemmas is not as trivial as it seems at first sight. For example, in English, not all past tenses ending with '-ed' are regular. Using the data of K&C, we added a few exceptions, all irregular verbs whose past tense ends with '-ed': bled, bred, led, misled, fled, and forms of fed (including breast-fed, force-fed and bottle-fed).
Also, in the original K&C experiment, the model should predict the past tense based on what it learned from other verbs, not from other forms of the same verb. In morphologically richer languages, a lemma has more word forms and data splitting becomes problematic. For instance, a model might have learned that work $\rightarrow$ worked and walks $\rightarrow$ walked, and then predict that works $\rightarrow$ worked. In such a case, it is not possible to know whether the model made the right prediction based on similarities to other lemmas (walks) or to other forms of the same verb (work). To be as comparable as possible to the original setup of K&C, we put all forms of the same verb in the same data split (that is, either train, dev or test). As a result, if the model scores well, we know for sure that it cannot have made predictions based on other forms of the same verb.
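A minimal sketch of such a lemma-grouped 80/10/10 split (the entry layout and function name are our assumptions, not the released code):

```python
import random
from collections import defaultdict

def split_by_lemma(entries, seed=123):
    """Split entries 80/10/10 so that all forms of a verb land in the same
    split; the model can then never see another form of a test verb in
    training. Each entry is (lemma, present, past, tag)."""
    by_lemma = defaultdict(list)
    for e in entries:
        by_lemma[e[0]].append(e)
    lemmas = sorted(by_lemma)
    random.Random(seed).shuffle(lemmas)          # shuffle lemmas, not forms
    n = len(lemmas)
    cut1, cut2 = int(0.8 * n), int(0.9 * n)
    groups = {"train": lemmas[:cut1], "dev": lemmas[cut1:cut2], "test": lemmas[cut2:]}
    return {name: [e for l in ls for e in by_lemma[l]] for name, ls in groups.items()}
```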
Another issue is that one present tense form normally corresponds to one past tense form. However, German poses two notable exceptions to this:
* The second person singular verb form ends with '-st' and the third person singular ends with '-t'. These forms coincide if a verb stem already ends in 's', but there is still a difference between the forms in the past tense. For example, bremst is the present conjugation of the verb bremsen (to brake) for the pronouns du (you), er (he) and even ihr (you, plural).
* Verbs ending in '-t' can be the third person singular or the second person plural informal. For example, wundert is the present conjugation of the verb wundern (to wonder) for the pronouns ihr (you, plural) and er (he).
In the former case, the model should be able to output multiple solutions, since only context can make clear whether it is the second person or the third person. However, this complicates the evaluation. As a solution, we exclude the third person form if it collides with the second person. As for the latter issue, we choose to remove all second person plural informal forms, since those are far less frequent than the third person singular forms.
§ 4 REPLICATION OF K&C
Before moving to the main multilingual experiments, we replicate the original K&C experiments (single-task only).
§ 4.1 EXPERIMENTAL SETUP
For the replication, we employ K&C's CELEX-based dataset and keep the model architecture and hyper-parameters unchanged, using OpenNMT (Klein et al., 2017).${}^{3}$ See Appendix A for more details. Following K&C, the model is trained on the IPA transcriptions.
We use word form-level accuracy to evaluate model performance. An important remark concerns data splitting: K&C did not release their specific data split, which makes it impossible to replicate the exact same results. We therefore create our own splits following K&C's proportions (80/10/10% for train/dev/test). To obtain more reliable results, we train the model three times with different random seeds for initialization and report the averaged accuracies.
To study the micro U-shape learning curve of irregular verbs, we save the model every 10 epochs and use these partially trained models to predict the test set, comparing their prediction results.
§ 4.2 RESULTS
As shown in Table 2, the results on the training set are almost the same as reported in the original paper, which means our replication is largely successful.${}^{4}$ We note that the accuracy for irregular verbs in the dev and test sets is considerably different from that of K&C (dev: 21.1% vs. 53.3%; test: 35.3% vs. 28.6%). Since K&C did not release their specific data split, replicating their exact results on the small portion of irregular verbs is not possible. Given that our results are averaged over three random seeds and computed on all three splits, we consider them more reliable, which suggests the model might perform worse at learning the past tense of irregular verbs than K&C reported.
| | all (train / dev / test) | regular (train / dev / test) | irregular (train / dev / test) |
|---|---|---|---|
| K&C | 99.8 / 97.4 / 95.1 | 99.9 / 99.2 / 98.9 | 97.6 / 53.3 / 28.6 |
| Ours | 99.9 / 95.3 / 96.5 | 99.9 / 98.4 / 99.2 | 98.4 / 21.1 / 35.3 |

Table 2: Mean accuracy (%) of our replication of K&C over 3 random seeds.
§ 4.3 DISCUSSION
We assume the gap between our results and K&C's has two causes: (i) the number of irregular verbs is much lower than that of regular ones, so the accuracy changes dramatically even if only a few more or fewer verbs are predicted correctly than in the original experiments; (ii) we corrected the label errors mentioned above, so the number of irregular verbs became smaller than before. This small difference could have a large impact on the accuracy calculation, given that these two sets only contain about 20 irregular verbs each. To test this hypothesis, we conduct 9-fold cross-validation${}^{5}$ and find that the accuracy for irregular verbs varies widely across dev splits, ranging between 9% and 42%.
§ 5 MULTILINGUAL EXPERIMENTS
This section presents the results of our main experiments, aimed at comparing Dutch and German past tense learning patterns to the English ones. It also presents the results of grapheme vs. phoneme sequence learning in all three languages. Because Dutch and German pronunciation is more predictable than English pronunciation, we expect the difference between grapheme and phoneme learning to be smaller in these languages.
${}^{3}$ However, as the epoch option has been deprecated in the latest version of OpenNMT, we converted it to train_steps based on the relationship between epochs and steps.
${}^{4}$ Our results are also very close to those of Corkery et al. (2019), who did a similar replication and reported the averaged accuracy over ten runs initialized with different random seeds, but only on the training set.
${}^{5}$ We keep the test set unchanged and cross-validate across the train and dev sets. To make sure the dev set has a comparable number of verbs to the original one, we adopt 9-fold instead of 10-fold cross-validation.
(a) Phoneme input

| | all (train / dev / test) | regular (train / dev / test) | irregular (train / dev / test) |
|---|---|---|---|
| EN | 99.5 / 93.1 / 92.1 | 99.8 / 96.1 / 95.0 | 98.1 / 27.8 / 40.5 |
| NL | 98.9 / 88.4 / 88.4 | 99.2 / 91.4 / 92.2 | 96.5 / 62.4 / 57.9 |
| DE | 98.9 / 85.0 / 92.5 | 99.4 / 92.0 / 95.1 | 96.7 / 38.7 / 57.9 |

(b) Grapheme input

| | all (train / dev / test) | regular (train / dev / test) | irregular (train / dev / test) |
|---|---|---|---|
| EN | 99.1 / 93.6 / 93.8 | 99.8 / 98.2 / 98.1 | 89.0 / 11.1 / 28.1 |
| NL | 99.4 / 88.0 / 89.6 | 99.8 / 91.2 / 93.0 | 97.9 / 58.6 / 61.0 |
| DE | 98.4 / 86.4 / 93.6 | 99.1 / 93.5 / 95.7 | 93.9 / 39.5 / 65.9 |

Table 3: Past tense inflection accuracy (%) in English, Dutch, and German; all averaged over 3 random seeds.
| epoch | hits (phon. / orth.) | bestijgt "mounts" (phon. / orth.) | gilt "applies" (phon. / orth.) |
|---|---|---|---|
| 10 | hItId / hitted | b@stKGd@ / besteeg | gIlt@ / galte |
| 20 | hItst / hit | b@stex / besteeg | gIlt@ / galt |
| 30 | hItId / hitted | b@stKGd@ / besteeg | g< / galt |
| 40 | hItId / hitted | b@stKGd@ / besteeg | g< / galt |
| 50 | hIt / hitted | b@stKGd@ / besteeg | g< / galt |
| 60 | hItst / hit | b@stex / besteeg | gIIt@ / gilte |
| 70 | hIt / hit | b@stex / bestijgde | g< / galt |
| 80 | hItId / hitted | b@stex / besteeg | g< / galt |
| 90 | hItId / hitted | b@stex / besteeg | g< / galt |
| 100 | hIt / hit | b@stex / besteeg | g< / galt |

Table 4: The oscillating development (micro U-shape) of single verbs in three languages: with phoneme or grapheme input, the predicted past phonetic (left) or orthographic (right) forms change as training proceeds, but the final predictions are correct at the last epoch.
For comparability, all experiments in this section use the newly introduced Unimorph-based dataset, which includes a similar amount of training forms in all languages (cf. Table 1). The model architecture and the hyper-parameter settings are the same as in the previous experiments. We also run each experiment three times with different random seeds and report the averaged results.
Result overview For the forms seen in training, the model is able to learn both regular and irregular past tense inflection with more than 95% accuracy (Table 3a), and with similar learning curves (Figure 2), which confirms and strengthens the main findings of K&C on two other languages.
Comparing Table 3a to 3b, we find that the overall trends are maintained when the model is trained on graphemes instead of phonemes (the original setup of K&C). However, a notable exception is observed: grapheme learning results in a much lower accuracy on English irregular verbs.
In the following sections, we discuss these results in more detail.
§ 5.1 PAST TENSE LEARNING RESULTS IN ENGLISH, DUTCH, AND GERMAN
Accuracy Looking closer at the results across languages (Table 3a), we notice that inflecting unseen Dutch regular verbs is slightly harder than in German and English. This might be explained by the fact that in Dutch all voiced consonants become unvoiced at the end of a word; to predict whether the past tense suffix is '-de' (after voiced consonants) or '-te' (after unvoiced consonants), we still need the final consonant of the stem, which can be found in the lemma and, most of the time, in the spelling of the word form. Unfortunately, this information is absent from the pronunciation. For example, in the pair lAnt-lAnd@, one cannot know whether the past tense should be lAnd@ or lAnt@ before seeing the orthographic form land. We find that such errors account for about 50% (18/38) of all Dutch regular verb errors. This difference in voiced/unvoiced regular past tense endings only occurs in Dutch.
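The alternation implied here is governed by the well-known Dutch spelling rule ('t kofschip), which we sketch for illustration only (this is not part of the paper's pipeline, and the function name is ours): the regular past takes '-te' when the underlying stem ends in an unvoiced consonant, '-de' otherwise. Crucially, the rule needs the orthographic stem of the infinitive, which the devoiced phonetic form alone does not reveal.

```python
# Hedged sketch of the Dutch "'t kofschip" rule for regular past tense:
# stems ending in an unvoiced consonant (t, k, f, s, ch, p) take '-te',
# all others take '-de'. Operates on the infinitive's orthographic stem.
UNVOICED_ENDINGS = ("t", "k", "f", "s", "ch", "p")

def dutch_past_suffix(infinitive):
    # Strip the infinitive ending '-en' to recover the underlying stem,
    # e.g. behoeven -> behoev (whose final 'v' is voiced, hence '-de').
    stem = infinitive[:-2] if infinitive.endswith("en") else infinitive
    return "te" if stem.endswith(UNVOICED_ENDINGS) else "de"
```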
As for irregular verbs, we find a large difference across languages in the ability to generalize to new forms. Especially in English, while the model has
Figure 2: Learning curves of the model on the German, English, and Dutch training set (with random seed 123).
almost perfectly learned to inflect seen verbs, it has a hard time predicting the form of new irregular verbs (dev: 27.8%, test: 40.5%). This effect is smaller in Dutch and German, suggesting that the irregular inflection patterns in these languages are more predictable. Surprisingly, the model made more mistakes when predicting the inflections of the irregular verbs in the German dev set than in the test set (dev: 38.7%, test: 57.9%). By inspecting the mistakes, we found that the model incorrectly treated many irregular verbs as regular ones because of their resemblance to regular verbs (high character overlap). For instance, reitest-*reitetest/rittest (ride) is influenced by the regular conjugation of bereitest-bereitetest (prepare). We found that 23/81 irregular verbs in the dev set are very similar to regular verbs in the training set. Of these, 8 irregular verbs are identical to regular ones except for a prefix (e.g., reitet (rides) vs. bereitet (prepares) and reitest (ride) vs. verbreitest (spread)), which could be highly confusing for a model that is based only on form, regardless of meaning. By contrast, no such overlap is found between the irregular verbs in the test set and the regular ones in the training set. This distributional discrepancy might explain the lower accuracy in the dev set. It echoes our other finding, discussed in the next section, that irregular verbs might be misled by regular verbs if they share representational similarity.
Errors and learning trajectories Going beyond overall accuracy, we inspect the learning trajectories of individual verbs in our dataset. We find that human-like overregularization patterns similar to those observed by K&C in English also occur in Dutch and German. For example, in Dutch, after 40 epochs of training, the model changes verscheent to verscheen as the past tense of verschijnt (appears). However, after 50 epochs, the model again generates the wrong form verscheent. After 70 epochs, the correct result is obtained again. Similar patterns are observed for sink in English and streitet (argues) in German. All wrongly predicted irregular verbs are caused by over-regularization. In other words, no patterns like ated in English or lookte in Dutch are found, which is consistent with humans' learning behaviour (Pinker and Prince, 1988). More examples from English, Dutch and German are listed in Table 4.
Additionally, we find cases where the model generates an irregular form for a regular verb because of resemblance to other (irregular) verbs. In Dutch, for example, the regular verb versier-versierde (decorate-decorated) gets incorrectly inflected as *versoor by resemblance to verbs like verlies-verloor (lose-lost). Similar errors also occur in German. For instance, the wrong prediction verfehle-*verfahl/verfehlte (miss-missed) might be misled by the pair befehlen-befahlen (order-ordered), and schweben-*schwoben/schwebten (float-floated) is possibly due to its resemblance to schieben-schoben (push-pushed). Interestingly, this type of error aligns with Ernestus and Baayen's (2004) experiments with Dutch speakers: phonological similarity, rather than rule-based regularity, influences participants' judgments of verb inflection.
That said, the model also displays error patterns that are not human-like, such as copying the present form or randomly removing phonemes (or letters) from it. Similar cases of non-plausible predictions were also observed at the SIGMORPHON Shared Task (Kodner and Khalifa, 2022), for instance forgive-*forgaved/forgave or seek-*sougk/sought. As also observed by Wiemerslage et al. (2022), this kind of model prediction contrasts with the behaviour of human speakers, who mostly resort to generating a regular past tense when a verb is unknown.
§ 5.2 PHONEME VS. GRAPHEME INPUT
Undoubtedly, using phoneme input is more principled than grapheme input when simulating human acquisition patterns. However, pronunciation information is not always available, which makes it harder to extend this kind of simulation beyond a small set of widely studied languages. Here, we investigate the usability of grapheme-based input for modelling past tense inflection. We expect German and Dutch to be a good use case for this, given their more transparent orthography compared to English (Marjou, 2021).
The results in Table 3 clearly show that switching to grapheme input for the English simulations is not principled: it results in a slight increase of regular inflection accuracy (from 99.8/96.1/95.0% to 99.8/98.2/98.1% train/dev/test) but a large decrease of irregular inflection accuracy (from 98.1/27.8/40.5% to 89.0/11.1/28.1%). The latter effect is particularly marked, suggesting that non-transparent orthography may not be a uniform property of a language but may correlate with less regular word forms within a language. We leave this investigation to future work.
Using grapheme input in Dutch and German seems much safer (differences are overall small, with a slight increase in almost all cases). Our observations seem to reflect the figures of Marjou (2021), who gives a much higher transparency score to Dutch and German than to English.
In sum, using graphemes to simulate human patterns of morphological acquisition is possible but should be done with caution and only in some languages. A good practice could be to first verify that the orthographic transparency of a language is high (Marjou (2021) presents results for 17 languages). When that is not possible, grapheme-based results should at least be validated against a small-scale pronunciation dataset.
§ 6 CONCLUSIONS
In this work, we study the plausibility of using sequence-to-sequence neural networks for simulating human patterns of past tense acquisition. More specifically, we replicate findings by Kirov and Cotterell (2018) and examine their generalizability beyond the specific case of English, using a new dataset of English/Dutch/German (ir)regular verb forms based on Unimorph (McCarthy et al., 2020).
We show that the main findings of K&C also largely hold for Dutch and German, including over-regularization errors and the oscillating (or micro U-shape) learning trajectory of individual verb forms across training epochs. At the same time, we also observe cases of non-human-like errors, for instance when the model keeps the present form unchanged or randomly removes phonemes from it. A notable difference among our studied languages concerns unseen English irregular verbs, which appeared to be much harder to inflect than the Dutch and German ones. We also observe that the orthographic transparency of a language influences, and possibly confounds, the model's learning performance: a more transparent orthography contributes to more reliable and consistent simulation results, but in general this aspect should be seriously considered when setting up new benchmarks of morphological acquisition.
Future work could include the construction of a nonce word benchmark in Dutch and German to enable a multilingual evaluation of this task (Corkery et al., 2019), as well as an in-depth investigation of the different levels of irregular past inflection difficulty in our three languages.
Kirov and Cotterell (2018) provided very promising evidence for the use of modern neural networks to model human language acquisition patterns. Our work confirms the potential of this research direction, but also raises important issues and joins recent follow-up studies (Corkery et al., 2019; Dankers et al., 2021; Kodner and Khalifa, 2022; Wiemerslage et al., 2022) that have warned against over-optimistic conclusions.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_md/Initial_manuscript.md
# Parser Evaluation for Analyzing Swedish 19th-20th Century Literature
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract
In this study, we aim to find a parser for accurately identifying different types of subordinate clauses, and related phenomena, in 19th-20th-century Swedish literature. Since no test set is available for parsing data from this time period, we propose a lightweight annotation scheme for annotating a single relation of interest per sentence. We train a variety of parsers for Swedish and compare evaluations on standard modern test sets and our targeted test set. We find clear trends in which parser types perform best on the standard test set, but performance is considerably more varied on the targeted test set. We believe that our proposed annotation scheme can be useful for complementing standard evaluations, with a low annotation effort.
## 1 Introduction
Dependency parsers can be useful tools for analyzing large text materials, and as such can enable large-scale studies within many scientific disciplines. Modern parsers can achieve very high scores on standard test sets, at least for languages with large treebanks, but these test sets are often limited to only a few domains, typically publication-level modern language such as news or Wikipedia. For more challenging text types, for instance noisy data like Twitter or historical texts, parsers typically perform considerably worse, even for high-resource languages.

Parsers are typically evaluated on a treebank that is split into training, development, and test sets. This can overestimate parser performance, since parsers are then trained on data that matches the test set in all relevant aspects, such as genre, time period, and annotation style. Furthermore, parser evaluation is typically done using metrics that give a holistic score for the full tree, such as (un)labeled attachment score. In many real-world scenarios, such as ours, we are not interested in the full tree, but in a subset of relations.

This study is part of a larger project with the overall aim to identify and explore language change in Swedish literature during the period 1800-1930. During the 19th century, the Swedish language changed in several respects. These changes span various linguistic levels and also involve lexical aspects. Overall, the changes led to a smaller difference between spoken and written Swedish, since the written language moved closer to the spoken vernacular. The goal of the project is to cover morphological, syntactic, and lexical changes; in this paper, however, we focus only on syntactic aspects. The changes in the 19th century resulted in a less complex language, not least as far as subordinate clauses and related phenomena are concerned. To enable large-scale analysis of subordinate clauses, we require a high-quality parser for our target domain, Swedish literary novels and short stories from 1800-1930. In this paper, we explore whether parsers can be evaluated for this domain without requiring a large manual annotation effort.

To evaluate a parser for a new text type and task, in our case 19th-century literature with a focus mainly on subordinate clauses, we would ideally like to have an annotated treebank for the target text type. However, this is a human annotation task that is time-consuming, and thus costly, and that requires an expert on dependency grammar. For many practical projects, this is not feasible. We propose a lightweight annotation task for our target task, which consists of annotating only one type of phenomenon per sentence, constituting a targeted test set. We then explore whether this could be an efficient alternative to annotating full trees. We focus on four phenomena related to subordinate clauses, and annotate a small targeted test set for our target text type, which will be publicly released. For comparison, we also evaluate on standard Swedish test sets.

We compare several variants of three generations of parsers trained on different subsets of the Universal Dependencies (UD) treebanks (Nivre et al., 2020), and evaluate them on UD, both with holistic metrics and for a subset of relations of interest, as well as on our targeted test set. On the UD test sets we see clear trends: a modern BERT-based parser is better than BiLSTM- and SVM-based parsers, and it is better to train on several North Germanic languages than only on Swedish. However, on our new targeted test set, the results are more mixed, and we see less clear trends, which is in line with earlier work for German (Adelmann et al., 2018). We think that our targeted test set is able to give a complementary view to standard evaluations.

In Section 2 we review related work, followed by a description of our project focused on Swedish language change in Section 3. In Section 4 we describe the data, and in Section 5 we describe the parsers evaluated, including the multilingual training setup. We summarize the results in Section 6, discuss them in Section 7, and finally conclude in Section 8.
## 2 Related Work
Dependency parsers have developed continuously, from 'old school' parsers like MaltParser (Nivre et al., 2007) and MSTParser (McDonald et al., 2005), based on classical machine learning such as support vector machines, to modern neural parsers. Many of the first strong neural parsers were based on recurrent neural networks, as were most of the best parsers in the CoNLL 2017 shared task on dependency parsing (Zeman et al., 2017). Since then, models based on deep contextualized embeddings have taken over, and most strong parsers today are based on fine-tuning contextualized models like BERT (Devlin et al., 2019) or XLM-R (Conneau et al., 2020), e.g. Machamp (van der Goot et al., 2021) and Trankit (Nguyen et al., 2021).
The standard way to evaluate dependency parsers is by calculating holistic metrics such as labeled attachment score (LAS), which measures the percentage of words that get both their head word and label correct. There are, however, examples of more detailed evaluations (e.g. McDonald and Nivre, 2007; Kulmizev et al., 2019; Salomoni, 2017), focusing on aspects such as arc and sentence lengths, non-projective dependencies, and scores for specific POS-tags and dependency relations. The overall conclusion is typically that different parser types have different strengths. As far as we are aware, there are no datasets and evaluations like our proposal, focused on a single relation per sentence.
Highly relevant to our study is the work of Adelmann et al. (2018), who evaluate a set of six parsers for digital humanities research, focusing on novels and academic texts for German. Like us, they are also interested in specific relations, for instance related to speaker attribution, and not only in holistic evaluation. Unlike us, they perform a full dependency tree annotation effort for three sample texts. In addition, they do not include any neural parsers in their evaluation. They find that several parsers do well on the holistic metrics, but that the results are considerably worse for several of the specific relations of interest, such as appositions, and that it is not always the overall strongest parser that is the best choice for a specific relation. Salomoni (2017) performed a detailed evaluation on parsing German 17th-century literature, for which he annotated two excerpts of text with full dependency annotations. Again, no neural parsers were included in the study, which found a drop compared to in-domain results, but where the relative performance of the two parsers evaluated was consistent across different metrics, possibly because of the large difference in performance between them.
Swedish literary texts from different eras have been analyzed for different purposes before, requiring taggers and/or parsers. Dahllöf (2022) aims to characterize differences between dialogue and narrative in contemporary fiction, whereas Stymne et al. (2018) analyze prose rhythm in a novel from 1940. However, in none of these studies is the choice of tagger and/or parser motivated. There have also been some earlier smaller-scale studies focusing on the transition towards a more colloquial written Swedish. For instance, language development in Swedish literature during the 19th century has been explored, but only on a small scale focusing on individual authors (e.g. Lindstedt, 1922; Von Hofsten, 1935).
| Language | Treebank | Genres | Train | Test |
|---|---|---|---|---|
| Swedish | Talbanken | news, nonfiction | 67K | 20K |
| Swedish | PUD | news, wiki | - | 19K |
| Swedish | LinES-M | fiction, nonfiction, spoken | 18K | 73K |
| Norwegian | Bokmaal | blog, news, nonfiction | 244K | 30K |
| Norwegian | Nynorsk | blog, news, nonfiction | 245K | 25K |
| Norwegian | NynorskLIA | spoken | 35K | 10K |
| Danish | DDT | fiction, news, nonfiction, spoken | 80K | 10K |
| Faroese | FarPaHC | bible | 1.5K | 6.6K |
| Icelandic | Modern | news, nonfiction | 7.5K | 10K |
Table 1: Treebanks used, with info about genres (as defined in UD) and number of tokens in test and training data. LinES-M refers to our modified version of LinES.
## 3 Language Change in 19th Century Swedish
This study is part of a larger project with the overall aim to identify and explore language change in Swedish literature during the period 1800-1930. In the history of the Swedish language, this period is characterized by modernization in the sense that the written language was influenced by the spoken vernacular. In this process of modernization, fictional prose is of particular interest, since it has been suggested that linguistic change spread from literary dialogue (Engdahl, 1962; Teleman, 2003). By investigating a corpus of literary texts, the project will not only contribute a more detailed account of language change in 19th-century Swedish but also address the question of how linguistic change increased in the community.

The modernization of the Swedish written language during the 19th century affected several linguistic aspects. As for the lexicon, it is well known that formal function words were replaced by colloquial counterparts. Much attention has also been devoted to the loss of verbal agreement, i.e. the use of the vernacular singular variant in both singular and plural. On the syntactic level, Engdahl (1962) has shown a remarkable change in sentence length during the end of the 19th century. Engdahl's study focuses on non-fictional prose, periodicals from 1878 to 1950, but his results call for a more detailed account of syntactic complexity during the period, and hence we will focus on subordinate clauses and phenomena related to them in this paper.

For this study, we have chosen to focus on three types of subordinate clauses, based on UD dependency labels, and one phenomenon related to subordinate clauses: (i) relative clauses (RELCL), (ii) cleft constructions (CLEFT),${}^{1}$ (iii) clausal complements not determined by obligatory control (CCOMP), and (iv) auxiliary drop (NO-AUX). Whereas the first three types can be used to measure syntactic complexity, auxiliary drop has been suggested to mark written style, and hence almost never occurs in spoken language (cf. Wellander, 1939). Since auxiliary drop of finite verbs is restricted to subordinate clauses in Swedish, we have included it as related to subordinate clauses. In this study, we only include auxiliary drop that occurs in clausal complements (CCOMP).
## 4 Data
In this section, we describe the data used. We first describe the data from UD, including the modified version of the LinES treebank, and then describe the targeted dataset we constructed for this project.
### 4.1 Universal Dependencies Treebanks
We use data from Universal Dependencies (Nivre et al., 2020), version 2.11 (Zeman et al., 2022), for training our parsers and for the standard evaluation. Besides dependency annotations, UD also contains lemmas, universal and language-specific part-of-speech tags (UPOS/XPOS), and morphological features. Our main focus is on Swedish, for which there are three treebanks, Talbanken, LinES, and PUD, where PUD only contains a test set. In addition, we use data from related North Germanic languages: Norwegian (both variants: Bokmål and Nynorsk), Danish, Faroese, and Icelandic. The treebanks used are summarized in Table 1. The intuition behind also using related languages is twofold: first, it has been shown to improve parsers (e.g. Smith et al., 2018a); second,
---

${}^{1}$ In UD, both relative clauses and cleft constructions are subtypes of ACL, clausal modifier of noun, and are denoted ACL:RELCL and ACL:CLEFT. In this paper, we will use shorter names, excluding the prefix.

---
| Relation | Example | Class |
|---|---|---|
| RELCL | Hvad hon beundrar Maurits , som kan *stå* så lugn ! | Correct |
| RELCL | Men kan du säga hvar vi *äro* ? | Wrong |
| NO-AUX | Jag har fått hvad du i natt *skrifvit* till mig . | Correct |
Table 2: Examples of sentences shown to the annotators, marked as either correct or wrong.
we believe it may make the parser more robust to non-standard Swedish, which has many differences from the modern Swedish of the Swedish treebanks. Written Norwegian and Danish, in particular, are very similar to Swedish, and are considered mutually intelligible.
As can be seen in Table 1, the genres (according to the UD specification) of the treebanks used are mixed. To be able, at least to some extent, to investigate whether it would help to have an in-genre test set, we create a modified version, LinES-M, of the LinES treebank (Ahrenberg, 2007), which consists of three genres: literary fiction, Microsoft manuals, and European Parliament proceedings. The literary part contains a set of novels translated from English, published 1977-2017. While this is not a perfect match to our target of novels and short stories originally written in Swedish during an earlier time period, it was the closest we could get to an in-domain test set without any re-annotation. We re-split LinES by merging the data from the training and test sets, moving all literature${}^{2}$ to a new test set and all other texts to a new training set, referred to as LinES-M in Table 1.
For evaluation on the Swedish UD test sets, we report labeled attachment score (LAS). For LinES-M, we also report F1-scores for the three relations in focus for our targeted test set, and for AUX, which is relevant for identifying auxiliary drop.
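These metrics can be made concrete with a small sketch. Assuming gold and predicted analyses are represented as aligned lists of (head, deprel) pairs per token (a simplified, hypothetical representation, not the evaluation scripts actually used in this study), LAS and per-relation F1 could be computed as:

```python
def las(gold, pred):
    """Labeled attachment score: share of tokens whose head and label are both correct."""
    correct = sum(1 for g, p in zip(gold, pred) if g == p)
    return correct / len(gold)

def relation_f1(gold, pred, relation):
    """Precision, recall, and F1 for one dependency relation.

    A token counts as a true positive only if the parser predicts the
    relation for it and both head and label match the gold analysis.
    """
    tp = sum(1 for g, p in zip(gold, pred) if p[1] == relation and g == p)
    fp = sum(1 for g, p in zip(gold, pred) if p[1] == relation and g != p)
    fn = sum(1 for g, p in zip(gold, pred) if g[1] == relation and g != p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy sentence of four tokens; the parser attaches token 3 to the wrong head.
gold = [(2, "nsubj"), (0, "root"), (4, "det"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "det"), (2, "obj")]
print(las(gold, pred))  # 0.75
```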
### 4.2 Targeted Literature Dataset
In this section, we describe the sampling and annotation of the targeted literary dataset created for this project, as an alternative way of evaluating the performance of parsers on specific phenomena in a specific text type. The targeted dataset will be made publicly available.
#### Sampling and Text Processing
Our target data is literary texts from 1800-1930, focusing on novels and collections of short stories. Such works have been made available by Litteraturbanken.${}^{3}$ We choose to work only with the subset of works that have been proofread after going through OCR, available in an XML format. We extracted all novels and short stories available in this format from the time period of interest. From these texts, we extracted the raw text paragraphs. For another sub-project, we had already extracted a set of novels where quotations are used to mark dialogue, and used quotation marks to separate dialogue and narrative, which we use also in this study. This sample consists of 165 novels and collections of short stories.
The selected works were parsed early in the project, using Swepipe and UUparser${}^{s}$ with Swepipe tags (see Section 5). From the parse trees, we extracted all sentences containing a relation of interest and marked the head word for which that relation occurred. For NO-AUX, we also checked that there was no outgoing AUX relation from the head word. It is not uncommon to have several instances of a single relation in a sentence, but we only marked a single occurrence per example, to make the annotation consistent between sentences. From this set, we randomly sampled 200 sentences for each relation type, except CLEFT, for which we only found 74 examples, all of which were included. Table 2 shows examples, also containing the plural verb form äro (modern: är, 'are') and the old-fashioned spelling skrifvit (modern: skrivit, 'written').
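The extraction step can be sketched as follows, assuming the parsed texts are stored in CoNLL-U format (the helper names are illustrative, not the project's actual scripts). A NO-AUX candidate is a CCOMP head with no outgoing aux relation:

```python
def read_conllu(lines):
    """Yield sentences from CoNLL-U lines as lists of (id, form, head, deprel)."""
    sent = []
    for line in lines:
        line = line.strip()
        if not line:
            if sent:
                yield sent
                sent = []
        elif not line.startswith("#"):
            cols = line.split("\t")
            if cols[0].isdigit():  # skip multiword token ranges and empty nodes
                sent.append((int(cols[0]), cols[1], int(cols[6]), cols[7]))
    if sent:
        yield sent

def no_aux_candidates(sent):
    """Token ids of ccomp heads with no aux dependent (NO-AUX candidates)."""
    heads_with_aux = {head for _, _, head, deprel in sent if deprel == "aux"}
    return [tok_id for tok_id, _, _, deprel in sent
            if deprel == "ccomp" and tok_id not in heads_with_aux]
```

As in the paper, when several candidates occur in a sentence, only one marked occurrence would be kept for annotation.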
#### 4.2.1 Annotation
The annotation was performed by two experts on Swedish grammar, both native Swedish speakers. The annotators were given the example sentences in Excel, and for each sentence, they were to decide whether the marked head word belonged to the given type or not. For each type, 20 examples were annotated by both annotators, and the remaining examples were split between them. After the first round, there were a few disagreements in the doubly annotated sets, which were discussed by the annotators, followed by a re-annotation of all examples. The initial round of annotation
---

${}^{2}$ The literary works are in documents 2, 3, 4, 6, 7, and 8; document 1 contains Microsoft manuals and document 5 contains parliament proceedings (Lars Ahrenberg, personal communication).

${}^{3}$ https://litteraturbanken.se/

---
was very quick, roughly 15-30 minutes per 100 examples, with a somewhat longer time needed for CCOMP. Table 3 shows the number of correct and wrong examples for each class. Note that the dataset is skewed towards positive examples.
| Relation | Correct | Wrong |
|---|---|---|
| CLEFT | 64 | 10 |
| RELCL | 133 | 67 |
| CCOMP | 141 | 59 |
| NO-AUX | 170 | 30 |
Table 3: Class distribution in our annotated dataset.
#### 4.2.2 Evaluation
We evaluate on the targeted dataset by calculating the number of times the parser assigns the correct relation to the focus word and, for NO-AUX, that there is in addition no aux dependent. We then calculate precision and recall for each relation type. Note that this is different from standard evaluation of dependency parsers, where a full tree is evaluated; here we instead evaluate a single relation of interest for each sentence.
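Under these assumptions, the targeted evaluation reduces to a binary decision per sentence plus standard precision/recall. A minimal sketch (the data layout is hypothetical): each example pairs the annotators' judgment on the marked head word with whether the parser assigns the relation of interest to that word (for NO-AUX, additionally requiring that the word has no aux dependent):

```python
def targeted_precision_recall(examples):
    """Precision and recall for one relation type on the targeted test set.

    Each example is a pair (gold_is_correct, parser_assigns_relation):
    the human judgment on the marked head word, and the parser's decision
    for that same word.
    """
    tp = sum(1 for gold, pred in examples if gold and pred)
    fp = sum(1 for gold, pred in examples if not gold and pred)
    fn = sum(1 for gold, pred in examples if gold and not pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy run: 3 true positives, 1 false positive, 1 missed instance.
examples = [(True, True), (True, True), (True, True), (False, True), (True, False)]
print(targeted_precision_recall(examples))  # (0.75, 0.75)
```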
## 5 Parsers
In order to investigate how well the different types of evaluation work, we explore three generations of parsers; the main focus is on dependency parsing. As a baseline, we use the easily accessible Swepipe with its provided model for Swedish. We also use two generations of neural parsers, UUParser and Machamp, for which we also experiment with multilingual parsing. We train each model three times with different random seeds and report average scores.
### 5.1 Swepipe
As a baseline parser, we wanted an easily accessible parser that comes with a trained parsing model and might be used by non-experts in a digital humanities project. Our choice was the Swedish annotation pipeline Swepipe,${}^{4}$ a pre-trained pipeline covering all steps needed to analyze Swedish texts from scratch, including tokenization, tagging, and parsing. Swepipe is similar to several other systems targeted at this user group, such as the web-based Swegram,${}^{5}$ which uses the same parser and tagger (Megyesi et al., 2019).
Swepipe is pre-neural and uses efselab (Östling, 2018) for tagging and MaltParser (Nivre et al., 2007) trained on Talbanken for parsing. MaltParser is a classical transition-based parser, using a support vector machine for classification, based on a feature vector with words, POS-tags, and already-built relations.
### 5.2 UUParser
UUParser (de Lhoneux et al., 2017; Smith et al., 2018b) is a neural transition-based dependency parser with a BiLSTM feature extractor, based on Kiperwasser and Goldberg (2016). Word representations are fed to a BiLSTM to create contextualized word representations, which are given as input to an MLP classifying the next transition. We use an arc-hybrid transition model (Kuhlmann et al., 2011) with a swap transition and a static-dynamic oracle (de Lhoneux et al., 2017). As input word representations we use word embeddings, character-based word embeddings, UPOS-tag embeddings, and treebank embeddings, which represent the treebank of a sentence. All embeddings were initialized randomly at training time. We use the default UUparser settings (Smith et al., 2018b), except for adding dropout with a rate of 0.33 for UPOS embeddings, since the parser is trained with gold tags. At test time, we use two different sets of POS-tags, from Swepipe/efselab and from Machamp. We will call these variants UUparser${}^{s}$ and UUparser${}^{m}$, respectively. To counteract the differing sizes of the training data, we limited the number of sentences used per treebank to 4,300 per iteration.
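The capping can be sketched as a per-epoch sampling step (a generic illustration, not UUParser's actual code): each treebank contributes at most 4,300 sentences per iteration, so the large Norwegian treebanks cannot drown out the smaller Swedish ones.

```python
import random

def epoch_sample(treebanks, cap=4300, rng=None):
    """Draw at most `cap` training sentences from each treebank for one epoch.

    `treebanks` maps treebank names to lists of sentences. Small treebanks
    contribute everything; large ones contribute a fresh random subset
    each call.
    """
    rng = rng or random.Random(0)
    batch = []
    for sents in treebanks.values():
        batch.extend(sents if len(sents) <= cap else rng.sample(sents, cap))
    rng.shuffle(batch)
    return batch
```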
|
| 310 |
+
|
| 311 |
+
522
|
| 312 |
+
|
| 313 |
+
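The transition system itself can be sketched independently of the neural scorer. The toy implementation below illustrates the core arc-hybrid transitions only (omitting the swap transition, and replaying a fixed transition sequence instead of the BiLSTM+MLP's predictions); it is not UUParser's actual code:

```python
# Minimal sketch of the arc-hybrid transition system: SHIFT moves the
# next buffer word to the stack, LEFT-ARC attaches the stack top to the
# next buffer word, RIGHT-ARC attaches it to the word below it on the
# stack. The artificial ROOT (index 0) sits at the end of the buffer.

def parse(n_words, transitions):
    """Replay a transition sequence over words 1..n_words; return arcs
    as sorted (head, dependent) pairs."""
    stack, buffer, arcs = [], list(range(1, n_words + 1)) + [0], []
    for t in transitions:
        if t == "SHIFT":
            stack.append(buffer.pop(0))
        elif t == "LEFT-ARC":          # head = front of buffer
            arcs.append((buffer[0], stack.pop()))
        elif t == "RIGHT-ARC":         # head = second item on stack
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return sorted(arcs)

# "the cat sleeps": det(cat -> the), nsubj(sleeps -> cat), root(0 -> sleeps)
arcs = parse(3, ["SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC"])
print(arcs)  # [(0, 3), (2, 1), (3, 2)]
```

In the full parser, the contextualized representations of the stack top and buffer front are what the MLP sees when choosing among these transitions.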
### 5.3 Machamp
Machamp (van der Goot et al., 2021) is a toolkit for multi-task learning covering several NLP tasks, based on fine-tuning a pre-trained contextualized model, like BERT (Devlin et al., 2019). In a multi-task setup, each task has a separate decoder. The dependency parser is a graph-based parser using deep biaffine attention (Dozat and Manning, 2018) to score word pairs, and the Chu-Liu/Edmonds algorithm (Chu and Liu, 1965; Edmonds, 1967) to extract trees. For tagging, a greedy decoder with a softmax output layer is used.

In this work, we use Machamp in a multi-task setup to jointly learn tagging of UPOS, XPOS, and morphological features, and dependency parsing.
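The biaffine arc-scoring step can be sketched as follows. This is an illustrative NumPy toy, not Machamp's implementation: the dimensions and random weights are made up, and heads are picked greedily by argmax, whereas the real decoder runs the Chu-Liu/Edmonds algorithm to guarantee a well-formed tree:

```python
import numpy as np

# Sketch of deep biaffine arc scoring (Dozat and Manning, 2018):
# each word gets a "head" view and a "dependent" view, and a bilinear
# term plus a head-bias term scores every (dependent, head) pair.

rng = np.random.default_rng(0)
n, d = 4, 8                            # 3 words + ROOT, representation size
H = rng.normal(size=(n, d))            # encoder output (e.g. BERT + MLPs)

H_head = H @ rng.normal(size=(d, d))   # word-as-head representations
H_dep = H @ rng.normal(size=(d, d))    # word-as-dependent representations
U = rng.normal(size=(d, d))            # biaffine interaction weights
b = rng.normal(size=(d,))              # bias: prior "head-ness" of each word

# scores[i, j] = how plausible word j is as the head of word i
scores = H_dep @ U @ H_head.T + H_head @ b   # shape (n, n)
heads = scores[1:].argmax(axis=1)            # greedy head per real word
print(heads.shape)  # (3,)
```

Greedy argmax can produce cycles or multiple roots, which is exactly what the maximum-spanning-tree decoding step avoids.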
We experiment with two sets of language models,
---
${}^{4}$https://github.com/robertostling/efselab
${}^{5}$ https://cl.lingfil.uu.se/swegram/
---
<table><tr><td>Group</td><td>Included treebanks/languages</td></tr><tr><td>Talbank</td><td>Swedish-talbanken</td></tr><tr><td>Swedish</td><td>Talbank + Swedish-LinES-M</td></tr><tr><td>SweNor</td><td>Swedish + Norwegian (×3)</td></tr><tr><td>Scand</td><td>SweNor + Danish</td></tr><tr><td>NorthG</td><td>Scand + Faroese + Icelandic</td></tr></table>
Table 4: Groups of languages/treebanks used for multilingual training.
multilingual BERT (mBERT; Devlin et al., 2019),${}^{6}$ trained on 104 languages including all languages used in our study except Faroese, and the Swedish model KB-BERT (Malmsten et al., 2020), trained only on Swedish. We will call these systems Machamp${}^{m}$ and Machamp${}^{k}$, respectively. For both models, we used the cased version. KB-BERT has been shown to improve Swedish named entity recognition and POS-tagging (Malmsten et al., 2020), but as far as we are aware, it has not been used in multilingual dependency parsing models. We use the default parameters of Machamp. To counteract the differing sizes of the training data, we applied sampling smoothing set to 0.5.
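The smoothing can be sketched as follows, assuming it works by raising each treebank's size to the smoothing exponent before normalizing into sampling probabilities; the treebank sizes below are invented for illustration:

```python
# Size-smoothed multi-dataset sampling: with smoothing factor 0.5, each
# treebank is sampled with probability proportional to size**0.5, which
# upweights small treebanks relative to purely proportional sampling.

def sampling_probs(sizes, smooth=0.5):
    weights = [s ** smooth for s in sizes]
    total = sum(weights)
    return [w / total for w in weights]

sizes = [4300, 430]                 # hypothetical sentence counts
probs = sampling_probs(sizes)
print([round(p, 2) for p in probs]) # [0.76, 0.24]; proportional would be ~0.91/0.09
```

With smooth=1.0 this reduces to proportional sampling, and with smooth=0.0 to uniform sampling over treebanks.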
### 5.4 Multilingual Training
For UUParser and Machamp, we explore multilingual training. We limit ourselves to the North Germanic languages, which are all relatively closely related to Swedish. We train two Swedish models: one on Talbanken only, to be comparable with Swepipe, and one also including LinES-M. In addition, we train three models with different subsets of the other North Germanic languages. For our multilingual models, we first combine Swedish with Norwegian, which has three treebanks covering both variants of Norwegian. We then add Danish, to train a Scandinavian model. The reason for adding Norwegian first, despite the fact that Danish is considered a closer relative of Swedish, is the availability of more Norwegian data, with variability in language variants. Our final model, NorthG, also adds Faroese and Icelandic, which are more distant from Swedish and not mutually intelligible with it. The language groups are summarized in Table 4.
## 6 Results
Tables 5 and 6 show results from the standard and targeted evaluations for Swepipe, UUparser${}^{m}$ with Machamp${}^{k}$ POS-tags, and Machamp${}^{k}$ trained with KB-BERT. In all tables, we mark the three best results for each metric in bold.
Table 5 shows results on UD test sets. We see no obvious differences between LAS on the in-genre LinES-M and the other two Swedish test sets, indicating that time period might play a bigger role than genre in our scenario. Swepipe has overall the lowest scores, followed by UUparser${}^{m}$, and then Machamp${}^{k}$. For the two Swedish models, the differences between using only Talbanken and adding the small LinES-M training set are typically small, but sometimes with a positive effect for UUparser${}^{m}$ and a negative effect for Machamp${}^{k}$. Adding Norwegian leads to improvements in nearly all scores, often quite substantial, whereas adding additional languages has a smaller impact. The difference between parsers varies for the different relation types. Swepipe does not find any CLEFTs, and falls behind UUparser${}^{m}$ on all other relation types, especially for AUX. Machamp${}^{k}$ improves considerably over UUparser${}^{m}$ for all explored relations, except AUX, where both neural parsers perform well, possibly since they both use the POS-tags of Machamp${}^{k}$.
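The per-relation scores used in these tables can be computed as in the sketch below, which compares gold and predicted analyses as (dependent, head, label) triples; the toy trees are invented for illustration and do not come from our data:

```python
# Per-relation precision/recall/F1: for one relation type, compare the
# predicted (dependent, head, label) triples against the gold ones.

def prf(gold, pred, label):
    g = {t for t in gold if t[2] == label}
    p = {t for t in pred if t[2] == label}
    tp = len(g & p)                                  # exact head+label matches
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = [(1, 2, "nsubj"), (2, 0, "root"), (4, 2, "ccomp")]
pred = [(1, 2, "nsubj"), (2, 0, "root"), (4, 2, "advcl")]
print(prf(gold, pred, "ccomp"))  # (0.0, 0.0, 0.0): the ccomp arc was mislabelled
```

This is why a parser can have high precision but low recall on a relation type: every instance it labels may be correct while many gold instances are still missed.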
The results in Table 6 for our targeted test set show a partially different picture. First, we note that Swepipe has a very high recall for all relation types except CLEFT, which it never predicts. We think this is mainly an artifact of the sampling procedure for this test set, where the annotated sentences were sampled from Swepipe and UUparser${}^{s}$, with Swepipe POS-tags, which means that they were mostly predicted as correct by Swepipe. The other parsers do not have this advantage, and thus have a lower recall, which we believe is more predictive of real performance. Swepipe has considerably lower precision than the other parsers for all relation types. We believe that the evaluation should still be fair in comparing UUparser${}^{m}$ and Machamp${}^{k}$, from which no samples were taken. Compared to the standard evaluation, where Machamp${}^{k}$ was clearly better than UUparser${}^{m}$, we now see a more mixed picture: there is no clear overall advantage of Machamp${}^{k}$ over UUparser${}^{m}$, and the results are mixed across relation types and precision/recall. The trends between training languages are also less clear, with some combinations standing out for some relation types. Machamp${}^{k}$ trained with Scand and NorthG has a considerably higher recall on RELCL than the other
---
${}^{6}$https://github.com/google-research/bert/blob/master/multilingual.md
---
<table><tr><td rowspan="2"/><td colspan="3">LAS</td><td colspan="4">F1, LinES-M</td></tr><tr><td>LinES-M</td><td>TB</td><td>PUD</td><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>AUX</td></tr><tr><td>Swepipe-Talbank</td><td>71.75</td><td>79.69</td><td>78.82</td><td>-</td><td>61.31</td><td>54.98</td><td>88.45</td></tr><tr><td>UUparser ${}^{m}$ -Talbank</td><td>72.10</td><td>83.75</td><td>76.66</td><td>26.82</td><td>64.67</td><td>59.62</td><td>93.99</td></tr><tr><td>UUparser ${}^{m}$ -Swedish</td><td>75.51</td><td>83.76</td><td>77.50</td><td>29.12</td><td>67.37</td><td>61.65</td><td>94.21</td></tr><tr><td>UUparser ${}^{m}$ -Norswe</td><td>79.69</td><td>85.60</td><td>81.50</td><td>39.92</td><td>74.34</td><td>66.79</td><td>94.35</td></tr><tr><td>UUparser ${}^{m}$ -Scand</td><td>79.74</td><td>85.43</td><td>81.34</td><td>41.74</td><td>73.03</td><td>64.93</td><td>94.20</td></tr><tr><td>UUparser ${}^{m}$ -NorthG</td><td>79.33</td><td>85.35</td><td>81.27</td><td>41.71</td><td>72.82</td><td>64.70</td><td>94.27</td></tr><tr><td>Machamp ${}^{k}$ -Talbank</td><td>80.54</td><td>92.24</td><td>86.05</td><td>56.73</td><td>79.07</td><td>74.59</td><td>95.44</td></tr><tr><td>Machamp ${}^{k}$ -Swedish</td><td>80.26</td><td>90.72</td><td>86.83</td><td>49.67</td><td>75.84</td><td>71.29</td><td>93.94</td></tr><tr><td>Machamp ${}^{k}$ -Norswe</td><td>83.13</td><td>91.63</td><td>86.79</td><td>55.42</td><td>81.29</td><td>75.32</td><td>95.29</td></tr><tr><td>Machamp ${}^{k}$ -Scand</td><td>83.16</td><td>92.31</td><td>87.21</td><td>55.54</td><td>81.21</td><td>74.27</td><td>95.97</td></tr><tr><td>Machamp ${}^{k}$ -NorthG</td><td>83.03</td><td>92.35</td><td>87.17</td><td>56.00</td><td>82.27</td><td>74.78</td><td>95.85</td></tr></table>
Table 5: Results on standard Swedish UD test sets. LAS for all three Swedish test sets, and F1-scores for four relations of interest for LinES-M.
<table><tr><td rowspan="2"/><td colspan="4">Precision</td><td colspan="4">Recall</td></tr><tr><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>NO-AUX</td><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>NO-AUX</td></tr><tr><td>Swepipe-Talbank</td><td>-</td><td>66.33</td><td>70.41</td><td>84.62</td><td>0.00</td><td>99.25</td><td>98.57</td><td>97.06</td></tr><tr><td>${\mathrm{{UUparser}}}^{m}$ -Talbank</td><td>92.46</td><td>93.32</td><td>94.11</td><td>98.14</td><td>50.35</td><td>82.37</td><td>63.97</td><td>51.44</td></tr><tr><td>UUparser ${}^{m}$ -Swedish</td><td>92.49</td><td>93.45</td><td>95.84</td><td>97.60</td><td>69.79</td><td>81.45</td><td>65.95</td><td>50.85</td></tr><tr><td>UUparser ${}^{m}$ -NorSwe</td><td>92.12</td><td>94.65</td><td>97.39</td><td>98.30</td><td>84.55</td><td>81.20</td><td>70.87</td><td>56.21</td></tr><tr><td>UUparser ${}^{m}$ -Scand</td><td>94.64</td><td>95.69</td><td>96.73</td><td>98.72</td><td>84.20</td><td>79.62</td><td>70.48</td><td>61.05</td></tr><tr><td>UUparser ${}^{m}$ -NorthG</td><td>93.31</td><td>95.55</td><td>96.06</td><td>99.05</td><td>75.00</td><td>79.37</td><td>74.13</td><td>61.57</td></tr><tr><td>Machamp ${}^{k}$ -Talbank</td><td>94.12</td><td>95.16</td><td>94.63</td><td>98.52</td><td>59.90</td><td>83.46</td><td>75.48</td><td>65.69</td></tr><tr><td>Machamp ${}^{k}$ -Swedish</td><td>94.92</td><td>96.19</td><td>95.09</td><td>98.81</td><td>53.12</td><td>82.21</td><td>73.81</td><td>65.10</td></tr><tr><td>Machamp ${}^{k}$ -NorSwe</td><td>95.38</td><td>96.71</td><td>94.77</td><td>99.13</td><td>72.92</td><td>79.70</td><td>73.33</td><td>67.25</td></tr><tr><td>Machamp ${}^{k}$ -Scand</td><td>96.61</td><td>95.11</td><td>94.29</td><td>99.01</td><td>59.38</td><td>87.47</td><td>66.90</td><td>58.82</td></tr><tr><td>Machamp ${}^{k}$ -NorthG</td><td>95.38</td><td>93.83</td><td>93.46</td><td>99.00</td><td>64.06</td><td>87.72</td><td>68.10</td><td>58.04</td></tr></table>
Table 6: Precision and recall for our targeted test set.
models, with only a small drop in precision. On CCOMP and NO-AUX, on the other hand, these two models instead have a low recall, without gaining much on precision. We do not see this pattern for UUparser${}^{m}$, where the Scand model is overall strong.
In Table 7 we show a summary of results for both variants of UUparser and Machamp, showing only precision for the targeted test set, since recall is biased towards Swepipe and UUparser${}^{s}$ due to the sampling.${}^{7}$ We can see that UUparser${}^{s}$ does not consistently improve on LAS over Swepipe when trained on the same Talbanken data, but that adding the Scandinavian treebanks improves the results considerably, both for the UD evaluations and on the targeted test set. When we compare the two variants of UUparser and Machamp, we see that UUparser${}^{m}$ and Machamp${}^{k}$ consistently beat their respective counterparts on the UD evaluation, and in most cases on the targeted test set. We also see that training on Scand is better than training on Talbanken in the majority of cases, both for UD and on precision for the targeted test set; however, from Table 6, we know that Scand is sometimes not as strong on recall.
## 7 Discussion
An important question is whether the parser performance on our target task is good enough to use for our study of change in the Swedish written language. Overall, both Machamp and UUparser have good precision for all our relations of interest, always scoring above 90, and reaching scores above 96 for some parsers for each relation type. The recall, however, is considerably lower. This means that the instances of each relation type the parser finds are mostly good, but it does miss a substantial part of relevant instances. The recall is highest for RELCL, where it is well above 80 for several of the Machamp models, with UUparser also above 80. This approaches a level that is usable for our end project of finding syntactic features in 18th-19th-century literature and tracking them over time. Other relation types have a more mixed performance, such as CLEFT, for which UUparser${}^{m}$ trained on NorSwe and Scand performs very well, with a recall of over 84, but where
---
${}^{7}$ To save space, we only show results for two training language groups. The other groups exhibit largely the same trends.
---
<table><tr><td rowspan="2"/><td colspan="3">LAS</td><td colspan="4">F1, UD_LinES-M</td><td colspan="4">P, litt</td></tr><tr><td>LinES-M</td><td>TB</td><td>PUD</td><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>AUX</td><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>NO-AUX</td></tr><tr><td>Swepipe-Talbank</td><td>71.75</td><td>79.69</td><td>78.82</td><td>-</td><td>61.31</td><td>54.98</td><td>88.45</td><td>-</td><td>79.52</td><td>82.14</td><td>90.41</td></tr><tr><td>UUparser ${}^{s}$ -Talbank</td><td>70.80</td><td>82.35</td><td>75.78</td><td>26.08</td><td>63.01</td><td>58.39</td><td>91.31</td><td>92.80</td><td>92.52</td><td>93.05</td><td>96.50</td></tr><tr><td>UUparser ${}^{s}$ -Scand</td><td>77.63</td><td>83.39</td><td>80.25</td><td>30.77</td><td>70.55</td><td>62.22</td><td>90.82</td><td>93.86</td><td>94.07</td><td>94.66</td><td>97.95</td></tr><tr><td>UUparser ${}^{m}$ -Talbank</td><td>72.10</td><td>83.75</td><td>76.66</td><td>26.82</td><td>64.67</td><td>59.62</td><td>93.99</td><td>92.46</td><td>93.32</td><td>94.11</td><td>98.14</td></tr><tr><td>UUparser ${}^{m}$ -Scand</td><td>79.74</td><td>85.43</td><td>81.34</td><td>41.74</td><td>73.03</td><td>64.93</td><td>94.20</td><td>94.64</td><td>95.69</td><td>96.73</td><td>98.72</td></tr><tr><td>Machamp ${}^{m}$ -Talbank</td><td>77.20</td><td>89.35</td><td>84.21</td><td>38.47</td><td>72.87</td><td>69.09</td><td>92.91</td><td>92.94</td><td>96.13</td><td>93.00</td><td>98.23</td></tr><tr><td>Machamp ${}^{m}$ -Scand</td><td>80.13</td><td>89.50</td><td>85.79</td><td>43.09</td><td>77.67</td><td>71.18</td><td>93.49</td><td>93.41</td><td>96.98</td><td>92.47</td><td>99.08</td></tr><tr><td>Machamp ${}^{k}$ -Talbank</td><td>80.54</td><td>92.24</td><td>86.05</td><td>56.73</td><td>79.07</td><td>74.59</td><td>95.44</td><td>94.12</td><td>95.16</td><td>94.63</td><td>98.52</td></tr><tr><td>Machamp ${}^{k}$ -Scand</td><td>83.16</td><td>92.31</td><td>87.21</td><td>55.54</td><td>81.21</td><td>74.27</td><td>95.97</td><td>96.61</td><td>95.11</td><td>94.29</td><td>99.01</td></tr></table>
Table 7: Comparison of parser variants, on standard test sets and our test set.
other models perform considerably worse. The recall of CCOMP, and especially of NO-AUX, is lower, and we would need to improve parser performance for those relation types, possibly by using domain adaptation techniques, before they would reach a useful level. The varying performance of parsers for different relation types is in line with the results for German by Adelmann et al. (2018), who recommend choosing different parsers for different end goals.
On the standard evaluation, Machamp is clearly overall better than UUparser, training on Scand is better than training only on Swedish, KB-BERT is better than mBERT for Machamp, and UUparser is better with Machamp tags than with Swepipe tags. For our targeted test sets, however, we see fewer clear trends, and there is much more variation among the systems. Machamp ${}^{k}$ and UUparser ${}^{m}$ tend to perform better than their counterparts, and the multilingual models may have a small advantage over the Swedish-only models. Swepipe clearly seems to fall behind the other parsers on precision, whereas its high recall can be explained by the sampling procedure. A side-effect of our study is that we have found that Machamp ${}^{k}$ trained on Scand or NorthG is a very strong parser for modern Swedish as measured by the UD test sets.
Our targeted test set does suffer from an issue with sampling from only two parsers, which affects its recall mainly for Swepipe, but also for ${\text{UUparser}}^{s}$ . We believe UUparser ${}^{m}$ is less affected since it relies on a different set of POS-tags. The dataset is also relatively small, especially for the CLEFT relation. However, we think it still contributes to showing that when selecting a parser for a particular target task and text type, we cannot rely solely on evaluation scores on standard test sets, as also shown by Adelmann et al. (2018).
Even if we focus on the F1-score for the relations of interest, rather than on the full tree, we see no clear similarity of parser ranking to the evaluation of the same relation types in our targeted test set. To further investigate whether this type of test set can indeed be useful, we would need to perform further analysis. It would be interesting to learn more about where the main improvements shown in the UD evaluation for a parser like Machamp${}^{k}$ actually occur. We also think it would be useful to reconsider the sampling for the test set, specifically whether it is worth the effort to also annotate some raw text, in order to find instances not identified by any of our parsers. Another issue that we did not yet explore is whether parsing performance varies over the time period in question.
## 8 Conclusion
We describe a study of Swedish dependency parsers with the goal of tracking changes in the use of certain types of subordinate clauses and related phenomena in Swedish literature from 1800-1930. Since standard test sets do not cover this time period or genre, and we did not have the resources to perform a full annotation of dependency trees, we propose a smaller-scale annotation task, focusing on single relation types. We evaluated a set of parsers on UD and on our targeted test set. While there was a clear and relatively consistent order between the parsers on the UD evaluation, the performance was more mixed on our targeted test set, without a clear overall best parser across relation types. We believe that our proposed annotation scheme can be useful in complementing standard evaluations, with a low annotation effort, but that more analysis is needed.
## References
Benedikt Adelmann, Wolfgang Menzel, Melanie Andresen, and Heike Zinsmeister. 2018. Evaluation of out-of-domain dependency parsing for its application in a digital humanities project. In Proceedings of the 14th Conference on Natural Language Processing (KONVENS 2018), pages 121-135, Vienna, Austria.

Lars Ahrenberg. 2007. LinES: An English-Swedish parallel treebank. In Proceedings of the 16th Nordic Conference of Computational Linguistics (NODALIDA 2007), pages 270-273, Tartu, Estonia. University of Tartu, Estonia.

Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On the shortest arborescence of a directed graph. Scientia Sinica, 14:1396-1400.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.

Mats Dahllöf. 2022. Quotation and narration in contemporary popular fiction in Swedish - stylometric explorations. In Proceedings of the 6th Digital Humanities in the Nordic and Baltic Countries Conference (DHNB 2022), pages 203-211, Uppsala, Sweden.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 484-490, Melbourne, Australia. Association for Computational Linguistics.

Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards B, 71(4):233-240.

Sven Engdahl. 1962. Studier i nusvensk sakprosa. Några utvecklingslinjer. Skrifter utgivna av Institutionen för nordiska språk vid Uppsala universitet, Uppsala.

Rob van der Goot, Ahmet Üstün, Alan Ramponi, Ibrahim Sharaf, and Barbara Plank. 2021. Massive choice, ample tasks (MaChAmp): A toolkit for multi-task learning in NLP. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 176-197, Online. Association for Computational Linguistics.

Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.

Marco Kuhlmann, Carlos Gómez-Rodríguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 673-682, Portland, Oregon, USA. Association for Computational Linguistics.

Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, and Joakim Nivre. 2019. Deep contextualized word embeddings in transition-based and graph-based dependency parsing - a tale of two parsers revisited. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2755-2768, Hong Kong, China. Association for Computational Linguistics.

Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2017. Arc-hybrid non-projective dependency parsing with a static-dynamic oracle. In Proceedings of the 15th International Conference on Parsing Technologies, pages 99-104, Pisa, Italy. Association for Computational Linguistics.

Torvald Lindstedt. 1922. Studier över stilen i Gösta Berlings saga. Nysvenska studier, 2:31-77.

Martin Malmsten, Love Börjeson, and Chris Haffenden. 2020. Playing with words at the National Library of Sweden - making a Swedish BERT. CoRR, abs/2007.01658.

Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 122-131, Prague, Czech Republic. Association for Computational Linguistics.

Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajič. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523-530, Vancouver, British Columbia, Canada. Association for Computational Linguistics.

Beáta Megyesi, Anne Palmér, and Jesper Näsman. 2019. SWEGRAM - Annotering och analys av svenska texter. Technical report, Department of Linguistics and Philology, Uppsala University.

Minh Van Nguyen, Viet Dac Lai, Amir Pouran Ben Veyseh, and Thien Huu Nguyen. 2021. Trankit: A light-weight transformer-based toolkit for multilingual natural language processing. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 80-90, Online. Association for Computational Linguistics.

Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, Gülşen Eryiğit, Sandra Kübler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.

Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France. European Language Resources Association.

Robert Östling. 2018. Part of speech tagging: Shallow or deep learning? Northern European Journal of Language Technology, 5:1-15.

Alessio Salomoni. 2017. Dependency parsing on late-18th-century German aesthetic writings: A preliminary inquiry into Schiller and F. Schlegel. In Proceedings of the 2nd International Conference on Digital Access to Textual Cultural Heritage, DATeCH2017, pages 47-52, New York, NY, USA. Association for Computing Machinery.

Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018a. 82 treebanks, 34 models: Universal Dependency parsing with multi-treebank models. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 113-123, Brussels, Belgium. Association for Computational Linguistics.

Aaron Smith, Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2018b. An investigation of the interactions between pre-trained word embeddings, character models and POS tags in dependency parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2711-2720, Brussels, Belgium. Association for Computational Linguistics.

Sara Stymne, Johan Svedjedal, and Carin Östman. 2018. Språklig rytm i skönlitterär prosa. En fallstudie i Karin Boyes Kallocain. Samlaren. Tidskrift för forskning om svensk och annan nordisk litteratur, 139:128-161.

Ulf Teleman. 2003. Tradis och funkis: svensk språkvård och språkpolitik efter 1800, 1st edition. Norstedts ordbok, Stockholm, Sweden.

Louise von Hofsten. 1935. Några stildrag hos Selma Lagerlöf med utgångspunkt från Charlotte Löwenskiöld. Nysvenska studier, 15:150-183.

Erik Wellander. 1939. Riktig svenska: en handledning i svenska språkets vård. Norstedt, Stockholm, Sweden.

Daniel Zeman, Joakim Nivre, Mitchell Abrams, et al. 2022. Universal Dependencies 2.11. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.

Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajič jr., Jaroslava Hlaváčová, Václava Kettnerová, Zdeňka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-19, Vancouver, Canada. Association for Computational Linguistics.
|
| 764 |
+
|
| 765 |
+
1065
|
| 766 |
+
|
| 767 |
+
1066
|
| 768 |
+
|
| 769 |
+
1067
|
| 770 |
+
|
| 771 |
+
1068
|
| 772 |
+
|
| 773 |
+
1069
|
| 774 |
+
|
| 775 |
+
1070
|
| 776 |
+
|
| 777 |
+
1071
|
| 778 |
+
|
| 779 |
+
1072 1073 1074
|
| 780 |
+
|
| 781 |
+
1075 1076 1077 1078 1079
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_tex/Initial_manuscript.tex
§ PARSER EVALUATION FOR ANALYZING SWEDISH 19TH-20TH CENTURY LITERATURE

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT

In this study, we aim to find a parser for accurately identifying different types of subordinate clauses, and related phenomena, in 19th-20th-century Swedish literature. Since no test set is available for parsing data from this time period, we propose a lightweight annotation scheme for annotating a single relation of interest per sentence. We train a variety of parsers for Swedish and compare evaluations on standard modern test sets and our targeted test set. We find clear trends in which parser types perform best on the standard test set, but performance is considerably more varied on the targeted test set. We believe that our proposed annotation scheme can be useful for complementing standard evaluations, with a low annotation effort.
§ 1 INTRODUCTION

Dependency parsers can be useful tools for analyzing large text materials, and as such can enable large-scale studies within many scientific disciplines. Modern parsers can achieve very high scores on standard test sets, at least for languages with large treebanks, but these test sets are often limited to only a few domains, and typically to publication-level modern language, such as news or Wikipedia. For more challenging text types, for instance noisy data like Twitter or historical texts, parsers typically perform considerably worse, even for high-resource languages.

Parsers are typically evaluated on a treebank that is split into training, development, and test sets. This can overestimate parser performance, since parsers are then trained on data that matches the test set in all relevant aspects, such as genre, time period, and annotation style. Furthermore, parser evaluation is typically done using metrics that give a holistic score for the full tree, such as (un)labeled attachment score. In many real-world scenarios, such as ours, we are not interested in the full tree, but in a subset of relations.

This study is part of a larger project with the overall aim to identify and explore language change in Swedish literature during the period 1800-1930. During the 19th century, the Swedish language changed in several aspects. This change spans various linguistic levels and also involves lexical aspects. Overall, the changes led to a smaller difference between spoken and written Swedish, since the written language moved closer to the spoken vernacular. The goal of the project is to cover morphological, syntactic, and lexical changes. In this paper, however, we focus only on syntactic aspects. The changes in the 19th century resulted in a less complex language, not least as far as subordinate clauses and related phenomena are concerned. To enable large-scale analysis of subordinate clauses, we require a high-quality parser for our target domain, Swedish literary novels and short stories from 1800-1930. In this paper, we explore whether parsers can be evaluated for this domain without requiring a large manual annotation effort.

To evaluate a parser for a new text type and task, as in our case 19th-century literature with a focus mainly on subordinate clauses, we would ideally like to have an annotated treebank for the target text type. However, this is a human annotation task that is time-consuming, and thus costly, and which requires an expert on dependency grammar. For many practical projects, this is not feasible. We propose a lightweight annotation task for our target task, which consists of annotating only one type of phenomenon per sentence, constituting a targeted test set. We then explore whether this could be an efficient alternative to annotating full trees. We focus on four phenomena related to subordinate clauses, and annotate a small targeted test set for our target text type, which will be publicly released. For comparison, we also evaluate on standard Swedish test sets.

We compare several variants of three generations of parsers trained on different subsets of the Universal Dependencies (UD) treebanks (Nivre et al., 2020), and evaluate them on UD, both with holistic metrics and for a subset of relations of interest, as well as on our targeted test set. On the UD test sets we see clear trends that a modern BERT-based parser is better than BiLSTM- and SVM-based parsers, and that it is better to train on several North Germanic languages than only on Swedish. However, on our new targeted test set, the results are more mixed, and we see less clear trends, which is in line with earlier work for German (Adelmann et al., 2018). We think that our targeted test set is able to give a complementary view to standard evaluations.

In Section 2 we review related work, followed by a description of our project focused on Swedish language change in Section 3. In Section 4 we describe the data, and in Section 5 we describe the parsers evaluated, including the multilingual training setup. We summarize the results in Section 6, discuss them in Section 7, and finally conclude in Section 8.
§ 2 RELATED WORK

Dependency parsers have continuously developed, from 'old school' parsers like MaltParser (Nivre et al., 2007) and MSTParser (McDonald et al., 2005), based on classical machine learning such as support vector machines, to modern neural parsers. Many of the first strong neural parsers were based on recurrent neural networks, as were most of the best parsers in the CoNLL 2017 shared task on dependency parsing (Zeman et al., 2017). Since then, models based on deep contextualized embeddings have taken over, and most strong parsers today are based on fine-tuning contextualized models like BERT (Devlin et al., 2019) or XLM-R (Conneau et al., 2020), e.g. Machamp (van der Goot et al., 2021) and Trankit (Nguyen et al., 2021).

The standard way to evaluate dependency parsers is by calculating holistic metrics such as labeled attachment score (LAS), which measures the percentage of words that get both their head word and label correct. There are, however, examples of more detailed evaluations (e.g. McDonald and Nivre, 2007; Kulmizev et al., 2019; Salomoni, 2017), focusing on aspects such as arc and sentence lengths, non-projective dependencies, and scores for specific POS tags and dependency relations. The overall conclusion is typically that different parser types have different strengths. As far as we are aware, there are no datasets and evaluations like our proposal, focused on a single relation per sentence.

Highly relevant to our study is the work of Adelmann et al. (2018), who evaluate a set of six parsers for digital humanities research, focusing on novels and academic texts for German. Like us, they are also interested in specific relations, for instance related to speaker attribution, and not only in holistic evaluation. Unlike us, they perform a full dependency tree annotation effort for three sample texts. In addition, they do not include any neural parsers in their evaluation. They find that several parsers do well on the holistic metrics, but that the results are considerably worse for several of the specific relations of interest, such as appositions, and that it is not always the overall strongest parser that is the best choice for a specific relation. Salomoni (2017) performed a detailed evaluation on parsing German 17th-century literature, for which he annotated two excerpts of text with full dependency annotations. Again, no neural parsers were included in the study, which found a drop compared to in-domain results, but where the relative performance of the two parsers evaluated was consistent across different metrics, possibly because of the large difference in performance between them.

Swedish literary texts from different eras have been analyzed for different purposes before, requiring taggers and/or parsers. Dahllöf (2022) aims to characterize differences between dialogue and narrative in contemporary fiction, whereas Stymne et al. (2018) analyze prose rhythm in a novel from 1940. However, in none of these studies is the choice of tagger and/or parser motivated. There have also been some earlier smaller-scale studies focusing on the transition towards a more colloquial written Swedish. For instance, language development in Swedish literature during the 19th century has been explored, but only on a small scale focusing on individual authors (e.g. Lindstedt, 1922; Von Hofsten, 1935).
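The LAS metric discussed above can be sketched in a few lines. This is a minimal illustration (not an official scorer): each word is represented by its gold or predicted `(head_index, label)` pair, and a word counts as correct only if both match.

```python
# Minimal LAS sketch: fraction of words whose predicted head AND
# dependency label both match the gold annotation.
def las(gold, pred):
    """gold, pred: lists of (head_index, label) tuples, one per word."""
    assert len(gold) == len(pred)
    correct = sum(1 for g, p in zip(gold, pred) if g == p)
    return correct / len(gold)

gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (3, "obj")]  # wrong head for word 3
score = las(gold, pred)  # 2 of 3 words fully correct
```

Unlabeled attachment score (UAS) would be the same computation comparing head indices only.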
| Language | Treebank | Genres | Train | Test |
|---|---|---|---|---|
| Swedish | Talbanken | news, nonfiction | 67K | 20K |
| Swedish | PUD | news, wiki | - | 19K |
| Swedish | LinES-M | fiction, nonfiction, spoken | 18K | 73K |
| Norwegian | Bokmaal | blog, news, nonfiction | 244K | 30K |
| Norwegian | Nynorsk | blog, news, nonfiction | 245K | 25K |
| Norwegian | NynorskLIA | spoken | 35K | 10K |
| Danish | DDT | fiction, news, nonfiction, spoken | 80K | 10K |
| Faroese | FarPaHC | bible | 1.5K | 6.6K |
| Icelandic | Modern | news, nonfiction | 7.5K | 10K |

Table 1: Treebanks used, with info about genres (as defined in UD) and number of tokens in test and training data. LinES-M refers to our modified version of LinES.
§ 3 LANGUAGE CHANGE IN 19TH CENTURY SWEDISH

This study is part of a larger project with the overall aim to identify and explore language change in Swedish literature during the period 1800-1930. In the history of the Swedish language, this period is characterized by modernization in the sense that the written language was influenced by the spoken vernacular. In this process of modernization, fictional prose is of particular interest, since it has been suggested that linguistic change spread from literary dialogue (Engdahl, 1962; Teleman, 2003). By investigating a corpus of literary texts, the project will not only contribute a more detailed account of language change in 19th-century Swedish but also address the question of how linguistic change spread in the community.

The modernization of the Swedish written language during the 19th century affected several linguistic aspects. As for the lexicon, it is well known that formal function words were replaced by colloquial counterparts. Much attention has also been devoted to the loss of verbal agreement, i.e. the use of the vernacular singular variant in both singular and plural. On the syntactic level, Engdahl (1962) has shown a remarkable change in sentence length towards the end of the 19th century. Engdahl's study focuses on non-fictional prose, periodicals from 1878 to 1950, but his results call for a more detailed account of syntactic complexity during the period, and hence we focus on subordinate clauses and phenomena related to them in this paper.

For this study, we have chosen to focus on three types of subordinate clauses, based on UD dependency labels, and one phenomenon related to subordinate clauses: (i) relative clauses (RELCL), (ii) cleft constructions (CLEFT),${}^{1}$ (iii) clausal complements not determined by obligatory control (CCOMP), and (iv) auxiliary drop (NO-AUX). Whereas the first three types can be used to measure syntactic complexity, auxiliary drop has been suggested to mark written style, and hence almost never occurs in spoken language (cf. Wellander, 1939). Since auxiliary drop of finite verbs is restricted to subordinate clauses in Swedish, we have included it as related to subordinate clauses. In this study, we only include auxiliary drop that occurs in clausal complements (CCOMP).
§ 4 DATA

In this section, we describe the data used. We first describe the data from UD, including the modified version of the LinES treebank, and then describe the targeted dataset we constructed for this project.

§ 4.1 UNIVERSAL DEPENDENCIES TREEBANKS

We use data from Universal Dependencies (Nivre et al., 2020), version 2.11 (Zeman et al., 2022), for training our parsers and for the standard evaluation. Besides dependency annotations, UD also contains lemmas, universal and language-specific part-of-speech tags (UPOS/XPOS), and morphological features. Our main focus is on Swedish, for which there are three treebanks, Talbanken, LinES, and PUD, where PUD only contains a test set. In addition, we use data from the related North Germanic languages: Norwegian (both variants, Bokmål and Nynorsk), Danish, Faroese, and Icelandic. The treebanks used are summarized in Table 1. The intuition behind also using related languages is twofold: first, it has been shown to improve parsers (e.g. Smith et al., 2018a); second, we believe it may make the parser more robust to non-standard Swedish, which differs in many ways from the modern Swedish of the Swedish treebanks. Written Norwegian and Danish, in particular, are very similar to Swedish, and are considered mutually intelligible.

${}^{1}$ In UD, both relative clauses and cleft constructions are subtypes of ACL, clausal modifier of noun, and are denoted ACL:RELCL and ACL:CLEFT. In this paper, we will use shorter names, excluding the prefix.

As can be seen in Table 1, the genres (as defined in UD) of the treebanks used are mixed. To be able, at least to some extent, to investigate whether it would help to have an in-genre test set, we create a modified version, LinES-M, of the LinES treebank (Ahrenberg, 2007), which consists of three genres: literary fiction, Microsoft manuals, and European Parliament proceedings. The literary part contains a set of novels translated from English, published 1977-2017. While this is not a perfect match to our target of novels and short stories written originally in Swedish during an earlier time period, this was the closest we could get to an in-domain test set without any re-annotation. We re-split LinES by merging the data from the training and test sets, moving all literature${}^{2}$ to a new test set and all other texts to a new training set, referred to as LinES-M in Table 1.

For evaluation on the Swedish UD test sets, we report labeled attachment score (LAS). For LinES-M, we also report F1-scores for the three relations in focus in our targeted test set, and for AUX, which is relevant for identifying auxiliary drop.

| Relation | Example | Class |
|---|---|---|
| RELCL | Hvad hon beundrar Maurits, som kan *stå* så lugn! | Correct |
| RELCL | Men kan du säga hvar vi *äro*? | False |
| NO-AUX | Jag har fått hvad du i natt *skrifvit* till mig. | Correct |

Table 2: Examples of sentences shown to the annotators, marked as either correct or wrong.
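The LinES re-split described above can be sketched as follows. This is a hypothetical illustration, not the project's code: it assumes the merged CoNLL-U file carries standard `# newdoc id = ...` comments, and the document ids in `LITERARY_DOCS` are placeholders, not the real ones.

```python
# Hypothetical sketch of the LinES-M re-split: sentences from literary
# documents go to the new test set, everything else to the new training
# set. Document ids below are placeholders.
LITERARY_DOCS = {"doc2", "doc3"}

def resplit(conllu_text):
    """Partition sentence blocks of a merged CoNLL-U file by document.

    Sentences without their own '# newdoc id' comment stay with the
    most recently seen document.
    """
    train, test = [], []
    current = train
    for block in conllu_text.strip().split("\n\n"):
        for line in block.splitlines():
            if line.startswith("# newdoc id = "):
                doc = line.split("= ", 1)[1].strip()
                current = test if doc in LITERARY_DOCS else train
                break
        current.append(block)
    return "\n\n".join(train), "\n\n".join(test)
```

The same routing idea works for any document-level re-split of a CoNLL-U treebank, since document boundaries are only marked on the first sentence of each document.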
§ 4.2 TARGETED LITERATURE DATASET

In this section, we describe the sampling and annotation of the targeted literary dataset annotated for this project, as an alternative way of evaluating the performance of parsers on specific phenomena in a specific text type. The targeted dataset will be made publicly available.

§ SAMPLING AND TEXT PROCESSING

Our target data is literary texts from 1800-1930, focusing on novels and collections of short stories. Such works have been made available by Litteraturbanken.${}^{3}$ We choose to work only with the subset of works that have been proofread after going through OCR, available in an XML format. We extracted all novels and short stories available in this format from the time period of interest, and from these texts we extracted the raw text paragraphs. For another sub-project, we had already extracted a set of novels where quotation marks are used to mark dialogue, and used the quotation marks to separate dialogue and narrative; we use this sample also in this study. It consists of 165 novels and collections of short stories.

The selected works were parsed early in the project, using Swepipe and UUparser${}^{s}$ with Swepipe tags (see Section 5). From the parse trees, we extracted all sentences containing a relation of interest and marked the head word for which that relation occurred. For NO-AUX, we also checked that there was no outgoing AUX relation from the head word. It is not uncommon to have several instances of a single relation in a sentence, but we only marked a single occurrence per example, to make the annotation consistent between sentences. From this set, we randomly sampled 200 sentences for each relation type, except CLEFT, for which we only found 74 examples, all of which were included. Table 2 shows examples, which also contain the plural verb form äro (modern: är, 'are') and the old-fashioned spelling skrifvit (modern: skrivit, 'written').

§ 4.2.1 ANNOTATION

The annotation was performed by two experts on Swedish grammar, both native Swedish speakers. The annotators were given the example sentences in Excel, and for each sentence, they were to decide whether the marked head word belonged to the given type or not. For each type, 20 examples were annotated by both annotators, and the remaining examples were split between them. After the first round, there were a few disagreements in the doubly annotated sets, which were discussed by the annotators, followed by a re-annotation of all examples. The initial round of annotation was very quick, roughly 15-30 minutes per 100 examples, with somewhat more time needed for CCOMP. Table 3 shows the number of correct and wrong examples for each class. Note that the dataset is skewed towards positive examples.

${}^{2}$ The literary works are in documents 2, 3, 4, 6, 7, and 8; document 1 contains Microsoft manuals and document 5 contains parliament proceedings. (Lars Ahrenberg, personal communication)

${}^{3}$ https://litteraturbanken.se/
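The NO-AUX extraction check described above (a CCOMP head with no outgoing AUX relation) can be sketched as follows. This is an illustrative sketch over an assumed `(id, head, deprel)` token representation, not the project's extraction code.

```python
# Sketch: find candidate NO-AUX focus words in one parsed sentence,
# i.e. words that head a ccomp relation and have no aux dependent.
def no_aux_candidates(tokens):
    """tokens: list of (token_id, head_id, deprel) for one sentence."""
    # words attached to their governor via ccomp (heads of the clause)
    ccomp_heads = {tid for tid, head, rel in tokens if rel == "ccomp"}
    # governors that have at least one aux dependent
    have_aux = {head for tid, head, rel in tokens if rel == "aux"}
    return sorted(ccomp_heads - have_aux)

# toy parse: word 3 heads a ccomp and has no aux dependent
candidates = no_aux_candidates(
    [(1, 2, "nsubj"), (2, 0, "root"), (3, 2, "ccomp")]
)  # -> [3]
```

Adding a token `(4, 3, "aux")` to the toy parse would remove word 3 from the candidate list, mirroring the filter applied during sampling.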
| Relation | Correct | Wrong |
|---|---|---|
| CLEFT | 64 | 10 |
| RELCL | 133 | 67 |
| CCOMP | 141 | 59 |
| NO-AUX | 170 | 30 |

Table 3: Class distribution in our annotated dataset.

§ 4.2.2 EVALUATION

We evaluate on the targeted dataset by calculating the number of times the parser assigns the correct relation to the focus word, and, for NO-AUX, checking that there is in addition no AUX dependent. We then calculate precision and recall for each relation type. Note that this differs from the standard evaluation of dependency parsers, where a full tree is evaluated; here we instead evaluate a single relation of interest in each sentence.
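The per-relation scoring described above can be sketched as follows. This is an illustrative sketch, not the project's evaluation script: each annotated sentence is reduced to two booleans, whether the gold annotation marks the focus word as a true instance of the relation, and whether the parser predicts that relation for it.

```python
# Precision/recall sketch for focus-word evaluation of one relation type.
def precision_recall(examples):
    """examples: list of (gold_positive, predicted_positive) booleans,
    one pair per annotated sentence for a single relation (e.g. RELCL)."""
    tp = sum(1 for g, p in examples if g and p)
    fp = sum(1 for g, p in examples if not g and p)
    fn = sum(1 for g, p in examples if g and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# toy: parser flags 3 focus words (one wrongly) and misses 1 true one
examples = [(True, True), (True, True), (False, True), (True, False)]
p, r = precision_recall(examples)  # precision and recall both 2/3
```

Because the dataset is skewed towards positive examples (Table 3), reporting both precision and recall is more informative here than accuracy alone.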
§ 5 PARSERS

In order to investigate how well the different types of evaluation work, we explore three generations of parsers, with the main focus on dependency parsing. As a baseline, we use the easily accessible Swepipe with its provided model for Swedish. We also use two generations of neural parsers, UUParser and Machamp, for which we also experiment with multilingual parsing. We train each model three times with different random seeds and report average scores.
§ 5.1 SWEPIPE

As a baseline parser, we wanted an easily accessible parser which comes with a trained parsing model, and which might be used by non-experts in a digital humanities project. Our choice was the Swedish annotation pipeline Swepipe,${}^{4}$ a pre-trained model covering all steps needed to analyze Swedish texts from scratch, including tokenization, tagging, and parsing. Swepipe is similar to several other systems targeted at this user group, such as the web-based Swegram,${}^{5}$ which uses the same parser and tagger (Megyesi et al., 2019).

Swepipe is pre-neural and uses efselab (Östling, 2018) for tagging and MaltParser (Nivre et al., 2007), trained on Talbanken, for parsing. MaltParser is a classical transition-based parser, using a support vector machine for classification, based on a feature vector with words, POS tags, and already built relations.
§ 5.2 UUPARSER

UUParser (de Lhoneux et al., 2017; Smith et al., 2018b) is a neural transition-based dependency parser with a BiLSTM feature extractor, based on Kiperwasser and Goldberg (2016). Word representations are fed to a BiLSTM to create contextualized word representations, which are given as input to an MLP classifying the next transition. We use an arc-hybrid transition model (Kuhlmann et al., 2011) with a swap transition and a static-dynamic oracle (de Lhoneux et al., 2017). As input word representations we use word embeddings, character-based word embeddings, UPOS-tag embeddings, and treebank embeddings, which represent the treebank of a sentence. All embeddings were initialized randomly at training time. We use the default UUParser settings (Smith et al., 2018b), except for adding dropout with a rate of 0.33 for UPOS embeddings, since the parser is trained with gold tags. At test time, we use two different sets of POS tags, from Swepipe/efselab and from Machamp. We will call these variants UUparser${}^{s}$ and UUparser${}^{m}$, respectively. To counteract the differing sizes of the training data, we limited the number of sentences used per treebank to 4,300 per iteration.
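As a toy illustration of the arc-hybrid transition system mentioned above (not UUParser's implementation), the following sketch applies a fixed transition sequence to a three-word sentence: SHIFT moves the next buffer word onto the stack, LEFT-ARC attaches the stack top to the buffer front, and RIGHT-ARC attaches the stack top to the item below it on the stack.

```python
# Arc-hybrid toy: derive (head, dependent) arcs from a transition
# sequence; index 0 is the artificial root.
def run(words, transitions):
    stack, buffer, arcs = [0], list(range(1, len(words) + 1)), []
    for t in transitions:
        if t == "SHIFT":
            stack.append(buffer.pop(0))
        elif t == "LEFT-ARC":            # head = front of the buffer
            arcs.append((buffer[0], stack.pop()))
        elif t == "RIGHT-ARC":           # head = second item on stack
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# "the cat sleeps": det(cat, the), nsubj(sleeps, cat), root(sleeps)
arcs = run(["the", "cat", "sleeps"],
           ["SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC"])
# arcs == [(2, 1), (3, 2), (0, 3)]
```

In the real parser, the MLP scores which of these transitions to take at each step; the swap transition and the oracle are omitted from this sketch.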
§ 5.3 MACHAMP
|
| 366 |
+
|
| 367 |
+
Machamp (van der Goot et al., 2021) is a toolkit 524 for multitask learning covering several NLP tasks, based on fine-tuning a pre-trained contextualized model, like BERT (Devlin et al., 2019). In a multitask setup, each task has a separate decoder. The dependency parser is a graph-based parser using deep biaffine attention (Dozat and Manning, 2018) to score word pairs, and the CLU algorithm (Chu and Liu, 1965; Edmonds, 1967) to extract trees. For tagging, a greedy decoder, with a softmax output layer is used.
In this work we use Machamp in a multitask setup, to jointly learn tagging of UPOS, XPOS and morphological features, and dependency parsing.
We experiment with two sets of language models: multilingual BERT (mBERT; Devlin et al., 2019)${}^{6}$, trained on 104 languages including all languages used in our study except Faroese, and the Swedish model KB-BERT (Malmsten et al., 2020), trained only on Swedish. We call these systems Machamp${}^{m}$ and Machamp${}^{k}$, respectively. For both models, we used the cased version. KB-BERT has been shown to improve Swedish named entity recognition and POS-tagging (Malmsten et al., 2020), but as far as we are aware, it has not been used in multilingual dependency parsing models. We use the default parameters of Machamp. To counteract the differing sizes of the training data, we applied sampling smoothing set to 0.5.

${}^{4}$ https://github.com/robertostling/efselab

${}^{5}$ https://cl.lingfil.uu.se/swegram/

<table><tr><td>Group</td><td>Included treebanks/languages</td></tr><tr><td>Talbank</td><td>Swedish-Talbanken</td></tr><tr><td>Swedish</td><td>Talbank + Swedish-LinES-M</td></tr><tr><td>SweNor</td><td>Swedish + Norwegian (3 treebanks)</td></tr><tr><td>Scand</td><td>SweNor + Danish</td></tr><tr><td>NorthG</td><td>Scand + Faroese + Icelandic</td></tr></table>

Table 4: Groups of languages/treebanks used for multilingual training.
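Sampling smoothing of this kind draws batches from each dataset with probability proportional to a power of its size, flattening the distribution between large and small treebanks. A small illustrative sketch of the idea (our own, not Machamp's exact code):

```python
def smoothed_sampling_probs(sizes, alpha=0.5):
    """Sample each dataset with probability proportional to size**alpha.
    alpha=1.0 gives proportional sampling; alpha=0.0 gives uniform sampling."""
    weights = [n ** alpha for n in sizes]
    total = sum(weights)
    return [w / total for w in weights]

# With alpha=0.5, a treebank 16x larger is sampled only 4x as often:
probs = smoothed_sampling_probs([100, 1600])
# probs == [0.2, 0.8], versus roughly [0.06, 0.94] with proportional sampling
```

The smaller treebank thus contributes a larger share of training batches than its raw size would give it, which is the intended counterweight to the size imbalance.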
§ 5.4 MULTILINGUAL TRAINING

For UUParser and Machamp, we explore multilingual training. We limit ourselves to the North Germanic languages, all relatively closely related to Swedish. We train two Swedish models: on Talbanken only, to be comparable with Swepipe, and also with LinES-M. In addition, we train three models with different subsets of the other North Germanic languages. For our multilingual models, we first combine Swedish with Norwegian, which has three treebanks covering both variants of Norwegian. We then add Danish, to train a Scandinavian model. The reason for adding Norwegian first, despite the fact that Danish is considered a closer relative of Swedish, is the availability of more data for Norwegian, with variability in language variants. Our final model, NorthG, also adds Faroese and Icelandic, which are more distant from Swedish and not mutually intelligible. The language groups are summarized in Table 4.
§ 6 RESULTS

Tables 5 and 6 show results from the standard and targeted evaluations for Swepipe, UUparser${}^{m}$ with Machamp${}^{k}$ POS-tags, and Machamp${}^{k}$ trained with KB-BERT. In all tables, we mark the three best results for each metric in bold.

Table 5 shows results on UD test sets. We see no obvious differences between LAS on the in-genre LinES-M and the other two Swedish test sets, indicating that time period might play a bigger role than genre in our scenario. Swepipe has overall the lowest scores, followed by UUparser${}^{m}$, and then Machamp${}^{k}$. For the two Swedish models, the differences between using only Talbanken and adding the small LinES-M training set are typically small, but sometimes with a positive effect for UUparser${}^{m}$ and a negative effect for Machamp${}^{k}$. Adding Norwegian leads to improvements in nearly all scores, often quite substantial, whereas adding additional languages has a smaller impact. The difference between parsers varies for the different relation types. Swepipe does not find any CLEFTs, and falls behind UUparser${}^{m}$ on all other relation types, especially for AUX. Machamp${}^{k}$ improves considerably over UUparser${}^{m}$ for all explored relations except AUX, where both neural parsers perform well, possibly since they both use the POS-tags of Machamp${}^{k}$.

The results in Table 6 for our targeted test set show a partially different picture. First, we note that Swepipe has a very high recall for all relation types except CLEFT, which it never predicts. We think this is mainly an artifact of the sampling procedure for this test set, where the annotated sentences were sampled from Swepipe and UUparser${}^{s}$, with Swepipe POS-tags, which means that they were mostly predicted as correct by Swepipe. The other parsers do not have this advantage, and thus have a lower recall, which we believe is more predictive of real performance.
Swepipe has considerably lower precision than the other parsers for all relation types. We believe that the evaluation should still be fair in comparing UUparser${}^{m}$ and Machamp${}^{k}$, from which no samples were taken. Compared to the standard evaluation, where Machamp${}^{k}$ was clearly better than UUparser${}^{m}$, we now see a more mixed picture: there is no clear overall advantage of Machamp${}^{k}$ over UUparser${}^{m}$, and the results are mixed across relation types and precision/recall. The trends between training languages are also less clear, with some combinations standing out in performance for some relation types. Machamp${}^{k}$ trained with Scand and NorthG has a considerably higher recall on RELCL than the other models, with only a small drop in precision. On CCOMP and NO-AUX, on the other hand, these two models instead have a low recall, without gaining much on precision. We do not see this pattern for UUparser${}^{m}$, where the Scand model is overall strong.

${}^{6}$ https://github.com/google-research/bert/blob/master/multilingual.md

<table><tr><td>Model</td><td colspan="3">LAS</td><td colspan="4">F1, LinES-M</td></tr><tr><td></td><td>LinES-M</td><td>TB</td><td>PUD</td><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>AUX</td></tr><tr><td>Swepipe-Talbank</td><td>71.75</td><td>79.69</td><td>78.82</td><td>-</td><td>61.31</td><td>54.98</td><td>88.45</td></tr><tr><td>UUparser${}^{m}$-Talbank</td><td>72.10</td><td>83.75</td><td>76.66</td><td>26.82</td><td>64.67</td><td>59.62</td><td>93.99</td></tr><tr><td>UUparser${}^{m}$-Swedish</td><td>75.51</td><td>83.76</td><td>77.50</td><td>29.12</td><td>67.37</td><td>61.65</td><td>94.21</td></tr><tr><td>UUparser${}^{m}$-NorSwe</td><td>79.69</td><td>85.60</td><td>81.50</td><td>39.92</td><td>74.34</td><td>66.79</td><td>94.35</td></tr><tr><td>UUparser${}^{m}$-Scand</td><td>79.74</td><td>85.43</td><td>81.34</td><td>41.74</td><td>73.03</td><td>64.93</td><td>94.20</td></tr><tr><td>UUparser${}^{m}$-NorthG</td><td>79.33</td><td>85.35</td><td>81.27</td><td>41.71</td><td>72.82</td><td>64.70</td><td>94.27</td></tr><tr><td>Machamp${}^{k}$-Talbank</td><td>80.54</td><td>92.24</td><td>86.05</td><td>56.73</td><td>79.07</td><td>74.59</td><td>95.44</td></tr><tr><td>Machamp${}^{k}$-Swedish</td><td>80.26</td><td>90.72</td><td>86.83</td><td>49.67</td><td>75.84</td><td>71.29</td><td>93.94</td></tr><tr><td>Machamp${}^{k}$-NorSwe</td><td>83.13</td><td>91.63</td><td>86.79</td><td>55.42</td><td>81.29</td><td>75.32</td><td>95.29</td></tr><tr><td>Machamp${}^{k}$-Scand</td><td>83.16</td><td>92.31</td><td>87.21</td><td>55.54</td><td>81.21</td><td>74.27</td><td>95.97</td></tr><tr><td>Machamp${}^{k}$-NorthG</td><td>83.03</td><td>92.35</td><td>87.17</td><td>56.00</td><td>82.27</td><td>74.78</td><td>95.85</td></tr></table>

Table 5: Results on standard Swedish UD test sets. LAS for all three Swedish test sets, and F1-scores for four relations of interest for LinES-M.

<table><tr><td>Model</td><td colspan="4">Precision</td><td colspan="4">Recall</td></tr><tr><td></td><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>NO-AUX</td><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>NO-AUX</td></tr><tr><td>Swepipe-Talbank</td><td>-</td><td>66.33</td><td>70.41</td><td>84.62</td><td>0.00</td><td>99.25</td><td>98.57</td><td>97.06</td></tr><tr><td>UUparser${}^{m}$-Talbank</td><td>92.46</td><td>93.32</td><td>94.11</td><td>98.14</td><td>50.35</td><td>82.37</td><td>63.97</td><td>51.44</td></tr><tr><td>UUparser${}^{m}$-Swedish</td><td>92.49</td><td>93.45</td><td>95.84</td><td>97.60</td><td>69.79</td><td>81.45</td><td>65.95</td><td>50.85</td></tr><tr><td>UUparser${}^{m}$-NorSwe</td><td>92.12</td><td>94.65</td><td>97.39</td><td>98.30</td><td>84.55</td><td>81.20</td><td>70.87</td><td>56.21</td></tr><tr><td>UUparser${}^{m}$-Scand</td><td>94.64</td><td>95.69</td><td>96.73</td><td>98.72</td><td>84.20</td><td>79.62</td><td>70.48</td><td>61.05</td></tr><tr><td>UUparser${}^{m}$-NorthG</td><td>93.31</td><td>95.55</td><td>96.06</td><td>99.05</td><td>75.00</td><td>79.37</td><td>74.13</td><td>61.57</td></tr><tr><td>Machamp${}^{k}$-Talbank</td><td>94.12</td><td>95.16</td><td>94.63</td><td>98.52</td><td>59.90</td><td>83.46</td><td>75.48</td><td>65.69</td></tr><tr><td>Machamp${}^{k}$-Swedish</td><td>94.92</td><td>96.19</td><td>95.09</td><td>98.81</td><td>53.12</td><td>82.21</td><td>73.81</td><td>65.10</td></tr><tr><td>Machamp${}^{k}$-NorSwe</td><td>95.38</td><td>96.71</td><td>94.77</td><td>99.13</td><td>72.92</td><td>79.70</td><td>73.33</td><td>67.25</td></tr><tr><td>Machamp${}^{k}$-Scand</td><td>96.61</td><td>95.11</td><td>94.29</td><td>99.01</td><td>59.38</td><td>87.47</td><td>66.90</td><td>58.82</td></tr><tr><td>Machamp${}^{k}$-NorthG</td><td>95.38</td><td>93.83</td><td>93.46</td><td>99.00</td><td>64.06</td><td>87.72</td><td>68.10</td><td>58.04</td></tr></table>

Table 6: Precision and recall for our targeted test set.
In Table 7 we show a summary of results for both variants of UUparser and Machamp, showing only precision for the targeted test set, since recall is biased towards Swepipe and UUparser${}^{s}$ due to the sampling.${}^{7}$ We can see that UUparser${}^{s}$ does not consistently improve on LAS over Swepipe when trained on the same Talbanken data, but that adding the Scandinavian treebanks improves the results considerably, both for the UD evaluations and on the targeted test set. When we compare the two variants of UUparser and Machamp, we see that UUparser${}^{m}$ and Machamp${}^{k}$ beat their counterparts consistently on the UD evaluation, and in most cases on the targeted test set. We also see that training on Scand is better than training on Talbanken in the majority of cases, both for UD and on precision for the targeted test set; however, from Table 6, we know that Scand is sometimes not as strong on recall.
§ 7 DISCUSSION

An important question is whether the parser performance on our target task is good enough to use for our study of change in the Swedish written language. Overall, both Machamp and UUparser have good precision for all our relations of interest, always scoring above 90, and reaching scores above 96 for some parsers for each relation type. The recall, however, is considerably lower. This means that the instances of each relation type the parser finds are mostly good, but it does miss a substantial part of relevant instances. The recall is highest for RELCL, where it is well above 80 for several of the Machamp models, with UUparser also above 80. This approaches a level that is usable for our end project of finding syntactic features in 18th-19th-century literature and tracking them over time. Other relation types show more mixed performance, such as CLEFT, for which UUparser${}^{m}$ trained on NorSwe and Scand performs very well, with a recall of over 84, but where other models perform considerably worse. The recall of CCOMP, and especially of NO-AUX, is lower, and we would need to improve parser performance for those relation types, possibly by using domain adaptation techniques, before they reach a useful level. The varying performance of parsers for different relation types is in line with the results for German by Adelmann et al. (2018), who recommend choosing different parsers for different end goals.

${}^{7}$ To save space, we only show results for two training language groups. The other groups exhibit largely the same trends.

<table><tr><td>Model</td><td colspan="3">LAS</td><td colspan="4">F1, UD_LinES-M</td><td colspan="4">P, litt</td></tr><tr><td></td><td>LinES-M</td><td>TB</td><td>PUD</td><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>AUX</td><td>CLEFT</td><td>RELCL</td><td>CCOMP</td><td>NO-AUX</td></tr><tr><td>Swepipe-Talbank</td><td>71.75</td><td>79.69</td><td>78.82</td><td>-</td><td>61.31</td><td>54.98</td><td>88.45</td><td>-</td><td>79.52</td><td>82.14</td><td>90.41</td></tr><tr><td>UUparser${}^{s}$-Talbank</td><td>70.80</td><td>82.35</td><td>75.78</td><td>26.08</td><td>63.01</td><td>58.39</td><td>91.31</td><td>92.80</td><td>92.52</td><td>93.05</td><td>96.50</td></tr><tr><td>UUparser${}^{s}$-Scand</td><td>77.63</td><td>83.39</td><td>80.25</td><td>30.77</td><td>70.55</td><td>62.22</td><td>90.82</td><td>93.86</td><td>94.07</td><td>94.66</td><td>97.95</td></tr><tr><td>UUparser${}^{m}$-Talbank</td><td>72.10</td><td>83.75</td><td>76.66</td><td>26.82</td><td>64.67</td><td>59.62</td><td>93.99</td><td>92.46</td><td>93.32</td><td>94.11</td><td>98.14</td></tr><tr><td>UUparser${}^{m}$-Scand</td><td>79.74</td><td>85.43</td><td>81.34</td><td>41.74</td><td>73.03</td><td>64.93</td><td>94.20</td><td>94.64</td><td>95.69</td><td>96.73</td><td>98.72</td></tr><tr><td>Machamp${}^{m}$-Talbank</td><td>77.20</td><td>89.35</td><td>84.21</td><td>38.47</td><td>72.87</td><td>69.09</td><td>92.91</td><td>92.94</td><td>96.13</td><td>93.00</td><td>98.23</td></tr><tr><td>Machamp${}^{m}$-Scand</td><td>80.13</td><td>89.50</td><td>85.79</td><td>43.09</td><td>77.67</td><td>71.18</td><td>93.49</td><td>93.41</td><td>96.98</td><td>92.47</td><td>99.08</td></tr><tr><td>Machamp${}^{k}$-Talbank</td><td>80.54</td><td>92.24</td><td>86.05</td><td>56.73</td><td>79.07</td><td>74.59</td><td>95.44</td><td>94.12</td><td>95.16</td><td>94.63</td><td>98.52</td></tr><tr><td>Machamp${}^{k}$-Scand</td><td>83.16</td><td>92.31</td><td>87.21</td><td>55.54</td><td>81.21</td><td>74.27</td><td>95.97</td><td>96.61</td><td>95.11</td><td>94.29</td><td>99.01</td></tr></table>

Table 7: Comparison of parser variants, on standard test sets and our test set.
On the standard evaluation, Machamp is clearly better overall than UUparser, training on Scand is better than training only on Swedish, KB-BERT is better than mBERT for Machamp, and UUparser is better with Machamp tags than with Swepipe tags. For our targeted test sets, however, we see fewer clear trends, and there is much more variation among the systems. Machamp${}^{k}$ and UUparser${}^{m}$ tend to perform better than their counterparts, and the multilingual models may have a small advantage over the Swedish-only models. Swepipe clearly falls behind the other parsers on precision, whereas its high recall can be explained by the sampling procedure. A side effect of our study is the finding that Machamp${}^{k}$ trained on Scand or NorthG is a very strong parser for modern Swedish, as measured by the UD test sets.
Our targeted test set does suffer from an issue with sampling from only two parsers, which affects its recall mainly for Swepipe, but also for UUparser${}^{s}$. We believe UUparser${}^{m}$ is less affected, since it relies on a different set of POS-tags. The dataset is also relatively small, especially for the CLEFT relation. However, we think it still contributes to showing that when selecting a parser for a particular target task and text type, we cannot rely solely on evaluation scores on standard test sets, as also shown by Adelmann et al. (2018).
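The per-relation precision and recall used in the targeted evaluation can be computed from sets of predicted and gold instances. A minimal sketch of this computation (our own illustration, identifying an instance by sentence, dependent token, and relation label):

```python
def per_label_precision_recall(gold, pred):
    """gold, pred: sets of (sentence_id, dependent_index, label) tuples.
    Returns {label: (precision, recall)} over all labels seen in either set."""
    result = {}
    for label in {t[-1] for t in gold | pred}:
        g = {t for t in gold if t[-1] == label}
        p = {t for t in pred if t[-1] == label}
        tp = len(g & p)  # instances both predicted and annotated as this label
        result[label] = (tp / len(p) if p else 0.0,
                         tp / len(g) if g else 0.0)
    return result

gold = {(1, 4, "RELCL"), (2, 7, "CCOMP")}
pred = {(1, 4, "RELCL"), (1, 9, "RELCL")}
scores = per_label_precision_recall(gold, pred)
# RELCL: precision 0.5, recall 1.0; CCOMP: precision 0.0, recall 0.0
```

Because recall divides by the number of gold instances, any bias in how the gold instances were sampled, as with our two-parser sampling, shows up directly in the recall figures.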
Even if we focus on the F1-score for the relations of interest, rather than on the full tree, we see no clear similarity of parser ranking to the evaluation of the same relation types in our targeted test set. To further investigate whether this type of test set can indeed be useful, we would need to perform further analysis. It would be interesting to learn more about where the main improvements shown on UD evaluation for a parser like Machamp${}^{k}$ actually occur. We also think it would be useful to reconsider the sampling for the test set, specifically whether it is worth the effort to also annotate some raw text, in order to find instances not identified by any of our parsers. Another issue that we did not yet explore is whether parsing performance varies over the time period in question.
§ 8 CONCLUSION

We describe a study of Swedish dependency parsers with the goal of tracking changes in the use of certain types of subordinate clauses and related phenomena in Swedish literature from 1800-1930. Since standard test sets do not cover this time period or genre, and we did not have the resources to perform a full annotation of dependency trees, we propose a smaller-scale annotation task focusing on single relation types. We evaluated a set of parsers on UD and on our targeted test set. While there was a clear and relatively consistent order between the parsers on the UD evaluation, the performance was more mixed on our targeted test set, without a clear overall best parser across relation types. We believe that our proposed annotation scheme can be useful in complementing standard evaluations, with a low annotation effort, but that more analysis is needed.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/xbPTfBIUby/Initial_manuscript_md/Initial_manuscript.md
# Automatic Transcription for Estonian Children's Speech
Anonymous Author
Affiliation / Address line 1
email@domain

Anonymouser Author
Affiliation / Address line 1
email@domain

Anonymousest Author
Affiliation / Address line 1
email@domain
## Abstract

We evaluate the impact of recent improvements in Automatic Speech Recognition (ASR) on transcribing Estonian children's speech. Our research focuses on fine-tuning large ASR models with a 10-hour Estonian children's speech dataset to create accurate transcriptions. Our results show that large pre-trained models hold great potential when fine-tuned first with a more substantial Estonian adult speech corpus and then further trained with children's speech.
## 1 Introduction

Automatic Speech Recognition (ASR) continues to face challenges in accurately transcribing children's speech. Research efforts are underway to adapt adult ASR models to better handle the unique pronunciation variations and limited vocabulary that are characteristic of children's speech (Thienpondt and Demuynck, 2022; Dutta et al., 2022). These adaptations are necessary due to the limitations of current ASR systems, which often lack adequate representation of children's speech and struggle to generalize to new examples.

Recent advancements in ASR technology, including the use of large transformer-based models and unsupervised pre-training techniques, have resulted in improved performance for adult speech recognition, with the ability to train on a diverse range of data without human annotations (Baevski et al., 2020; Radford et al., 2022; Hsu et al., 2021). These models demonstrate greater robustness and generalization compared to previous systems. However, the effectiveness of these advanced ASR models for children's speech, especially in low-resource languages like Estonian, remains untested.

In this paper, we investigate two multilingual speech models, Facebook's Wav2Vec2-XLS-R (Babu et al., 2021) and OpenAI's Whisper (Radford et al., 2022), as potential starting points for building an ASR system transcribing Estonian children's speech. Our objective is to determine the potential of these models for creating low-effort ASR systems for children speaking a low-resource language like Estonian, for which there are no ASR systems for children's speech. To accomplish this, we fine-tune the XLS-R and Whisper models from scratch using children's speech data. We also fine-tune pre-existing models for the Estonian language with additional children's speech recordings. Furthermore, we compare the quality of these ASR systems against a pre-made Estonian ASR system provided by Microsoft Azure and explore its fine-tuning capabilities.

Our research indicates that XLS-R models and Whisper models can serve as effective starting points for building an ASR system using only 10 hours of children's speech. However, for optimal performance, these models should first be fine-tuned with Estonian adult speech. We achieve the best word error rate of around 15 using an XLS-R model that was fine-tuned with Estonian ASR datasets and further trained with children's speech. Furthermore, our results show that the Azure speech-to-text model performs similarly to the Estonian XLS-R model but not as well as the fine-tuned public models.

In the next sections, we describe the data we used for evaluation and training, the models we used and how we fine-tuned them, and last but not least we present and analyse the results.
## 2 Dataset and evaluation

The Children ASR dataset used in this work consists of speech recordings from 53 children aged 6 to 13. The data was collected by the Children's Clinic of Tartu University Hospital and contains a mix of both boys and girls speaking about various topics such as answering questions, describing pictures, talking about their family and friends, and more. The dataset is divided into three subsets, test, dev, and train, with no overlap in speakers or texts.

The test set contains all age and gender groups and has a total recording duration of 278 minutes (approximately 4.6 hours). The development set is missing some speakers and has a total recording duration of 182 minutes (approximately 3 hours). The training set is also missing some speakers and has a total recording duration of 613 minutes (approximately 10 hours). A breakdown of the total recording duration for the test set by age and gender of the speakers is shown in Table 1.
<table><tr><td>Age</td><td>Girls (min)</td><td>Boys (min)</td><td>Total (min)</td></tr><tr><td>6</td><td>17</td><td>21</td><td>38</td></tr><tr><td>7</td><td>14</td><td>16</td><td>30</td></tr><tr><td>8</td><td>17</td><td>14</td><td>31</td></tr><tr><td>9</td><td>22</td><td>18</td><td>40</td></tr><tr><td>10</td><td>15</td><td>17</td><td>32</td></tr><tr><td>11</td><td>20</td><td>17</td><td>37</td></tr><tr><td>12</td><td>16</td><td>22</td><td>38</td></tr><tr><td>13</td><td>19</td><td>13</td><td>32</td></tr><tr><td>Total</td><td>140</td><td>138</td><td>278</td></tr></table>
Table 1: Total recording duration in minutes for the Estonian children ASR test set, broken down by age and gender of the speakers.
The children in the dataset speak about a wide range of topics, covering everything from answering questions and describing pictures to discussing their family and friends. The recordings also include children reading fairytales, reciting poems, and saying specific sentences. The utterances in the dataset thus vary in their level of spontaneity: some are unscripted expressions of thoughts, while others feature children reading.

We evaluate the performance of our speech recognition models using the standard measure of word error rate (WER). This involves converting all text to lowercase and removing punctuation, but not standardizing different spelling variations. Our reference transcriptions reflect the pronunciation of the children, including any errors they may make. However, the line between correct and incorrect pronunciation is often blurry, and some children's speech can be difficult to comprehend. We do not consider this ambiguity in the human transcriptions and simply compare the models' output to our reference transcription, which could lead to increased WERs.
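The normalization and WER computation described above can be sketched as follows. This is our own minimal implementation of the standard word-level edit distance, not the exact evaluation script (note that `string.punctuation` covers ASCII punctuation only):

```python
import string

def normalize(text):
    # lowercase and strip (ASCII) punctuation, as in the evaluation setup
    table = str.maketrans("", "", string.punctuation)
    return text.lower().translate(table).split()

def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by the
    number of reference words."""
    ref, hyp = normalize(reference), normalize(hypothesis)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("The cat, sat.", "the cat sat")` is 0.0 after normalization, while one substituted word in a three-word reference gives a WER of 1/3.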
## 3 Models and training

We use both public large speech models and a private black-box speech service. In the case of the public models, we also searched for models already fine-tuned with Estonian speech data. We fine-tune a selection of these models with the children's speech dataset described in the previous section.

For the public models, we use two multilingual ones: Facebook's XLS-R and OpenAI's Whisper (Radford et al., 2022). The XLS-R model is trained with a speech modelling objective, not ASR, but it can be fine-tuned for ASR with the Connectionist Temporal Classification (CTC) algorithm (Graves et al., 2006). Whisper, on the other hand, is a multipurpose model that contains both transformer encoder and decoder blocks and has been trained on several speech-processing tasks, like multilingual speech recognition, speech translation and voice activity detection (Radford et al., 2022).

The available XLS-R models have 300 million, 1 billion and 2 billion parameters; we use the two smaller ones in this work. The Whisper model comes in six different sizes; we use medium and large-v2, since the Estonian error rates for the other ones are relatively high. There is one Estonian-specific fine-tuned model available for the 300-million-parameter version, trained with over 700 hours of Estonian speech data (Alumäe and Olev, 2022). There are several Estonian Whisper models available on HuggingFace, but these are trained with fewer data examples. We use the best available medium and large-v2 ones.${}^{1,2}$

---

${}^{1}$ https://huggingface.co/agnesluhtaru/whisper-medium-et-ERR2020

${}^{2}$ https://huggingface.co/agnesluhtaru/whisper-large-et-ERR2020-v2

---

We use standard fine-tuning procedures. For training XLS-R-based ASR models from scratch, we use a learning rate of $3\mathrm{e}{-4}$, a 400-step warmup, and train the models for 60 epochs with the children's speech dataset, which is less than 4,000 steps. When further fine-tuning the Estonian XLS-R model with children's speech, we use a learning rate of $2\mathrm{e}{-5}$ and 200 warmup steps. We fine-tune all the Whisper models with a warmup of 10% of the steps and a learning rate of 1e-05. When fine-tuning the out-of-the-box Whisper models, we train for 5000 steps, or around 40 epochs; when further fine-tuning models already trained with Estonian adult speech, we train the large model for 2000 steps (over 16 epochs) and the medium model for 1000 steps (eight epochs).
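To illustrate the warmup portion of these schedules, the learning rate ramps linearly to its peak over the first fraction of updates. The sketch below assumes a linear decay to zero after warmup, which is a common default in fine-tuning setups but is not something our configuration specifies:

```python
def learning_rate(step, total_steps, peak_lr=1e-5, warmup_frac=0.1):
    """Linear warmup to peak_lr over the first warmup_frac of training,
    then linear decay to zero (decay shape assumed, not specified above)."""
    warmup = max(1, int(total_steps * warmup_frac))
    if step < warmup:
        return peak_lr * step / warmup
    return peak_lr * (total_steps - step) / (total_steps - warmup)

# For a 5000-step Whisper fine-tuning run with 10% warmup,
# the peak learning rate of 1e-5 is reached at step 500.
peak = learning_rate(500, 5000)
```

The warmup keeps early gradient updates small while the randomly initialized task-specific parts settle, which matters when fine-tuning a large pre-trained model on only 10 hours of data.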
|
| 146 |
+
|
| 147 |
+
For the private model, we use Microsoft Azure Speech service’s speech-to-text ${}^{3}$ , which requires an Azure subscription and a Speech resource. The transcription services can be accessed by making REST requests.
Microsoft Azure offers the option to fine-tune the model with custom datasets. This process involves uploading data to train the models, followed by deploying the trained models. Since audio-based fine-tuning is not available for Estonian, we use text-based tuning for our work with the texts from the children's speech dataset.
## 4 Results
In this section, we describe the results of all the models based on Facebook's XLS-R, OpenAI's Whisper and Microsoft Azure speech-to-text.
### 4.1 XLS-R
Table 2 shows the word error rate (WER) scores of XLS-R models fine-tuned using only 10 hours of Estonian children's speech data, the fine-tuned Estonian model (Alumäe and Olev, 2022) and the Estonian model further trained with children's speech. We can see that the limited amount of data for fine-tuning XLS-R from scratch results in a high WER of over 30 for both the 300-million and one-billion parameter models. Training an ASR model using only 10 hours of speech data can be challenging, especially when the speech is in a low-resource language and comes from children.
The results show that the pre-trained Estonian ASR model has a WER of around 20, while further fine-tuning the model with children's speech data leads to even better results, with a WER of less than 15. Based on the lower WER score for the fine-tuned one-billion-parameter model, we suggest that a larger model fine-tuned with Estonian data first and then further trained on children's speech could lead to even better results.
The results indicate that fine-tuning the Estonian ASR model using children's speech data improves performance across all age groups (refer to Figure 1). Younger speakers tend to have a higher word error rate (WER) than older speakers, although this relationship is not always straightforward. There are some exceptions, such as the recognition performance for 13-year-olds being worse than that of younger age groups. This highlights that speaker variability plays a role in the WER results. Nevertheless, the fine-tuning of the ASR model using children's speech data reduces the differences in recognition performance across age groups, resulting in improved overall performance.

<table><tr><td>Model</td><td>Test</td><td>Dev</td></tr><tr><td>xls-r-300M-children</td><td>36.3</td><td>34.58</td></tr><tr><td>xls-r-1B-children</td><td>30.89</td><td>31.06</td></tr><tr><td>xls-r-300M-et</td><td>20.62</td><td>19.15</td></tr><tr><td>xls-r-300M-et-children</td><td>15.31</td><td>14.30</td></tr></table>

Table 2: Comparison of WER scores for Facebook's Wav2Vec2 XLS-R (Babu et al., 2021) based models fine-tuned with only Estonian children's speech, only Estonian adult speech (Alumäe and Olev, 2022), and first fine-tuned to Estonian and further trained with children's speech.

Figure 1: Performance comparison of Estonian XLS-R ASR and children's speech fine-tuned models across age groups.
### 4.2 Whisper
The performance of the out-of-the-box Whisper models on the children's dataset (see Table 3) is comparable to the scores reported by Radford et al. (2022) on Estonian Common Voice 9 (Ardila et al., 2020). All models have a WER of at least 35. So, although we can use Whisper without fine-tuning, it does not transcribe Estonian speech well and therefore does not produce good transcriptions for Estonian children's speech either.

---

${}^{3}$ https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-to-text
When fine-tuning the model with only 10 hours of children's speech, we already get better results, with the large-v2 model reaching a WER of around 20. Notably, this is better than starting from the models fine-tuned with Estonian adult speech, whereas for XLS-R the Estonian-first approach performed much better. However, we do not have sufficient evidence to conclude that the XLS-R models are inherently better, since our Whisper fine-tuning may not be optimal.
<table><tr><td>Model</td><td>Test</td><td>Dev</td></tr><tr><td>Whisper-medium</td><td>46.11</td><td>43.21</td></tr><tr><td>Whisper-large-v2</td><td>36.01</td><td>35.06</td></tr><tr><td>Whisper-medium-children</td><td>25.08</td><td>24.29</td></tr><tr><td>Whisper-large-v2-children</td><td>20.38</td><td>20.58</td></tr><tr><td>Whisper-medium-et</td><td>28.78</td><td>26.83</td></tr><tr><td>Whisper-large-v2-et</td><td>29.2</td><td>28.13</td></tr><tr><td>Whisper-medium-et-children</td><td>18.66</td><td>17.49</td></tr><tr><td>Whisper-large-v2-et-children</td><td>16.02</td><td>15.73</td></tr></table>
Table 3: Comparison of WER scores for OpenAI Whisper (Radford et al., 2022) models and Whisper models fine-tuned with only Estonian children's speech, only Estonian adult speech and first fine-tuned to Estonian and further trained with children's speech.
Although the Estonian Whisper models were fine-tuned with fewer audio-text pairs than the XLS-R model, the large Whisper model, when trained further with children's speech, achieved a WER similar to that of the doubly fine-tuned smaller XLS-R model.
### 4.3 Azure
The results from our evaluation on the children's speech dataset show that the out-of-the-box Azure speech-to-text model performs similarly to or better than the fine-tuned Estonian XLS-R model (Alumäe and Olev, 2022). As indicated in Table 4, the Microsoft Azure speech-to-text scores are around 20 or below.
<table><tr><td>Model</td><td>Test</td><td>Dev</td></tr><tr><td>Microsoft Azure</td><td>18.93</td><td>20.18</td></tr><tr><td>Azure text-tuned</td><td>20.31</td><td>21.21</td></tr></table>
Table 4: WER scores for Microsoft Azure speech-to-text and its custom text-tuned version.
However, the experiment also shows that text-tuning is not the best approach for this particular dataset. The dataset mostly contains simple vocabulary and little terminology, most likely leading to quick overfitting with text-tuning. Currently, text-tuning is the only option available for the Estonian language, but it may not be well suited to children's speech datasets.
## 5 Discussion
Our experiments show that children's speech recognition remains a difficult problem, but large speech models look promising. It is possible to build an ASR system for Estonian children's speech without any bells and whistles, using only 10 hours of data, and get output that is decent and might be good enough for use in chatbots. However, when it comes to six-year-olds, whose speech is difficult to understand even for the human ear, the system still struggles.
We evaluate different models, and it appears that both OpenAI's Whisper and Facebook's XLS-R are viable options for developing a speech recognition model for Estonian children's speech. The current best word error rate is around 15, achieved with XLS-R. However, it remains unclear whether this pre-trained model is optimal for children's speech or whether a lower error rate could be achieved with Whisper after fine-tuning with a similar amount of Estonian adult speech. Additionally, we cannot obtain comparable results with the Azure service, as it does not permit fine-tuning with audio data.
Our findings suggest that the results could be improved by using a larger XLS-R model as the base or by fine-tuning Whisper models with more data. Additionally, we do not use a separate language model, which is possible with both Whisper and XLS-R models and could potentially enhance the performance of these models.
## 6 Conclusion
We test the performance of two speech recognition models, XLS-R and Whisper, on transcribing Estonian children's speech. We fine-tune the models with children's speech data and compare them to an off-the-shelf system from Microsoft Azure. Both models fine-tuned with children's speech outperform Microsoft Azure, which does not allow fine-tuning with audio for Estonian, and are promising for a children's ASR system.
## References
Tanel Alumäe and Aivo Olev. 2022. Estonian speech recognition and transcription editing service. Volume 10, pages 409-421.
Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. Common voice: A massively-multilingual speech corpus. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4218-4222, Marseille, France. European Language Resources Association. https://aclanthology.org/2020.lrec-1.520
Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021. XLS-R: Self-supervised cross-lingual speech representation learning at scale. https://doi.org/10.48550/ARXIV.2111.09296
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. In Advances in Neural Information Processing Systems, volume 33, pages 12449-12460. Curran Associates, Inc.
Satwik Dutta, Sarah Anne Tao, Jacob C. Reyna, Rebecca Elizabeth Hacker, Dwight W. Irvin, Jay F. Buzhardt, and John H.L. Hansen. 2022. Challenges remain in Building ASR for Spontaneous Preschool Children Speech in Naturalistic Educational Environments. In Proc. Interspeech 2022, pages 4322-4326. https://doi.org/10.21437/Interspeech.2022-555
Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 369-376, New York, NY, USA. Association for Computing Machinery. https://doi.org/10.1145/1143844.1143891
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 29:3451-3460. https://doi.org/10.1109/TASLP.2021.3122291
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale weak supervision. https://doi.org/10.48550/ARXIV.2212.04356
Jenthe Thienpondt and Kris Demuynck. 2022. Transfer Learning for Robust Low-Resource Children's Speech ASR with Transformers and Source-Filter Warping. In Proc. Interspeech 2022, pages 2213-2217. https://doi.org/10.21437/Interspeech.2022-10964
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/xbPTfBIUby/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,412 @@
§ AUTOMATIC TRANSCRIPTION FOR ESTONIAN CHILDREN'S SPEECH

Anonymous Author
Affiliation / Address line 1
email@domain

Anonymouser Author
Affiliation / Address line 1
email@domain

Anonymousest Author
Affiliation / Address line 1
email@domain
§ ABSTRACT
We evaluate the impact of recent improvements in Automatic Speech Recognition (ASR) on transcribing Estonian children's speech. Our research focuses on fine-tuning large ASR models with a 10-hour Estonian children's speech dataset to create accurate transcriptions. Our results show that large pre-trained models hold great potential when fine-tuned first with a more substantial Estonian adult speech corpus and then further trained with children's speech.
§ 1 INTRODUCTION
Automatic Speech Recognition (ASR) continues to face challenges in accurately transcribing children's speech. Research efforts are underway to adapt adult ASR models to better handle the unique pronunciation variations and limited vocabulary that are characteristic of children's speech (Thienpondt and Demuynck, 2022; Dutta et al., 2022). These adaptations are necessary due to the limitations of current ASR systems, which often lack adequate representation of children's speech and struggle to generalize to new examples.

Recent advancements in ASR technology, including the use of large transformer-based models and unsupervised pre-training techniques, have resulted in improved performance for adult speech recognition, with the ability to train on a diverse range of data without human annotations (Baevski et al., 2020; Radford et al., 2022; Hsu et al., 2021). These models demonstrate greater robustness and generalization compared to previous systems. However, the effectiveness of these advanced ASR models for children's speech, especially in low-resource languages like Estonian, remains untested.

In this paper, we investigate two multilingual speech models, Facebook's Wav2Vec2-XLS-R (Babu et al., 2021) and OpenAI's Whisper (Radford et al., 2022), as potential starting points for building an ASR system transcribing Estonian children's speech. Our objective is to determine the potential of these models in creating low-effort ASR systems for children speaking a low-resource language like Estonian, for which there are no ASR systems for children's speech.

To accomplish this, we fine-tune the XLS-R and Whisper models from scratch using children's speech data. We also fine-tune pre-existing models for the Estonian language with additional children's speech recordings. Furthermore, we compare the quality of the ASR systems by evaluating a pre-made Estonian ASR system provided by Microsoft Azure and exploring its fine-tuning capabilities.

Our research indicates that XLS-R and Whisper models can serve as effective starting points for building an ASR system using only 10 hours of children's speech. However, for optimal performance, these models should first be fine-tuned with Estonian adult speech. We achieve the best word error rate of around 15 using an XLS-R model that was fine-tuned with Estonian ASR datasets and further trained with children's speech. Furthermore, our results show that the Azure speech-to-text model performs similarly to the Estonian XLS-R model but not as well as the fine-tuned public models.

In the next sections, we describe the data we used for evaluation and training, the models we used and how we fine-tuned them, and last but not least we present and analyse the results.
§ 2 DATASET AND EVALUATION
The Children ASR dataset used in this work consists of speech recordings from 53 children aged 6 to 13. The data was collected by the Children's Clinic of Tartu University Hospital and contains a mix of both boys and girls speaking about various topics such as answering questions, describing pictures, talking about their family and friends, and more. The dataset is divided into three subsets, test, dev, and train, with no overlap in speakers or texts.

The test set contains all age and gender groups and has a total recording duration of 278 minutes (approximately 4.6 hours). The development set is missing some speakers and has a total recording duration of 182 minutes (approximately 3 hours). The training set is also missing some speakers and has a total recording duration of 613 minutes (approximately 10 hours). A breakdown of the total recording duration for the test set by age and gender of the speakers is shown in Table 1.
Age Girls (min) Boys (min) Total (min)
6 17 21 38
7 14 16 30
8 17 14 31
9 22 18 40
10 15 17 32
11 20 17 37
12 16 22 38
13 19 13 32
Total 140 138 278

Table 1: Total recording duration in minutes for the Estonian children ASR test set, broken down by age and gender of the speakers.
The children in the dataset speak about a wide range of topics, covering everything from answering questions and describing pictures to discussing their family and friends. They also include recordings of children reading fairytales, reciting poems, and saying specific sentences. The utterances in the dataset vary in their level of spontaneity - some are unscripted expressions of thoughts, while others feature children reading.
We evaluate the performance of our speech recognition models using the standard measure of word error rate (WER). This involves converting all text to lowercase and removing punctuation, but not standardizing different spelling variations. Our reference transcriptions reflect the pronunciation of children, including any errors they may make. However, the line between correct and incorrect pronunciation is often blurry, and some children's speech can be difficult to comprehend. We do not consider the ambiguity in human transcriptions and simply compare the models' output to our reference transcription, which could lead to increased WERs.
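The evaluation procedure described above (lowercasing, punctuation removal, word-level comparison) can be sketched as a small WER implementation; the normalization is the one described in the text, while the edit-distance code is a standard textbook version, not the exact evaluation script used:

```python
# Sketch of the WER evaluation: normalize text (lowercase, strip
# punctuation, keep spelling variants as-is), then divide the word-level
# Levenshtein distance by the reference length.
import re

def normalize(text):
    """Lowercase and strip punctuation, returning a word list."""
    return re.sub(r"[^\w\s]", "", text.lower()).split()

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = normalize(reference), normalize(hypothesis)
    prev = list(range(len(hyp) + 1))  # distances for the empty reference
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1] / max(len(ref), 1)

print(wer("Tere, kuidas läheb?", "tere kuidas läheb"))  # 0.0
print(wer("tere kuidas läheb", "tere kuidas"))          # one deletion: 1/3
```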
§ 3 MODELS AND TRAINING
We use both public large speech models and a private black-box speech service. In the case of public models, we also searched for models already fine-tuned with Estonian speech data. We fine-tune a selection of these models with the children's speech dataset described in the previous section.
For public models, we use two multilingual
|
| 157 |
+
|
| 158 |
+
ones: Facebook's XLS-R and OpenAI's Whisper 178 (Radford et al., 2022). XLS-R model is trained
|
| 159 |
+
|
| 160 |
+
with speech modelling objective, not ASR but it 180 can be fine-tuned to ASR with Connectionist Temporal Classification (CTC) (Graves et al., 2006) algorithm. The Whisper on the other hand is a multipurpose model that contains both transformer encoder and decoder blocks and has been trained on several speech-processing tasks, like multilingual speech recognition, speech translation and voice activity detection (Radford et al., 2022).
|
| 161 |
+
|
| 162 |
+
The available XLS-R models have 300 million, 1 billion and 2 billion parameters, we are using the two smaller ones in this work. The Whisper model comes in six different sizes; we are using medium and large-v2 since the Estonian error rates for other ones are relatively high. There is one Estonian-specific fine-tuned model available for the 300 million parameter version, trained with over 700 hours of Estonian speech data (Alumäe and Olev, 2022). There are several Estonian Whisper models available in HuggingFace but these are trained with fewer data examples. We are using the best available medium and large-v2 ones. ${}^{12}$
|
| 163 |
+
|
| 164 |
+
We use standard fine-tuning procedures. For training XLS-R-based ASR models from scratch, we use the learning rate of $3\mathrm{e} - 4$ , a 400-step warmup and train the models for 60 epochs with children's speech dataset, which is less than 4000 steps. When further fine-tuning the Estonian XLS-R model with children's speech, we use the learning rate of $2\mathrm{e} - 5$ and 200 warmup steps. We fine-tune all the Whisper models with warmup 10%
|
| 165 |
+
|
| 166 |
+
215 of the steps and learning rate 1e-05. When fine-
|
| 167 |
+
|
| 168 |
+
${}^{1}$ https://huggingface.co/agnesluhtaru/ whisper-medium-et-ERR2020
|
| 169 |
+
|
| 170 |
+
${}^{2}$ https://huggingface.co/agnesluhtaru/ whisper-large-et-ERR2020-v2
|
| 171 |
+
|
| 172 |
+
217 tuning the out-of-the-box Whisper models, we train these for 5000 steps or atound 40 epochs and when fine-tuned models already trained with Estonian adult speech, we train the large model for 2000 steps or over 16 epochs and medium model for 1000 steps or eight epochs.
|
| 173 |
+
|
| 174 |
+
For the private model, we use Microsoft Azure Speech service’s speech-to-text ${}^{3}$ , which requires an Azure subscription and a Speech resource. The transcription services can be accessed by making REST requests.
|
| 175 |
+
|
| 176 |
+
229 Microsoft Azure offers the option to fine-tune the model with custom datasets. This process involves uploading data to train the models, fol-
|
| 177 |
+
|
| 178 |
+
232 lowed by deploying the trained models. Since audio-based fine-tuning is not available for Esto-
|
| 179 |
+
|
| 180 |
+
234 nian, we use text-based tuning for our work with the texts from the children's speech dataset.
|
| 181 |
+
|
| 182 |
+
§ 4 RESULTS
|
| 183 |
+
|
| 184 |
+
In this section, we describe the results of all the models based on Facebook's XLS-R, OpenAI'S Whisper and Microsoft Azure speech-to-text.
|
| 185 |
+
|
| 186 |
+
§ 4.1 XLS-R
|
| 187 |
+
|
| 188 |
+
Table 2 shows the word error rate (WER) scores of fine-tuned Estonian XLS-R models using only 10 hours of Estonian children's speech data, the fine-tuned Estonian model (Alumäe and Olev, 2022) and Estonian model further trained with children's speech. We can see that the limited amount of
|
| 189 |
+
|
| 190 |
+
249 data for fine-tuning XLS-R from scratch results in a high WER of over 30 for both models with 300 million and one billion parameters. Training an ASR model using only 10 hours of speech data
|
| 191 |
+
|
| 192 |
+
254 can be challenging, especially when the speech is for a low-resource language and children.
|
| 193 |
+
|
| 194 |
+
The results show that the pre-trained Estonian ASR model has a WER of around 20, while further fine-tuning the model with children's speech data
|
| 195 |
+
|
| 196 |
+
259 leads to even better results, with a WER of less than 15. Based on the lower WER score for fine-tuned one billion parameter model, we can suggest that a larger model fine-tuned with Estonian data first and then further trained on children's speech could lead to even better results.
|
| 197 |
+
|
| 198 |
+
The results indicate that fine-tuning the Estonian ASR model using children's speech data im-
|
| 199 |
+
|
| 200 |
+
269
|
| 201 |
+
|
| 202 |
+
max width=
|
| 203 |
+
|
| 204 |
+
Model Test Dev
|
| 205 |
+
|
| 206 |
+
1-3
|
| 207 |
+
xls-r-300M-children 36.3 34.58
|
| 208 |
+
|
| 209 |
+
1-3
|
| 210 |
+
xls-r-1B-children 30.89 31.06
|
| 211 |
+
|
| 212 |
+
1-3
|
| 213 |
+
xls-r-300M-et 20.62 19.15
|
| 214 |
+
|
| 215 |
+
1-3
|
| 216 |
+
xls-r-300M-et-children 15.31 14.30
|
| 217 |
+
|
| 218 |
+
1-3
|
| 219 |
+
|
| 220 |
+
Table 2: Comparison of WER scores for Face-book's Wav2Vec2 XLS-R (Babu et al., 2021) based models fine-tuned with only Estonian children's speech, only Estonian adult speech (Alumäe and Olev, 2022) and first fine-tuned to Estonian and further trained with children's speech.
|
| 221 |
+
|
| 222 |
+
270
|
| 223 |
+
|
| 224 |
+
271
|
| 225 |
+
|
| 226 |
+
272
|
| 227 |
+
|
| 228 |
+
273
|
| 229 |
+
|
| 230 |
+
274
|
| 231 |
+
|
| 232 |
+
275
|
| 233 |
+
|
| 234 |
+
276
|
| 235 |
+
|
| 236 |
+
278
|
| 237 |
+
|
| 238 |
+
279
|
| 239 |
+
|
| 240 |
+
280
|
| 241 |
+
|
| 242 |
+
281
|
| 243 |
+
|
| 244 |
+
282
|
| 245 |
+
|
| 246 |
+
283
|
| 247 |
+
|
| 248 |
+
< g r a p h i c s >
|
| 249 |
+
|
| 250 |
+
Figure 1: Performance comparison of Estonian XLS-R ASR and children's speech fine-tuned models across age groups.
|
| 251 |
+
|
| 252 |
+
284
|
| 253 |
+
|
| 254 |
+
285
|
| 255 |
+
|
| 256 |
+
286
|
| 257 |
+
|
| 258 |
+
287
|
| 259 |
+
|
| 260 |
+
288
|
| 261 |
+
|
| 262 |
+
289
|
| 263 |
+
|
| 264 |
+
290
|
| 265 |
+
|
| 266 |
+
291
|
| 267 |
+
|
| 268 |
+
292
|
| 269 |
+
|
| 270 |
+
293
|
| 271 |
+
|
| 272 |
+
294
|
| 273 |
+
|
| 274 |
+
295
|
| 275 |
+
|
| 276 |
+
296
|
| 277 |
+
|
| 278 |
+
297
|
| 279 |
+
|
| 280 |
+
298
|
| 281 |
+
|
| 282 |
+
301
|
| 283 |
+
|
| 284 |
+
proves performance across all age groups (refer 302
|
| 285 |
+
|
| 286 |
+
to Figure 1). Younger speakers tend to have a 303
|
| 287 |
+
|
| 288 |
+
higher word error rate (WER) than older speakers, 304 although this relationship is not always straight-
|
| 289 |
+
|
| 290 |
+
forward. There are some exceptions, such as the 306 recognition performance for 13-year-olds being
|
| 291 |
+
|
| 292 |
+
worse than that of younger age groups. This high- 308
|
| 293 |
+
|
| 294 |
+
lights that speaker variability plays a role in the 309 WER results. Nevertheless, the fine-tuning of the ASR model using children's speech data reduces the differences in recognition performance across
|
| 295 |
+
|
| 296 |
+
age groups, resulting in improved overall perfor- 313 mance.
|
| 297 |
+
|
| 298 |
+
§ 4.2 WHISPER
|
| 299 |
+
|
| 300 |
+
The performance of the out-of-the-box Whisper models on the children's dataset (see Table 3) is comparable to the scores reported by Radford et al. (2022) on Estonian Common Voice 9 (Ardila et al., 2020). All models have a WER of at least 35. So, although we can use Whisper without fine-tuning, it does not transcribe Estonian speech well and therefore does not produce good transcriptions for Estonian children's speech either.

${}^{3}$ https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-to-text
Fine-tuning the model with only 10 hours of children's speech already yields better results, with the large-v2 model reaching a WER of around 20. This is significantly better than using a model fine-tuned only with Estonian adult speech, even though that approach worked much better for XLS-R. However, we do not have reasonable evidence for saying that XLS-R models are better, because the Whisper models may not be optimally trained.

Model Test Dev
Whisper-medium 46.11 43.21
Whisper-large-v2 36.01 35.06
Whisper-medium-children 25.08 24.29
Whisper-large-v2-children 20.38 20.58
Whisper-medium-et 28.78 26.83
Whisper-large-v2-et 29.20 28.13
Whisper-medium-et-children 18.66 17.49
Whisper-large-v2-et-children 16.02 15.73

Table 3: Comparison of WER scores for the original OpenAI Whisper models (Radford et al., 2022) and Whisper models fine-tuned with only Estonian children's speech, with only Estonian adult speech, and first fine-tuned on Estonian adult speech and then further trained with children's speech.
Although the Estonian Whisper models were fine-tuned with fewer audio-text pairs than the XLS-R model, the large Whisper model, when trained further with children's speech, achieved a WER similar to that of the doubly fine-tuned, smaller XLS-R model.
§ 4.3 AZURE
The results from our evaluation on the children's speech dataset show that the out-of-the-box Azure speech-to-text model performs similarly to or better than the fine-tuned Estonian XLS-R model (Alumäe and Olev, 2022). As indicated in Table 4, the Microsoft Azure speech-to-text scores are around 20 or below.

Model Test Dev
Microsoft Azure 18.93 20.18
Azure text-tuned 20.31 21.21

Table 4: WER scores for Microsoft Azure speech-to-text and its custom text-tuned version.
However, the experiment also shows that text-tuning is not the best approach for this particular dataset. The dataset mostly contains simpler vocabulary and not much terminology, most likely leading to quick overfitting with text-tuning. Currently, text-tuning is the only customization option available for the Estonian language, but it might not be well suited to children's speech datasets.
§ 5 DISCUSSION
Our experiments show that children's speech recognition continues to be a tricky problem, but big speech models are looking promising. It is possible to build an ASR system for Estonian children's speech without any bells and whistles using only 10 hours of data and get output that is decent and might be good enough for use in chatbots. However, when it comes to six-year-olds, whose speech is difficult to understand even for the human ear, the system still struggles.
We evaluate different models, and it appears that both OpenAI's Whisper and Facebook's XLS-R are viable options for developing a speech recognition model for Estonian children's speech. The current best word error rate is around 15 with XLS-R. However, it remains unclear if this pre-trained model is optimal for children's speech or if a lower error rate could be achieved with Whisper after fine-tuning with a similar amount of Estonian adult speech. Additionally, we do not obtain comparable results with the Azure service, as it does not permit fine-tuning with audio data.
Our findings suggest that the results could be improved by using a larger XLS-R model as the base or by fine-tuning Whisper models with more data. Additionally, we do not use a separate language model, which is possible with both Whisper and XLS-R models and could potentially enhance the performance of these models.
§ 6 CONCLUSION
We test the performance of two speech recognition models, XLS-R and Whisper, on transcribing Estonian children's speech. We fine-tune the models with children's speech data and compare them to an off-the-shelf system from Microsoft Azure. Both models fine-tuned with children's speech outperform Microsoft Azure, which does not allow fine-tuning with audio for Estonian, and are promising for a children's ASR system.
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/-eCgVcWbnzE/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,79 @@
# AI BASED AUTOMATIC MARK ENTRY SYSTEM

R.Subasri*, R.Meenakumari*

*Professor, Kongu Engineering College, Perundurai, India

## Abstract

An automatic mark entry system is a computer-based system that automatically captures marks or grades from various sources and stores them in a database. The system is designed to automate the process of entering marks or grades for students, eliminating the need for manual entry and reducing the chances of errors. OCR-based systems use image processing techniques to automatically recognize and extract marks or grades from scanned documents, such as exam answer sheets or report cards. By automating the process of entering marks or grades, teachers and administrators can focus on other important tasks, such as teaching and providing feedback to students. A webcam is used to capture the marks in the answer sheets of all the students, and the data is transferred into an Excel sheet automatically. Automatic mark entry systems not only save time and reduce errors but also provide real-time access to the data, allowing teachers and administrators to quickly analyze and evaluate the performance of students.

## 1 Introduction
Traditionally, marks or grades are entered manually by teachers or administrators, which can be a time-consuming process and may lead to errors. OCR-based systems use image processing techniques to automatically recognize and extract marks or grades from scanned documents, such as exam answer sheets or report cards. A webcam is used to capture the answer sheets of all the students who took the examination. The numbers are detected, and the data is transferred into an Excel sheet automatically. This OCR technology is commonly used in number plate recognition systems. An accurate vehicle detection system for traffic control is recommended by many researchers. A system which recognizes a vehicle's number plate from video using video processing and OCR technology was proposed [1][3] for storing the detected number plates of vehicles in a database. Further, to overcome the drawback of inaccuracy in recognizing the number plates of high-speed vehicles, an automatic vehicle recognition and identification system using EasyOCR is recommended [2]. The effectiveness of EasyOCR has also been validated in comparison with Tesseract OCR for automated license plate recognition using a deep learning algorithm [4].
In this paper, EasyOCR is applied to recognize the handwritten marks on the front page of answer sheets, both for individual questions and for the total, and an Excel data sheet is created automatically. The front page of the answer sheet is printed with details such as the name of the institution, the name of the student, the course name, etc., along with a tabular column for entering the marks of each question. The image of the tabular column filled with marks is scanned and given as input to the EasyOCR algorithm for automatic creation of the database. This system ensures ${100}\%$ accuracy in the mark entry process for database creation to publish results in every educational institution.
The automatic mark entry system, as shown in Fig 1, is built around a key algorithm, namely EasyOCR. EasyOCR is used for number recognition, a webcam is used for scanning the exam paper, the detected image is displayed, and the output is automatically converted into an Excel sheet for data storage.



Fig 1 Block Diagram for the AI based automatic mark entry system
In order to recognize numbers using EasyOCR, the library uses a combination of machine learning and image processing techniques. The library is pre-trained on a large dataset of images containing various types of text, including numbers. During training, the library learns to identify the patterns and features that are characteristic of different types of text and uses this knowledge to recognize text in new images. When recognizing numbers, EasyOCR first identifies regions of an image that contain text using image processing techniques. Once the text regions have been identified, EasyOCR applies its machine learning models to recognize the individual characters within them. EasyOCR is designed to recognize numbers in a wide range of formats, including handwritten numbers, numbers with unusual fonts or styles, and numbers that appear against complex backgrounds.
Detecting numbers in a webcam image involves using image processing techniques to identify and extract numerical characters from the image. The process begins with image acquisition, where the image is captured using the webcam: the webcam captures the live video stream and sends it to the computer. Second, in image preprocessing, the captured image is preprocessed to improve its quality and prepare it for analysis. This may involve operations like resizing, cropping, color correction, and noise reduction. Next, image segmentation divides the image into regions of interest (ROIs) where numbers are likely to be located. This may involve identifying features such as edges or corners that indicate the presence of a number. Once the ROIs have been identified, the next step is to recognize the individual characters within them. This can be done using techniques like template matching, feature extraction, or machine learning algorithms. Finally, the recognized numbers can be output to a display or another application.
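The segmentation step described above can be sketched as follows. This is illustrative code, not the authors' implementation: it binarizes a toy grayscale image (a list of rows of 0-255 values, with dark pixels as foreground) and returns bounding boxes of connected dark regions as candidate ROIs.

```python
def find_rois(image, threshold=128):
    """Binarize a grayscale image (list of rows of 0-255 ints) and return
    bounding boxes (top, left, bottom, right) of dark connected regions."""
    h, w = len(image), len(image[0])
    mask = [[pix < threshold for pix in row] for row in image]  # dark = foreground
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one connected component, tracking its extent.
                stack = [(y, x)]
                seen[y][x] = True
                top, left, bottom, right = y, x, y, x
                while stack:
                    cy, cx = stack.pop()
                    top, bottom = min(top, cy), max(bottom, cy)
                    left, right = min(left, cx), max(right, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes
```

Each returned box would then be cropped from the frame and passed to the character recognizer.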
The numbers identified using the EasyOCR library can be linked into an Excel spreadsheet using a variety of programming languages and libraries. Here, the popular option of the Pandas library in Python is used. After importing the necessary libraries in the Python script, the image is loaded using OpenCV. The numbers alone are extracted from the image using EasyOCR, and the result is returned as a list of dictionaries, where each dictionary contains information about the recognized characters. The extracted numbers are stored in a Pandas DataFrame, and using its Excel export method, the data is written to the Excel sheet.
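A minimal sketch of this export step (hypothetical code, not the authors' implementation): the sample `results` list mimics EasyOCR's dictionary output format with `text` and `boxes` keys (an assumption here), and the spreadsheet write uses the standard `csv` module as a dependency-free stand-in for pandas' `DataFrame.to_excel`.

```python
import csv
import io

def results_to_sheet(results, question_count):
    """Map OCR result dicts (with 'text' and 'boxes' keys) to one row of
    per-question marks plus a total, ordering detections left to right."""
    # Keep numeric detections only, sorted by the x-coordinate of their box.
    numeric = [r for r in results if r["text"].isdigit()]
    numeric.sort(key=lambda r: min(p[0] for p in r["boxes"]))
    marks = [int(r["text"]) for r in numeric[:question_count]]
    return marks + [sum(marks)]

def write_sheet(rows, header):
    """Write rows to CSV text (a simple stand-in for an Excel export)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical detections for one mark sheet: "8" for Q1, "12" for Q2.
results = [
    {"text": "8", "boxes": [[10, 0], [20, 0], [20, 9], [10, 9]]},
    {"text": "12", "boxes": [[40, 0], [60, 0], [60, 9], [40, 9]]},
]
row = results_to_sheet(results, 2)  # [8, 12, 20]
sheet = write_sheet([row], ["Q1", "Q2", "Total"])
```

In the actual system the `results` list would come from the OCR call on the captured frame, and each student's row would be appended to the same sheet.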
## 2 Performance Evaluation and Testing Results

After installation of the necessary files and libraries, as a first step the user is asked to enter the course code and name and to give the number of students, as in Fig 2. After completing this task, the mark sheet is placed for image capturing, as in Fig 3.



Fig 2. User Interface



Fig 3 Input image of sample mark sheet

After capturing the image using the webcam, EasyOCR detects and displays the numbers, and finally the displayed outputs are stored automatically in the Excel sheet, as in Fig 4.



Fig 4 Excel sheet with marks

From the Excel sheets, it is evident that the accuracy in transferring the marks entered in the grade sheets to Excel is ${100}\%$. By automating the grading process, educators no longer need to spend a significant amount of time and effort manually grading exams, which can significantly reduce the workload and manpower required for grading.
## ACKNOWLEDGEMENT

This work was completed by utilizing the resources of the Centre of Excellence on IIoT laboratory, in collaboration with ALAI labs Pve Ltd, Singapore, in the Department of Electronics and Instrumentation Engineering of Kongu Engineering College, Erode, Tamil Nadu, India. The authors would like to thank the technical team of ALAI labs Pve Ltd for their incessant support and guidance in the completion of this task.
## References

[1] Vishwanath Burkpalli, Abhishek Joshi, Abhishek B Warad, Akash Patil. Automatic number plate recognition using TensorFlow and EasyOCR, International Research Journal of Modernization in Engineering Technology and Science, 04(09), 493-501, September 2022.

[2] Amit Kochale, Ashutosh Khemariya, Aditi Tiwari. Real Time Automatic Vehicle (License) Recognition Identification System with the Help of OpenCV & EasyOCR Model, International Journal of Research, Science, Technology & Management, 24(3), 2455-2240, September 2021.

[3] S. Ranjan et al. OCR based Automated Number Plate Text Detection and Extraction, 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 2022, pp. 621-627, doi: 10.23919/INDIACom54597.2022.9763248.

[4] D. R. Vedhaviyassh, R. Sudhan, G. Saranya, M. Safa and D. Arun. Comparative Analysis of EasyOCR and TesseractOCR for Automatic License Plate Recognition using Deep Learning Algorithm, 2022 6th International Conference on Electronics, Communication and Aerospace Technology, Coimbatore, India, 2022, pp. 966-971, doi: 10.1109/ICECA55336.2022.10009215.

[5] VenkataNagaSai Rakesh Kamisetty et al. Digitization of Data from Invoice using OCR, 6th International Conference on Computing Methodologies and Communication (ICCMC), IEEE, 2022, 1-10.

[6] Azka Gilani et al. Table detection using deep learning, 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), IEEE, 2017, 771-776.

[7] Adam Jatowt et al. Deep statistical analysis of OCR errors for effective post-OCR processing, ACM/IEEE Joint Conference on Digital Libraries (JCDL), IEEE, 2019, 29-38.

[8] D. Yadav, S. Sánchez-Cuadrado, and J. Morato. Optical character recognition for Hindi language using a neural-network approach, Journal of Information Processing Systems, 9(1), 117-140, 2013.

[9] I. K. Pathan, A. A. Ali, R. J. Ramteke. Recognition of offline handwritten isolated Urdu characters, Advances in Computational Research, 4(1), 117-121, 2012.

[10] S. Mori, H. Nishida, and H. Yamada. Optical Character Recognition, Wiley Series in Microwave and Optical Engineering, USA, 1999. ISBN 047308196.

[11] J. Ravagli, Z. Ziran, and S. Marinai. Text recognition and classification in floor plan images, International Conference on Document Analysis and Recognition Workshops (ICDARW), 1-6, Sep. 2019.

[12] Liang, J., Doermann, D. & Li, H. Camera-based analysis of text and documents: a survey, IJDAR 7, 84-104, 2005.

[13] Lingqian Yang, Daji Ergu, Ying Cai, Fangyao Liu, Bo Ma. A review of natural scene text detection methods, The 8th International Conference on Information Technology and Quantitative Management (ITQM 2020 & 2021), Procedia Computer Science 199, 1458-1465, 2022.
|
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/-eCgVcWbnzE/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,51 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
§ AI BASED AUTOMATIC MARK ENTRY SYSTEM

R.Subasri*, R.Meenakumari*

*Professor, Kongu Engineering College, Perundurai, India

§ ABSTRACT

An automatic mark entry system is a computer-based system that automatically captures marks or grades from various sources and stores them in a database. The system is designed to automate the process of entering marks or grades for students, eliminating the need for manual entry and reducing the chances of errors. OCR-based systems use image processing techniques to automatically recognize and extract marks or grades from scanned documents, such as exam answer sheets or report cards. By automating the process of entering marks or grades, teachers and administrators can focus on other important tasks, such as teaching and providing feedback to students. A webcam is used to capture the marks in the answer sheets of all the students, and the data is transferred into an Excel sheet automatically. Automatic mark entry systems not only save time and reduce errors but also provide real-time access to the data, allowing teachers and administrators to quickly analyze and evaluate the performance of students.

§ 1 INTRODUCTION

Traditionally, marks or grades are entered manually by teachers or administrators, which can be a time-consuming process and may lead to errors. OCR-based systems use image processing techniques to automatically recognize and extract marks or grades from scanned documents, such as exam answer sheets or report cards. A webcam is used to capture the answer sheets of all the students who took the examination. The numbers are detected, and the data is transferred into an Excel sheet automatically. This OCR technology is commonly used in number plate recognition systems. An accurate vehicle detection system for traffic control is recommended by many researchers. A system which recognizes a vehicle's number plate from video using video processing and OCR technology was proposed [1][3] for storing the detected number plates of vehicles in a database. Further, to overcome the drawback of inaccuracy in recognizing the number plates of high-speed vehicles, an automatic vehicle recognition and identification system using EasyOCR is recommended [2]. The effectiveness of EasyOCR has also been validated in comparison with Tesseract OCR for automated license plate recognition using a deep learning algorithm [4].

In this paper, EasyOCR is applied to recognize the handwritten marks on the front page of answer sheets, both for individual questions and for the total, and an Excel data sheet is created automatically. The front page of the answer sheet is printed with details such as the name of the institution, the name of the student, the course name, etc., along with a tabular column for entering the marks of each question. The image of the tabular column filled with marks is scanned and given as input to the EasyOCR algorithm for automatic creation of the database. This system ensures ${100}\%$ accuracy in the mark entry process for database creation to publish results in every educational institution.

The automatic mark entry system, as shown in Fig 1, is built around a key algorithm, namely EasyOCR. EasyOCR is used for number recognition, a webcam is used for scanning the exam paper, the detected image is displayed, and the output is automatically converted into an Excel sheet for data storage.

Fig 1 Block Diagram for the AI based automatic mark entry system

In order to recognize numbers using EasyOCR, the library uses a combination of machine learning and image processing techniques. The library is pre-trained on a large dataset of images containing various types of text, including numbers. During training, the library learns to identify the patterns and features that are characteristic of different types of text and uses this knowledge to recognize text in new images. When recognizing numbers, EasyOCR first identifies regions of an image that contain text using image processing techniques. Once the text regions have been identified, EasyOCR applies its machine learning models to recognize the individual characters within them. EasyOCR is designed to recognize numbers in a wide range of formats, including handwritten numbers, numbers with unusual fonts or styles, and numbers that appear against complex backgrounds.

Detecting numbers in a webcam image involves using image processing techniques to identify and extract numerical characters from the image. The process begins with image acquisition, where the image is captured using the webcam: the webcam captures the live video stream and sends it to the computer. Second, in image preprocessing, the captured image is preprocessed to improve its quality and prepare it for analysis. This may involve operations like resizing, cropping, color correction, and noise reduction. Next, image segmentation divides the image into regions of interest (ROIs) where numbers are likely to be located. This may involve identifying features such as edges or corners that indicate the presence of a number. Once the ROIs have been identified, the next step is to recognize the individual characters within them. This can be done using techniques like template matching, feature extraction, or machine learning algorithms. Finally, the recognized numbers can be output to a display or another application.

The numbers identified using the EasyOCR library can be linked into an Excel spreadsheet using a variety of programming languages and libraries. Here, the popular option of the Pandas library in Python is used. After importing the necessary libraries in the Python script, the image is loaded using OpenCV. The numbers alone are extracted from the image using EasyOCR, and the result is returned as a list of dictionaries, where each dictionary contains information about the recognized characters. The extracted numbers are stored in a Pandas DataFrame, and using its Excel export method, the data is written to the Excel sheet.
§ 2 PERFORMANCE EVALUATION AND TESTING RESULTS

After installation of the necessary files and libraries, as a first step the user is asked to enter the course code and name and to give the number of students, as in Fig 2. After completing this task, the mark sheet is placed for image capturing, as in Fig 3.

Fig 2. User Interface

Fig 3 Input image of sample mark sheet

After capturing the image using the webcam, EasyOCR detects and displays the numbers, and finally the displayed outputs are stored automatically in the Excel sheet, as in Fig 4.

Fig 4 Excel sheet with marks

From the Excel sheets, it is evident that the accuracy in transferring the marks entered in the grade sheets to Excel is ${100}\%$. By automating the grading process, educators no longer need to spend a significant amount of time and effort manually grading exams, which can significantly reduce the workload and manpower required for grading.

§ ACKNOWLEDGEMENT

This work was completed by utilizing the resources of the Centre of Excellence on IIoT laboratory, in collaboration with ALAI labs Pve Ltd, Singapore, in the Department of Electronics and Instrumentation Engineering of Kongu Engineering College, Erode, Tamil Nadu, India. The authors would like to thank the technical team of ALAI labs Pve Ltd for their incessant support and guidance in the completion of this task.
|
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,197 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals

Puneet Kumar and Xiaobai Li*

CMVS, University of Oulu, Finland.

\{puneet.kumar, xiaobai.li\}@oulu.fi

## Abstract

This paper aims to demonstrate the importance and feasibility of fusing multimodal information for emotion recognition. It introduces a multimodal framework for emotion understanding by fusing the information from visual facial features and rPPG signals extracted from the input videos. An interpretability technique based on permutation feature importance analysis has also been implemented to compute the contributions of rPPG and visual modalities toward classifying a given input video into a particular emotion class. The experiments on IEMOCAP dataset demonstrate that the emotion classification performance improves by combining the complementary information from multiple modalities.

Keywords: Affective Computing, Interpretable & Deployable AI, Multimodal Analysis, rPPG, Facial Features.
## 1 Introduction

Emotions, characterized by a rich and complex mix of physiological and cognitive states, hold significant importance across multiple fields such as psychology, human-computer interaction, affective computing, and even extending to broader domains such as virtual reality, user experience design, healthcare, and education [1]. Understanding and accurately interpreting emotions is essential in human communication and social interactions [2]. With the surge in the development and accessibility of multimodal sensing technologies, researchers can explore multiple modalities to enhance the accuracy and robustness of emotion recognition systems [3]. The current research trend focuses on building Artificial Intelligence (AI) systems that can be deployed for real-life applications [4].

Two such modalities, facial expressions and physiological signals, have garnered significant attention due to the rich information they offer and their non-invasive nature [5]. Facial expressions, direct and non-invasive indicators of emotion, have been thoroughly investigated [6]. Various techniques involving the extraction of facial landmarks, local descriptors, or holistic representations have been proposed to capture nuanced variations in facial muscle movements that reflect different emotional states [7]. Physiological signals, such as remote photoplethysmography (rPPG) signals, provide another layer of emotional cues. These signals, obtained through non-contact video-based techniques, offer insights into physiological changes associated with emotional responses [5]. The interplay of these two modalities offers a more holistic understanding of emotions, thus enhancing the robustness of emotion recognition systems [8].

Emotion classification through audio-visual information is a well-established research task [9, 10, 11]. However, recognizing emotion using the physiological context along with the audio-visual information leaves scope for further exploration [5]. Furthermore, despite the significant advancements, many multimodal emotion recognition models do not provide meaningful interpretations for their predictions [12, 13]. Most existing interpretability techniques have been implemented for the visual modality and have yet to be fully explored for multimodal analysis [14, 15, 6].

This paper proposes an interpretable multimodal emotion recognition framework that extracts rPPG signals and facial features from the input videos and uses their combined context for emotion detection. The Haar cascades classifier [16] has been implemented to extract the rPPG signals, whereas a pre-trained ResNet-34-based network extracts the visual features. Further, early and late fusion approaches that integrate the static facial expression features and dynamic rPPG signals to capture both spatial and temporal aspects of emotions have been incorporated.

An interpretability technique based on permutation feature importance (PFI) [17] has also been incorporated that computes the contribution of the rPPG and visual modalities towards classifying a given input video into a particular emotion class. The experiments performed on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset [18] have resulted in an accuracy of 54.61% while classifying the input videos into ten emotion classes ('neutral', 'happy', 'sad', 'angry', 'excited', 'frustrated', 'fearful', 'surprised', 'distressed' and 'other'). The increased performance when using the multimodal context, compared to the individual accuracies when using the rPPG or visual modality alone, advocates the importance of leveraging the multimodal context for emotion understanding. The average contributions of the rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively.
---

*Corresponding Author: xiaobai.li@oulu.fi

---
The contributions of this paper can be summarized as follows:

- A multimodal emotion recognition framework has been proposed to classify a given video into discrete emotion classes. It extracts the dynamic rPPG signals from the input videos and combines them with static facial expressions using early and late fusion approaches.

- An interpretability technique has been incorporated that computes the contribution of rPPG and visual modalities towards emotion classification using the PFI algorithm.

- Extensive experiments have been performed on the IEMOCAP dataset, and the results have been presented in terms of accuracy, precision, recall, F1 score, and modality-wise contributions toward emotion classification.
## 2 Proposed Method

The proposed framework has been diagrammatically depicted in Figure 1 and described in the following sections.

Figure 1: Schematic illustration of the proposed framework.
### 2.1 Preprocessing and Feature Extraction

The video files are loaded frame by frame using the OpenCV (cv2) library${}^{1}$ and processed to extract rPPG signals and facial features.
i) rPPG Signals Extraction: Face detection within each video frame during the rPPG signal extraction process is accomplished using Haar cascades [16]. The region of interest (ROI), predominantly the facial region, is isolated from each frame, after which the mean intensity is computed to generate the rPPG signal for each video. The calculation of the mean intensity within the ROI $(\bar{I}_c)$ is represented in Eq. 1.

$$
\bar{I}_c = \frac{1}{N}\sum_{x=1}^{W}\sum_{y=1}^{H} I_{x,y,c} \tag{1}
$$

Where $I_{x,y,c}$ is the intensity of the pixel at location $(x, y)$ for color channel $c$ in the ROI, $N$ is the total number of pixels in the ROI, $W$ and $H$ represent the width and height of the ROI, respectively, and $c \in \{R, G, B\}$.
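As a concrete illustration, Eq. 1 amounts to a per-channel average over the detected face box in each frame. The sketch below shows one way to compute it with OpenCV's Haar cascade; the helper names and cascade parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mean_roi_intensity(roi):
    """Eq. 1: per-channel mean intensity over the W x H face ROI."""
    return roi.reshape(-1, roi.shape[2]).mean(axis=0)

def extract_rppg_signal(video_path):
    """One mean-intensity triple per frame; stacking them over time gives
    the raw rPPG signal for the video."""
    import cv2  # OpenCV used for video decoding and Haar-cascade face detection
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = cascade.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.1, 5)
        if len(faces):
            x, y, w, h = faces[0]  # ROI = first detected face
            signal.append(mean_roi_intensity(
                frame[y:y + h, x:x + w].astype(float)))
    cap.release()
    return np.array(signal)
```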
ii) Facial Features Extraction: Facial feature extraction employs Dlib's shape predictor [19], a version of ResNet-34 trained on the FaceScrub dataset [20]. As per Eq. 2, it identifies 68 facial landmarks for each detected face within every frame, distinguishing unique facial characteristics.

$$
P = D\left(F, \{L_i\}\right) \tag{2}
$$

$$
F = \left[f_1, f_2, \ldots, f_n\right]
$$
---

${}^{1}$ https://opencv.org/

---
Where $F$ represents the face detected in a frame, $P$ represents the predicted points on the face, $D\left(F, \{L_i\}\right)$ is the function for predicting points on the face, and $L_i$ is the $i^{th}$ landmark point. As signals from different videos might differ in length, it becomes crucial to standardize the input for the neural network model. This standardization is achieved by zero-padding $\bar{I}_c$ and $P$ to match the maximum signal length.
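The zero-padding step described above can be sketched as follows (`zero_pad` is a hypothetical helper, not taken from the paper):

```python
import numpy as np

def zero_pad(sequences):
    """Right-pad variable-length per-video feature sequences with zeros so
    every video yields the same (max_len, feat_dim) input shape."""
    max_len = max(len(s) for s in sequences)
    feat_dim = sequences[0].shape[1]
    padded = np.zeros((len(sequences), max_len, feat_dim))
    for i, seq in enumerate(sequences):
        padded[i, :len(seq)] = seq  # original frames first, zeros after
    return padded
```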
### 2.2 Multimodal Feature Fusion

Early fusion and late fusion approaches are used to combine the rPPG signals and facial features.
i) Early Fusion: In the early fusion approach, the rPPG signals and facial features are concatenated before being fed into the model. The fused data are then passed through a neural network comprising a flatten layer, followed by CNN layers of dimensions 512 and 256, and a final layer of size equal to the number of classes. The flatten layer transforms the 3D input tensor into a 1D tensor, and the subsequent layers perform the classification task. The model structure is represented in Eq. 3.

$$
I' = \operatorname{concatenate}\left(\bar{I}_c, P\right)
$$

$$
I'' = \operatorname{flatten}\left(I'\right) \tag{3}
$$

$$
F_{\text{early}} = \operatorname{NNet}\left(I'', C\right)
$$

Where $I'$ is the concatenated input, $C$ denotes the number of classes, $\bar{I}_c$ is the mean intensity within the ROI from the rPPG signals, $P$ represents the facial features, $\operatorname{NNet}$ represents the early fusion network, and $F_{\text{early}}$ is the output of the early fusion.
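A forward pass matching Eq. 3 might look as follows. This is an illustrative numpy sketch: the paper's 512- and 256-unit layers are assumed here to act as fully connected layers, and the parameter shapes are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def early_fusion_forward(rppg_feats, facial_feats, params):
    """Eq. 3: concatenate the modalities, flatten, then classify.
    params = (W1, b1, W2, b2, W3, b3) for the 512/256/num_classes layers."""
    # Early fusion: flatten each modality per sample and concatenate
    x = np.concatenate([rppg_feats.reshape(len(rppg_feats), -1),
                        facial_feats.reshape(len(facial_feats), -1)], axis=1)
    W1, b1, W2, b2, W3, b3 = params
    h1 = relu(x @ W1 + b1)   # hidden layer (512 units in the paper)
    h2 = relu(h1 @ W2 + b2)  # hidden layer (256 units in the paper)
    return h2 @ W3 + b3      # one logit per emotion class
```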
ii) Late Fusion: In the late fusion approach, the rPPG and visual models are trained separately, and their outputs are combined into the final output $F_{\text{late}}$ using a weighted average, as per Eq. 4.

$$
F_{\text{late}} = w_1 \cdot M_{\mathrm{rPPG}}\left(\bar{I}_c\right) + w_2 \cdot M_{\text{facial}}\left(P\right) \tag{4}
$$

Where $M_{\mathrm{rPPG}}\left(\bar{I}_c\right)$ and $M_{\text{facial}}\left(P\right)$ represent the outputs of the rPPG model and the visual model, respectively, and $w_1$ and $w_2$ are the weights assigned to each model's output in the final fusion.
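Given per-class scores from the two trained models, Eq. 4 reduces to a few lines. In this sketch equal weights are assumed purely for illustration; the paper does not state the values of $w_1$ and $w_2$.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def late_fusion_predict(rppg_logits, facial_logits, w1=0.5, w2=0.5):
    """Eq. 4: weighted combination of the two models' class probabilities."""
    fused = w1 * softmax(rppg_logits) + w2 * softmax(facial_logits)
    return fused.argmax(axis=1)  # predicted emotion class per sample
```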
### 2.3 Emotion Classification

This study employs three separate models for emotion classification. Two of these models operate independently, utilizing rPPG signals and facial features, respectively. The third model operates via 'early fusion,' exploiting the combined context of the rPPG and visual data. The outputs of the individual models are also integrated through a 'late fusion' approach that uses a weighted addition technique. The individual models, based on rPPG signals and facial features, are constructed as follows.

i) rPPG Model: This model utilizes a deep Convolutional Neural Network (CNN) with two hidden layers and Rectified Linear Unit (ReLU) activation functions to classify emotions from rPPG signals.

ii) Visual Model: This model, built on facial features, employs a ResNet-based deep CNN with two hidden layers and ReLU activation functions.
### 2.4 Interpretability

An explainability method based on permutation feature importance (PFI) [17] is implemented, which estimates the importance of a feature by permuting its values and measuring the resulting impact on model performance. The PFI of feature $j$ is the decrease in the model score when the values of feature $j$ are randomly permuted. Eq. 5 mathematically represents the concept of permutation feature importance.

$$
\operatorname{PFI}\left(j\right) = E\left[f\left(X^{(i)}\right)\right] - E_{\pi}\left[f\left(X_{\pi_j}^{(i)}\right)\right] \tag{5}
$$

Where $\operatorname{PFI}\left(j\right)$ is the permutation feature importance of feature $j$, $E\left[f\left(X^{(i)}\right)\right]$ is the expected value of the model score over all samples in the dataset when the model is scored normally, $E_{\pi}\left[f\left(X_{\pi_j}^{(i)}\right)\right]$ is the expected value of the model score when the values of feature $j$ are permuted according to some permutation $\pi$, and $X_{\pi_j}^{(i)}$ denotes the dataset $X^{(i)}$ with the values of feature $j$ permuted according to $\pi$.
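Applied at the modality level, the same idea yields per-modality contribution percentages. The sketch below is an illustrative assumption of how such numbers could be derived: `score_fn`, the repeat count, and the normalization of score drops to percentages are not the paper's stated procedure.

```python
import numpy as np

def modality_contributions(score_fn, rppg, facial, labels, n_repeats=10, seed=0):
    """Modality-level PFI (Eq. 5): permute one modality across samples,
    measure the mean score drop, and normalize the drops to percentages.
    `score_fn(rppg, facial, labels)` is an assumed callable returning the
    fused model's accuracy on the given inputs."""
    rng = np.random.default_rng(seed)
    base = score_fn(rppg, facial, labels)
    drops = []
    for modality in ("rPPG", "visual"):
        scores = []
        for _ in range(n_repeats):
            perm = rng.permutation(len(labels))
            if modality == "rPPG":
                scores.append(score_fn(rppg[perm], facial, labels))
            else:
                scores.append(score_fn(rppg, facial[perm], labels))
        drops.append(base - np.mean(scores))  # expected score decrease
    drops = np.maximum(drops, 0.0)
    total = drops.sum() if drops.sum() > 0 else 1.0
    return {"rPPG": 100.0 * drops[0] / total,
            "visual": 100.0 * drops[1] / total}
```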
## 3 Results and Discussion

### 3.1 Experimental Setup

The emotion classification experiments have been performed on the IEMOCAP dataset [18] consisting of 10,039 videos labeled with ten discrete emotion labels ('neutral', 'happy', 'sad', 'angry', 'excited', 'frustrated', 'fearful', 'surprised', 'distressed' and 'other'). The models have been trained on an NVIDIA RTX 4090 GPU for 50 epochs with a batch size of 32 and a learning rate of 0.001. The performance has been evaluated using accuracy, precision, recall, and F1 score metrics.
### 3.2 Results

Table 1 summarizes the performance of the individual and fusion models, whereas the average contributions of the rPPG and visual modalities towards emotion recognition in the early fusion setup are presented in Table 2. The proposed framework has demonstrated an emotion classification accuracy of 54.61%, and the average contributions of the rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively.

Table 1: Detailed performance of the individual and fusion models.

| Model | Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|
| rPPG | 37.45% | 0.37 | 0.38 | 0.38 |
| Facial Features | 46.42% | 0.49 | 0.49 | 0.49 |
| Late Fusion | 41.17% | 0.43 | 0.42 | 0.42 |
| Early Fusion | 54.61% | 0.56 | 0.58 | 0.57 |

Table 2: Average contribution of each modality towards emotion recognition.

| Modality | Contribution |
|---|---|
| rPPG | 37.67% |
| Visual | 62.33% |

Table 1 shows that both individual models performed reasonably well. However, the early fusion model outperformed the individual models, demonstrating the advantage of combining rPPG signals and facial feature information for emotion recognition.
### 3.3 Discussion

This paper presents a compelling case for including multimodal context in emotion recognition. While the models trained on individual modalities show moderate performance, their fusion significantly improves emotion recognition accuracy. This emphasizes the complementarity of the two modalities in capturing emotional states. However, the late fusion of modalities underperforms compared to the early fusion approach, indicating that integrating modalities at an earlier stage allows for more effective learning of emotional states.

This study has a few limitations. The IEMOCAP dataset, while widely used, may limit the generalizability of the findings; cross-dataset experiments on larger and more diverse datasets could further strengthen the results. Moreover, additional modalities such as audio, text, and other physiological signals could be incorporated for emotion recognition. Finally, a more in-depth interpretability mechanism could be developed to explain the role of individual features in emotion detection.
## 4 Conclusion

This work presents a multimodal emotion recognition framework using rPPG signals and facial features. It paves the way for practical applications where transparent and interpretable emotion understanding is important. The results highlight the benefits of integrating multiple modalities for emotion recognition, with an early fusion approach yielding the highest accuracy. While there are limitations and potential improvements, our study provides a promising direction for future research in emotion recognition, emphasizing the importance of multimodal data and fusion techniques.
## References

[1] Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. A Review of Affective Computing: From Unimodal Analysis to Multimodal Fusion. Information Fusion, 37:98-125, 2017.

[2] Yucel Cimtay, Erhan Ekmekcioglu, and Seyma Caglar-Ozhan. Cross Subject Multimodal Emotion Recognition Based on Hybrid Fusion. IEEE Access, 8:168865-168878, 2020.

[3] Tadas Baltrušaitis, Chaitanya Ahuja, and Louis-Philippe Morency. Multimodal Machine Learning: A Survey and Taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 41(2):423-443, 2018.

[4] Andrei Paleyes, Raoul-Gabriel Urma, and Neil D Lawrence. Challenges in Deploying Machine Learning: A Survey of Case Studies. ACM Computing Surveys, 55(6):1-29, 2022.

[5] Zitong Yu, Xiaobai Li, and Guoying Zhao. Facial Video-based Physiological Signal Measurement: Recent Advances and Affective Applications. IEEE Signal Processing Magazine, 38(6):50-58, 2021.

[6] Sarthak Malik, Puneet Kumar, and Balasubramanian Raman. Towards Interpretable Facial Emotion Recognition. In The 12th Indian Conference on Computer Vision, Graphics and Image Processing, pages 1-9, 2021.

[7] Nannan Wang, Xinbo Gao, Dacheng Tao, Heng Yang, and Xuelong Li. Facial Feature Point Detection: A Comprehensive Survey. Neurocomputing, 275:50-65, 2018.

[8] Zhihong Zeng, Maja Pantic, Glenn I Roisman, and Thomas S Huang. A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 31(1):39-58, 2009.

[9] Tianrong Rao, Xiaoxu Li, and Min Xu. Learning Multi-level Deep Representations for Image Emotion Classification. Neural Processing Letters, pages 1-19, 2019.

[10] M Xu, F Zhang, and S Khan. Improve Accuracy of Speech Emotion Recognition with Attention Head Fusion. In IEEE Annual Computing and Communication Workshop and Conference (CCWC), pages 1058-1064, 2020.

[11] Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. DialogueRNN: An Attentive RNN for Emotion Detection in Conversations. In The 33rd AAAI Conference on Artificial Intelligence (AAAI), volume 33, pages 6818-6825, 2019.

[12] W James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. Definitions, Methods, and Applications in Interpretable Machine Learning. Proceedings of the National Academy of Sciences, 116(44):22071-22080, 2019.

[13] Luca Longo et al. Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions. In The Springer International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), pages 1-16, 2020.

[14] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why Should I Trust You? Explaining the Predictions of Any Classifier. In International Conference on Knowledge Discovery & Data Mining (KDD), pages 1135-1144, 2016.

[15] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. In The IEEE/CVF International Conference on Computer Vision (ICCV), pages 618-626, 2017.

[16] Sander Soo. Object Detection using Haar Cascade Classifier. Institute of Computer Science, University of Tartu, 2(3):1-12, 2014.

[17] André Altmann, Laura Tolosi, Oliver Sander, and Thomas Lengauer. Permutation Importance: A Corrected Feature Importance Measure. Bioinformatics, 26(10):1340-1347, 2010.

[18] Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. IEMOCAP: Interactive Emotional Dyadic Motion Capture Database. Language Resources and Evaluation, 42(4), 2008.

[19] Davis E. King. DLIB Models. https://github.com/davisking/dlib-models, 2016. Accessed on 21.05.2023.

[20] Hong-Wei Ng and Stefan Winkler. A Data Driven Approach to Cleaning Large Face Datasets. In IEEE International Conference on Image Processing (ICIP), pages 343-347. IEEE, 2014.
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ INTERPRETABLE MULTIMODAL EMOTION RECOGNITION USING FACIAL FEATURES AND PHYSIOLOGICAL SIGNALS

Puneet Kumar and Xiaobai Li*

CMVS, University of Oulu, Finland.

{puneet.kumar, xiaobai.li}@oulu.fi

§ ABSTRACT

This paper aims to demonstrate the importance and feasibility of fusing multimodal information for emotion recognition. It introduces a multimodal framework for emotion understanding by fusing the information from visual facial features and rPPG signals extracted from the input videos. An interpretability technique based on permutation feature importance analysis has also been implemented to compute the contributions of the rPPG and visual modalities toward classifying a given input video into a particular emotion class. The experiments on the IEMOCAP dataset demonstrate that the emotion classification performance improves by combining the complementary information from multiple modalities.

Keywords: Affective Computing, Interpretable & Deployable AI, Multimodal Analysis, rPPG, Facial Features.

§ 1 INTRODUCTION

Emotions, characterized by a rich and complex mix of physiological and cognitive states, hold significant importance across multiple fields such as psychology, human-computer interaction, and affective computing, extending to broader domains such as virtual reality, user experience design, healthcare, and education [1]. Understanding and accurately interpreting emotions is essential in human communication and social interactions [2]. With the surge in the development and accessibility of multimodal sensing technologies, researchers can explore multiple modalities to enhance the accuracy and robustness of emotion recognition systems [3]. The current research trend focuses on building Artificial Intelligence (AI) systems that can be deployed for real-life applications [4].

Two such modalities, facial expressions and physiological signals, have garnered significant attention due to the rich information they offer and their non-invasive nature [5]. Facial expressions, direct and non-invasive indicators of emotion, have been thoroughly investigated [6]. Various techniques involving the extraction of facial landmarks, local descriptors, or holistic representations have been proposed to capture nuanced variations in facial muscle movements that reflect different emotional states [7]. Physiological signals, such as remote photoplethysmography (rPPG) signals, provide another layer of emotional cues. These signals, obtained through non-contact video-based techniques, offer insights into physiological changes associated with emotional responses [5]. The interplay of these two modalities offers a more holistic understanding of emotions, thus enhancing the robustness of emotion recognition systems [8].
Emotion classification through audio-visual information is a well-established research task $\left\lbrack {9,{10},{11}}\right\rbrack$ . However, recognizing emotion using the physiological context along with the audio-visual information score for further exploration [5]. Furthermore, despite the significant advancements, many multimodal emotion recognition models do not provide meaningful interpretations for their predictions $\left\lbrack {{12},{13}}\right\rbrack$ . Most existing interpretability techniques have been implemented for visual modality and have yet to be fully explored for multimodal analysis $\left\lbrack {{14},{15},6}\right\rbrack$ .
|
| 22 |
+
|
| 23 |
+
This paper proposes an interpretable multimodal emotion recognition framework that extracts rPPG signals and facial features from the input videos and uses their combined context for emotion detection. The Haar cascades classifier [16] has been implemented to extract the rPPG signals, whereas a pre-trained ResNet-34-based network extracts the visual features. Further, early and late fusion approaches that integrate the static facial expression features and dynamic rPPG signals to capture both spatial and temporal aspects of emotions have been incorporated.
|
| 24 |
+
|
| 25 |
+
An interpretability technique based on permutation feature importance (PFI) [17] has also been incorporated that computes the contribution of rPPG and visual modality towards classifying a given input video into a particular emotion class. The experiments performed on Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset [18] have resulted in an accuracy of ${54.61}\%$ while classifying the input videos into ten emotion classes ('neutral, 'happy, 'sad, 'angry, 'excited, ' 'frustrated, ' 'fearful, 'surprised, ' 'distressed' and 'other'). The increased performance on using the multimodal context than the individual accuracies on using rPPG or visual modality alone advocates the importance of leveraging the multimodal context for emotion understanding. The average contributions of rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively.
|
| 26 |
+
|
| 27 |
+
*Corresponding Author: xiaobai.li@oulu.fi
|
| 28 |
+
|
| 29 |
+
The contributions of this paper can be summarized as follows:
|
| 30 |
+
|
| 31 |
+
* A multimodal emotion recognition framework has been proposed to classify a given video into discrete emotion classes. It extracts the dynamic rPPG signals from the input videos and combines them with static facial expressions using early and late fusion approaches.
|
| 32 |
+
|
| 33 |
+
* An interpretability technique has been incorporated that computes the contribution of rPPG and visual modalities towards emotion classification using the PFI algorithm.
|
| 34 |
+
|
| 35 |
+
* Extensive experiments have been performed on the IEMOCAP dataset, and the results have been presented in terms of accuracy, precision, recall, F1 score, and modality-wise contributions toward emotion classification.
|
| 36 |
+
|
| 37 |
+
§ 2 PROPOSED METHOD
|
| 38 |
+
|
| 39 |
+
The proposed framework has been diagrammatically depicted in Figure 1 and described in the following sections.
|
| 40 |
+
|
| 41 |
+
< g r a p h i c s >
|
| 42 |
+
|
| 43 |
+
Figure 1: Schematic illustration of the proposed framework.
|
| 44 |
+
|
| 45 |
+
§ 2.1 PREPROCESSING AND FEATURE EXTRACTION
|
| 46 |
+
|
| 47 |
+
The video files are loaded and processed frame by frame using OpenCV (cv2) library ${}^{1}$ and processed to extract rPPG signals and facial features.
|
| 48 |
+
|
| 49 |
+
i) rPPG Signals Extraction: Face detection within each video frame during the rPPG signal extraction process is accomplished using Haar cascades [16]. The region of interest (ROI), predominantly the facial region, is isolated from each frame, after which the mean intensity is computed to generate the rPPG signal for each video. The calculation of the mean intensity within the ROI $\left( {\bar{I}c}\right)$ is represented in Eq. 1.
|
| 50 |
+
|
| 51 |
+
$$
|
| 52 |
+
\bar{I}c = \frac{1}{N}\sum {x = 1}^{W}\mathop{\sum }\limits_{{y = 1}}^{H}{I}_{x,y,c} \tag{1}
|
| 53 |
+
$$
|
| 54 |
+
|
| 55 |
+
Where ${I}_{x,y,c}$ is the intensity of the pixel at location(x, y)for color channel $c$ in the ROI, and $N$ is the total number of pixels in the ROI, whereas $W$ and $H$ represent the width and height of the ROI, respectively, and $c \in R,G,B$ .
|
| 56 |
+
|
| 57 |
+
ii) Facial Features Extraction: Facial feature extraction employs Dlib's shape predictor [19], which is a version of the ResNet-34 trained on Face Scrub dataset[20]. As per Eq. 2, it identifies 68 facial landmarks for each detected face within every frame, distinguishing unique facial characteristics.
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
P = D\left( {F,\left\{ {L}_{i}\right\} }\right) \tag{2}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
F = \left\lbrack {{f}_{1},{f}_{2},\ldots ,{f}_{n}}\right\rbrack
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
${}^{1}$ https://opencv.org/
|
| 68 |
+
|
| 69 |
+
Where $F$ represents the face detected in a frame, $P$ represents the predicted points on the face, $D\left( {F,\left\{ {L}_{i}\right\} }\right)$ is the function for predicting points on the face, and ${L}_{ - }i$ is the set of landmark points for the ${i}^{th}$ point. As signals from different videos might differ in length, it becomes crucial to standardize the input for the neural network model. This standardization is achieved by zero-padding $\bar{I}$ and $P$ to match the maximum signal length.
|
| 70 |
+
|
| 71 |
+
§ 2.2 MULTIMODAL FEATURE FUSION
|
| 72 |
+
|
| 73 |
+
Early fusion and late fusion approaches are used to combine the rPPG signals and facial features.
|
| 74 |
+
|
| 75 |
+
i) Early Fusion: In the early fusion approach, the rPPG signals and facial features are concatenated before being fed into the model. The fused data are then passed through a neural network comprising a flatten layer, followed by CNN layers of dimensions 512 and 256, and the final layer of size equal to the number of classes. The flatten layer transforms the 3D input tensor into a 1D tensor, and the subsequent CNN layers functions perform the classification task. The model structure is represented as per Eq. 3.
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
{I}^{\prime } = \text{ concatenate }\left( {\bar{I}c,P}\right)
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
{I}^{\prime \prime } = \operatorname{flatten}\left( {I}^{\prime }\right) \tag{3}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
$$
|
| 86 |
+
{F}_{\text{ early }} = \operatorname{NNet}\left( {{I}^{\prime \prime },C}\right)
|
| 87 |
+
$$
|
| 88 |
+
|
| 89 |
+
Where $I$ is the input shape, $C$ denotes the number of classes, $\bar{I}c$ is the mean intensity within the ROI from the rPPG signals, $P$ represents the facial features, ${NNet}$ represents the early fusion network and ${F}_{\text{ early }}$ is the output of the early fusion.
|
| 90 |
+
|
| 91 |
+
ii) Late Fusion: In the late fusion approach, the rPPG and visual models are trained separately, and their outputs are combined using a weighted average. Eq. 4 represents a late fusion approach where the models are trained separately, and their outputs are combined in the final output ${F}_{\text{ late }}$ .
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
{F}_{\text{ late }} = {w}_{1} \cdot {M}_{\mathrm{{rPPG}}}\left( {\bar{I}c}\right) + {w}_{2} \cdot {M}_{\text{ facial }}\left( P\right) \tag{4}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
Where ${M}_{\mathrm{{rPPG}}}\left( {\bar{I}c}\right)$ and ${M}_{\text{ facial }}\left( P\right)$ represent the outputs of the rPPG model and the visual model, respectively, and ${w}_{1}$ and ${w}_{2}$ are the weights assigned to each model’s output in the final fusion.
|
| 98 |
+
|
| 99 |
+
§ 2.3 EMOTION CLASSIFICATION
|
| 100 |
+
|
| 101 |
+
This study employs three separate models for emotion classification. Two of these models operate independently, utilizing rPPG signals and facial features. The third model operates via 'early fusion,' exploiting the combined context of data from the rPPG and visual models. The outputs of these individual models are then collaboratively integrated through a 'late fusion' approach that uses a weighted addition technique. The individual models, based on rPPG signals and facial features, are constructed as follows.
|
| 102 |
+
|
| 103 |
+
i) rPPG Model: This model utilizes a Deep Convolutional Neural Network (CNN) with two hidden layers. It incorporates Rectified Linear Unit (ReLU) activation functions for emotion classification derived from rPPG signals.
|
| 104 |
+
|
| 105 |
+
ii) Visual Model: This model, built on facial features, employs a ResNet-based Deep CNN with two hidden layers and ReLU activation functions.
|
| 106 |
+
|
| 107 |
+
§ 2.4 INTERPRETABILITY
|
| 108 |
+
|
| 109 |
+
An explainability method based on permutation feature importance (PFI) [17] is implemented, which is used to estimate the importance of features by permuting the values of each feature and measuring the resulting impact on model performance. The PFI of feature $j$ is the decrease in the model score when values of feature $j$ are randomly permuted. PFI for a feature $j$ is the difference in the model score when the values of feature $j$ are randomly permuted. Eq. 5 mathematically represents the concept of permutation feature importance.
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
{PFI}\left( j\right) = {E}_{\pi }\left\lbrack {f\left( {X}^{\left( i\right) }\right) }\right\rbrack - {E}_{\pi }\left\lbrack {f\left( {X}_{{\pi }_{j}}^{\left( i\right) }\right) }\right\rbrack \tag{5}
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
Where $\operatorname{PFI}\left( j\right)$ is the permutation feature importance of feature $j,{E}_{\pi }\left\lbrack {f\left( {X}^{\left( i\right) }\right) }\right\rbrack$ is the expected value of the model score over all samples in the dataset when the model is scored normally, ${E}_{\pi }\left\lbrack {f\left( {X}_{{\pi }_{j}}^{\left( i\right) }\right) }\right\rbrack$ is the expected value of the model score when the values of feature $j$ are permuted according to some permutation $\pi$ , and ${X}_{{\pi }_{j}}^{\left( i\right) }$ denotes the dataset ${X}^{\left( i\right) }$ with the values of feature $j$ permuted according to $\pi$ .
|
| 116 |
+
|
| 117 |
+
§ 3 RESULTS AND DISCUSSION
|
| 118 |
+
|
| 119 |
+
§ 3.1 EXPERIMENTAL SETUP
|
| 120 |
+
|
| 121 |
+
The emotion classification experiments have been performed on the IEMOCAP dataset [18] consisting of 10,039 videos labeled with ten discrete emotion labels ('neutral," happy, 'sad," angry, 'excited, 'frustrated,' 'fearful,' 'surprised,' 'distressed' and 'other'). The model training has been trained on NVIDIA RTX 4090 GPU for 50 epochs with a batch size of 32 and a learning rate of 0.001 . The performance has been evaluated using accuracy, precision, recall, and F1 score metrics.
|
| 122 |
+
|
| 123 |
+
§ 3.2 RESULTS
|
| 124 |
+
|
| 125 |
+
Table 1 summarizes the accuracy of the individual and fusion models, whereas the average contributions of rPPG and visual modalities towards emotion recognition in the early fusion setup are presented in Table 2. The proposed framework has demonstrated an emotion classification accuracy of 54.61%, and the average contributions of rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively.
|
| 126 |
+
|
| 127 |
+
Table 1: Detailed performance of the individual and fusion models.

| Model | Accuracy | Precision | Recall | F1 Score |
| --- | --- | --- | --- | --- |
| rPPG | 37.45% | 0.37 | 0.38 | 0.38 |
| Facial Features | 46.42% | 0.49 | 0.49 | 0.49 |
| Late Fusion | 41.17% | 0.43 | 0.42 | 0.42 |
| Early Fusion | 54.61% | 0.56 | 0.58 | 0.57 |
|
| 146 |
+
|
| 147 |
+
Table 2: Average contribution of each modality towards emotion recognition.

| Modality | Contribution |
| --- | --- |
| rPPG | 37.67% |
| Visual | 62.33% |
|
| 160 |
+
|
| 161 |
+
Table 1 shows that both individual models performed reasonably well. However, the early fusion model outperformed both of them, demonstrating the advantage of combining rPPG signals and facial feature information for emotion recognition.
|
| 162 |
+
|
| 163 |
+
§ 3.3 DISCUSSION
|
| 164 |
+
|
| 165 |
+
This paper presents a compelling case for including multimodal context in emotion recognition. While the models trained on individual modalities show moderate performance, their fusion significantly improves emotion recognition accuracy, which emphasizes the complementarity of these modalities in capturing emotional states. However, late fusion underperforms early fusion, indicating that integrating the modalities at an earlier stage allows more effective learning of emotional states.
|
| 166 |
+
|
| 167 |
+
However, the proposed work has a few limitations. The IEMOCAP dataset, while widely used, may limit the generalizability of the findings; cross-dataset experiments on larger and more diverse datasets could further strengthen the results. Moreover, additional modalities such as audio, text, and other physiological signals could be incorporated for emotion recognition. Finally, a more in-depth interpretability mechanism could be developed to explain the role of individual features in emotion detection.
|
| 168 |
+
|
| 169 |
+
§ 4 CONCLUSION
|
| 170 |
+
|
| 171 |
+
This work presents a multimodal emotion recognition framework using rPPG signals and facial features. It paves the way for practical applications where transparent and interpretable emotion understanding is important. The results highlight the benefits of integrating multiple modalities for emotion recognition, with an early fusion approach yielding the highest accuracy. While there are limitations and potential improvements, our study provides a promising direction for future research in emotion recognition, emphasizing the importance of multimodal data and fusion techniques.
|
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,105 @@
|
| 1 |
+
# Chemically Interpretable Molecular Representation for Property Prediction
|
| 2 |
+
|
| 3 |
+
M S B Roshan ${}^{+ \dagger * }$ , Nirav Bhatt ${}^{+ \dagger * }$
|
| 4 |
+
|
| 5 |
+
${}^{ + }$ BioSystems Engineering and Control Group, Department of Biotechnology, IIT Madras ${}^{ \dagger }$ Robert Bosch Centre for Data Science and Artificial Intelligence (RBCDSAI), IIT Madras *Centre for Integrative Biology and Systems medicinE (IBSE), IIT Madras
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
Molecular property prediction using a molecule's structure is a crucial step in drug and novel material discovery, as computational screening approaches rely on predicted properties to refine the existing design of molecules. Although the problem has existed for decades, it has recently gained attention due to the advent of big data and deep learning. On average, only one FDA-approved drug emerges from every 250 compounds entering the preclinical research stage, which in turn requires screening chemical libraries containing more than 20,000 compounds. In-silico property prediction approaches using learnable representations increase the pace of development and reduce the cost of discovery. We propose developing molecular representations using functional groups in chemistry to address the problem of deciphering the relationship between a molecule's structure and its properties. Functional groups are substructures in a molecule with distinctive chemical properties that influence its chemical characteristics. These substructures are found by (i) curating functional groups annotated by chemists and (ii) mining a large corpus of molecules to extract frequent substructures using a pattern-mining algorithm. We show that the Functional Group Representation (FGR) framework beats state-of-the-art models on several benchmark datasets while ensuring explainability of the relationship between the predicted property and the molecular structure for experimentalists.
|
| 10 |
+
|
| 11 |
+
## 1 Introduction
|
| 12 |
+
|
| 13 |
+
Molecular property prediction is a task that finds applications in drug discovery, quantum mechanical attribute prediction of molecules, hydrophobicity prediction, material design and drug toxicity prediction. In the field of drug discovery and novel material discovery, computational approaches for predicting molecular properties can boost the processes of finding better drug candidates and materials $\left\lbrack {1,2}\right\rbrack$. Characterising and predicting molecular properties is one of the most crucial problems in drug discovery. Numerous strategies are being used globally to enhance efficiency and improve the success of the drug discovery and development process. These strategies use a wide range of data such as genomics and proteomics, drug molecule structures and properties, and methods such as pharmaceutical modelling and artificial intelligence [3]. On average, one drug is approved by the US FDA for every five compounds entering clinical trials, which, in turn, are the result of thorough preclinical testing of 250 compounds, themselves selected by screening 5000-10000 compounds [4]. Experimentally testing many such compounds is both time- and resource-consuming. In recent years, the use of computational methods in the drug discovery domain has increased significantly [3]. The traditional computational approaches for in-silico molecular property prediction have relied on extracting fingerprints or hand-engineered features. Since these features are typically designed for a specific property prediction task, they capture only features relevant to that particular task.
|
| 14 |
+
|
| 15 |
+
In contrast to traditional computational approaches, deep learning-based (DL) approaches can automatically learn features from molecules directly for the task at hand, and hence can reduce the time and cost of property prediction $\left\lbrack {5,6}\right\rbrack$. Instantaneous molecular property prediction using deep learning algorithms can help generate novel molecules with desired profiles and engineer artificial synthesis pathways faster and more cheaply. Graph neural networks (GNN) and their variants have been widely used for molecular property prediction tasks due to their ability to generate better molecular representations $\left\lbrack {7,6,8,9,{10},{11},{12}}\right\rbrack$. These approaches use information on the atoms, bonds, topology, interactions and molecular geometry (3D spatial structure) of molecules for learning molecular representations. However, GNN-based approaches require a large amount of labelled data for a particular task, and generating such large labelled datasets is infeasible for many applications. To handle the problem of limited labelled data, several graph-based self-supervised learning approaches have been proposed to learn molecular representations from unlabelled molecular data $\left\lbrack {9,{13},{14}}\right\rbrack$.
|
| 16 |
+
|
| 17 |
+
Although GNNs and self-supervised learning models have provided promising results on several property prediction tasks, the relationships between properties and molecular structures are challenging for chemists to interpret due to the complex molecular representations generated by these methods. For novel molecule discovery and drug repurposing applications, a chemically interpretable molecular representation is essential so that chemists can test the generated molecules via wet-lab experiments. Hence, a chemistry-inspired representation of molecules can be vital in achieving both interpretability and improved predictive performance of these models.
|
| 18 |
+
|
| 19 |
+
In this work, we propose a molecular representation learning framework that uses the concept of functional groups in chemistry. Functional groups are substructures in a molecule that are responsible for the chemical properties of the molecule, including its reactivity. This work proposes a functional group representation (FGR) framework that allows embedding molecules based on their substructures. Firstly, we introduce two approaches for the generation of the functional group vocabulary, namely, functional groups (FG) curated from the OCHEM database [15] and mined functional groups (MFG) from the PubChem database [16]. Then, we develop four different latent feature encodings using the FG- and MFG-based vocabulary generated in the first step for property prediction tasks. Further, we investigate the effect of pretraining on unlabelled molecules in the PubChem database on the property prediction tasks. We perform experiments on several benchmark datasets from the available literature and compare the results of the proposed FGR framework with other state-of-the-art methods. We demonstrate that the FGR framework outperforms the state-of-the-art methods on several property prediction tasks and provides comparable results on the others, while offering interpretability to chemists and practitioners.
|
| 20 |
+
|
| 21 |
+
## 2 Objectives
|
| 22 |
+
|
| 23 |
+
O1 Generate a functional group vocabulary characterised by chemists and extract frequent sub-structures from a large chemical corpus.
|
| 24 |
+
|
| 25 |
+
O2 Learn functions ${f}_{{\mathbf{x}}_{G}} : {\mathbf{x}}_{G} \rightarrow {\mathbf{z}}_{G}$ using autoencoders [17] where ${\mathbf{x}}_{G}$ is a multi-hot vector of appropriate dimension (say $p$ ) depending on the input representation and ${\mathbf{z}}_{G} \in {\mathbb{R}}^{l}$ is the learnt latent vector.
|
| 26 |
+
|
| 27 |
+
O3 Decode the predicted property and molecular structure relationship using gradient-based model agnostic interpretability methods.
|
| 28 |
+
|
| 29 |
+
## 3 Methodology
|
| 30 |
+
|
| 31 |
+
In this work, a set of SMILES strings for $n$ molecules, $\mathcal{S} = \left\{ {{S}_{1},{S}_{2},\ldots ,{S}_{n}}\right\}$ which might be associated with a property $y$ is considered. Furthermore, we also incorporate 2D global molecular descriptors to augment the learnt representation (FGR-Desc) and increase the performance of downstream property prediction tasks. The methods are summarised in Figure 1.
|
| 32 |
+
|
| 33 |
+
- Generation of Functional Group vocabulary: In this study, we use the OCHEM [15] database, which contains a collection of 2786 functional groups (FG) characterised by chemists; in addition, frequent sub-structures are recognised using a sequential pattern mining algorithm applied to $\mathcal{S}$ from the PubChem database ($n > 114$ million). Based on a frequency threshold $\eta$, 3000 mined functional groups (MFG) are identified. Any molecule ${S}_{i} \in \mathcal{S}$ can then be represented by a multi-hot encoded vector ${\left\lbrack {x}_{1},{x}_{2},\ldots ,{x}_{b}\right\rbrack }^{T}$, where ${x}_{j} = 1$ if the $j$-th vocabulary entry ${FGR}_{j}$ occurs in ${S}_{i}$ and ${x}_{j} = 0$ otherwise.
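The multi-hot encoding described above can be sketched as follows. Note that real functional-group matching is a chemical substructure (SMARTS-style) search, e.g. via a cheminformatics toolkit; the plain substring test on the SMILES string used here is only a stand-in to show the encoding's shape.

```python
def multi_hot(smiles, vocabulary):
    """Encode a molecule as a multi-hot vector over a substructure vocabulary:
    x_j = 1 if vocabulary entry j occurs in the molecule, else 0."""
    # NOTE: substring matching on the SMILES string is a simplification;
    # the FGR framework performs proper substructure matching.
    return [1 if pattern in smiles else 0 for pattern in vocabulary]
```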
|
| 34 |
+
|
| 35 |
+
- Pretraining and Property Prediction: Pretraining is decoupled from the downstream property prediction task to develop a global representation of the chemical space that can be applied to any task. In the pretraining step, the autoencoder is trained separately from the downstream property prediction task by minimizing the reconstruction loss over all the molecules in the database. One of the preliminary challenges of encoder-decoder pretraining is determining the dimension of the latent feature vector; hyper-parameter optimization is performed to obtain this dimension for all four types of encodings. A fully connected neural network is then used to compute a probability score $p\left( {\mathbf{x}}_{G}\right) \in \left\lbrack {0,1}\right\rbrack$ based on the latent feature vector ${\mathbf{z}}_{G}$ for property prediction.
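The pretraining step above can be sketched with a deliberately tiny linear autoencoder mapping a multi-hot input ${\mathbf{x}}_{G} \in \{0,1\}^p$ to a latent ${\mathbf{z}}_{G} \in \mathbb{R}^l$; the architecture, sizes, and omitted training loop are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def make_autoencoder(p, l, seed=0):
    """Minimal autoencoder sketch: encode R^p -> R^l, decode R^l -> (0,1)^p,
    with a mean-squared reconstruction loss (training loop omitted)."""
    rng = np.random.default_rng(seed)
    W_enc = rng.normal(0.0, 0.1, size=(l, p))
    W_dec = rng.normal(0.0, 0.1, size=(p, l))
    def encode(x):
        return W_enc @ x                              # latent z_G
    def decode(z):
        return 1.0 / (1.0 + np.exp(-(W_dec @ z)))     # reconstructed bit probabilities
    def loss(x):
        return float(np.mean((decode(encode(x)) - x) ** 2))  # reconstruction loss
    return encode, decode, loss
```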
|
| 36 |
+
|
| 37 |
+
- Interpretability: We evaluate each input feature's contribution to the model's output using primary attribution methods like feature permutation, integrated gradients and gradient SHAP $\left\lbrack {{18},{19}}\right\rbrack$ . The goodness of
|
| 38 |
+
|
| 39 |
+

|
| 40 |
+
|
| 41 |
+
Figure 1: Overview of the Proposed Methodology: A) FG Representation, B) MFG Representation, C) Descriptor Representation, D) Latent Representation for FGR and Property Prediction Module
|
| 42 |
+
|
| 43 |
+

|
| 44 |
+
|
| 45 |
+
Figure 2: Overview of Interpretability Analysis: For any given property, attribution scores for input features are calculated and the substructures can be visualised overlapped with the scores
|
| 46 |
+
|
| 47 |
+
explanations is quantified using infidelity and sensitivity metrics. A visualisation tool is also developed to highlight essential substructures that contribute to predicting desired properties, as shown in Figure 2.
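Of the primary attribution methods named above, integrated gradients [18] admits a compact sketch; `grad_f` and the linear test model are hypothetical, and a simple midpoint Riemann sum stands in for the path integral.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline=None, steps=50):
    """Approximate integrated-gradients attributions for a scalar model:
    (x - baseline) * average gradient along the straight path baseline -> x."""
    baseline = np.zeros_like(x) if baseline is None else baseline
    alphas = (np.arange(steps) + 0.5) / steps          # midpoints along the path
    grads = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * grads
```

For a linear model the attributions recover $w_j x_j$ exactly, and they sum to $f(x) - f(\text{baseline})$ (the completeness axiom).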
|
| 48 |
+
|
| 49 |
+
## 4 Results
|
| 50 |
+
|
| 51 |
+
Extensive evaluation of the model was done for robustness and generalizability on classification and regression tasks using five-fold random and scaffold splits. The results are summarized in Table 1 and Table 2.
|
| 52 |
+
|
| 53 |
+
## 5 Conclusion
|
| 54 |
+
|
| 55 |
+
This work presents a functional group representation (FGR) framework that uses functional groups in chemistry for molecular representation learning. The framework supports four types of molecular representations: FG-based, MFG-based, FG-MFG-based and FG-MFG-descriptor-based. The proposed FGR-based molecular embeddings have been evaluated on several benchmark datasets, and the framework performs on par with, and sometimes better than, the state-of-the-art algorithms in classification tasks. The FGR framework also provides chemically interpretable encodings, as it is inspired by the rules of chemistry, maintaining explainability of the encoding. In the proposed framework, autoencoders are used to learn latent representations. We also demonstrated that pretraining in the FGR framework is possible due to the decoupling between the latent representation learning
|
| 56 |
+
|
| 57 |
+
<table><tr><td colspan="4">Scaffold Split Classification (ROC-AUC) $\uparrow$</td></tr><tr><td>Dataset</td><td>$\mathbf{{FGR}}$</td><td>DMPNN</td><td>GEM</td></tr><tr><td>BACE</td><td>${0.89} \pm {0.01}$</td><td>${0.86} \pm {0.05}$</td><td>${0.86} \pm {0.01}$</td></tr><tr><td>BBBP</td><td>$\mathbf{{0.96} \pm {0.008}}$</td><td>${0.92} \pm {0.02}$</td><td>${0.72} \pm {0.00}$</td></tr><tr><td>Tox21</td><td>${0.71} \pm {0.01}$</td><td>${0.69} \pm {0.01}$</td><td>${0.78} \pm {0.001}$</td></tr><tr><td>ClinTox</td><td>$\mathbf{{0.99} \pm {0.002}}$</td><td>${0.88} \pm {0.03}$</td><td>${0.90} \pm {0.01}$</td></tr><tr><td>SIDER</td><td>${0.72} \pm {0.07}$</td><td>${0.63} \pm {0.03}$</td><td>${0.67} \pm {0.004}$</td></tr></table>
|
| 58 |
+
|
| 59 |
+
Table 1: Comparison of ROC-AUC scores for FGR, DMPNN [6], and GEM [8]
|
| 60 |
+
|
| 61 |
+
<table><tr><td colspan="4">Scaffold Split Regression (RMSE) $\downarrow$</td></tr><tr><td>Dataset</td><td>$\mathbf{{FGR}}$</td><td>DMPNN</td><td>GEM</td></tr><tr><td>ESOL</td><td>${0.62} \pm {0.06}$</td><td>${1.05} \pm {0.008}$</td><td>${0.79} \pm {0.02}$</td></tr><tr><td>FreeSolv</td><td>${0.78} \pm {0.19}$</td><td>${2.08} \pm {0.082}$</td><td>${1.87} \pm {0.094}$</td></tr><tr><td>Lipo</td><td>${0.64} \pm {0.035}$</td><td>${0.68} \pm {0.016}$</td><td>${0.66} \pm {0.008}$</td></tr></table>
|
| 62 |
+
|
| 63 |
+
Table 2: Comparison of RMSE scores for FGR, DMPNN [6], and GEM [8]
|
| 64 |
+
|
| 65 |
+
task and the property prediction task. It is envisaged to extend the FGR framework for building pre-trained models with explainability using self-supervised learning on large-scale molecular data.
|
| 66 |
+
|
| 67 |
+
## References
|
| 68 |
+
|
| 69 |
+
[1] W Patrick Walters and Regina Barzilay. Applications of deep learning in molecule generation and molecular property prediction. Accounts of chemical research, 54(2):263-270, 2020.
|
| 70 |
+
|
| 71 |
+
[2] Oliver Wieder, Stefan Kohlbacher, Mélaine Kuenemann, Arthur Garon, Pierre Ducrot, Thomas Seidel, and Thierry Langer. A compact review of molecular property prediction with graph neural networks. Drug Discovery Today: Technologies, 37:1-12, 2020.
|
| 72 |
+
|
| 73 |
+
[3] Geoffrey Kabue Kiriiri, Peter Mbugua Njogu, and Alex Njoroge Mwangi. Exploring different approaches to improve the success of drug discovery and development projects: a review. Future Journal of Pharmaceutical Sciences, 6(1):1-12, 2020.
|
| 74 |
+
|
| 75 |
+
[4] Jie Shen and Christos A Nicolaou. Molecular property prediction: recent trends in the era of artificial intelligence. Drug Discovery Today: Technologies, 32:29-36, 2019.
|
| 76 |
+
|
| 77 |
+
[5] Andreas Mayr, Günter Klambauer, Thomas Unterthiner, Marvin Steijaert, Jörg K Wegner, Hugo Ceulemans, Djork-Arné Clevert, and Sepp Hochreiter. Large-scale comparison of machine learning methods for drug target prediction on chembl. Chemical science, 9(24):5441-5451, 2018.
|
| 78 |
+
|
| 79 |
+
[6] Kevin Yang, Kyle Swanson, Wengong Jin, Connor Coley, Philipp Eiden, Hua Gao, Angel Guzman-Perez, Timothy Hopper, Brian Kelley, Miriam Mathea, Andrew Palmer, Volker Settels, Tommi Jaakkola, Klavs Jensen, and Regina Barzilay. Analyzing Learned Molecular Representations for Property Prediction. Journal of Chemical Information and Modeling, 59(8):3370-3388, 8 2019.
|
| 80 |
+
|
| 81 |
+
[7] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263-1272. PMLR, 2017.
|
| 82 |
+
|
| 83 |
+
[8] Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. Geometry-enhanced molecular representation learning for property prediction. Nature Machine Intelligence, 4(2):127-134, 2022.
|
| 84 |
+
|
| 85 |
+
[9] Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. Advances in Neural Information Processing Systems, 33:12559-12571, 2020.
|
| 86 |
+
|
| 87 |
+
[10] Fan-Yun Sun, Jordan Hoffman, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In International Conference on Learning Representations, 2019.
|
| 88 |
+
|
| 89 |
+
[11] Chengqiang Lu, Qi Liu, Chao Wang, Zhenya Huang, Peize Lin, and Lixin He. Molecular property prediction: A multilevel quantum interactions modeling perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 1052-1060, 2019.
|
| 90 |
+
|
| 91 |
+
[12] Jonathan M Stokes, Kevin Yang, Kyle Swanson, Wengong Jin, Andres Cubillos-Ruiz, Nina M Donghia, Craig R MacNair, Shawn French, Lindsey A Carfrae, Zohar Bloom-Ackermann, et al. A deep learning approach to antibiotic discovery. Cell, 180(4):688-702, 2020.
|
| 92 |
+
|
| 93 |
+
[13] Seyone Chithrananda, Gabriel Grand, and Bharath Ramsundar. Chemberta: large-scale self-supervised pretraining for molecular property prediction. arXiv preprint arXiv:2010.09885, 2020.
|
| 94 |
+
|
| 95 |
+
[14] Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Chee-Kong Lee. Motif-based graph self-supervised learning for molecular property prediction. Advances in Neural Information Processing Systems, 34:15870- 15882, 2021.
|
| 96 |
+
|
| 97 |
+
[15] Iurii Sushko, Sergii Novotarskyi, Robert Körner, Anil Kumar Pandey, Matthias Rupp, Wolfram Teetz, Stefan Brandmaier, Ahmed Abdelaziz, Volodymyr V Prokopenko, Vsevolod Y Tanchuk, et al. Online chemical modeling environment (ochem): web platform for data storage, model development and publishing of chemical information. Journal of computer-aided molecular design, 25:533-554, 2011.
|
| 98 |
+
|
| 99 |
+
[16] Sunghwan Kim, Paul A Thiessen, Evan E Bolton, Jie Chen, Gang Fu, Asta Gindulyte, Lianyi Han, Jane He, Siqian He, Benjamin A Shoemaker, et al. Pubchem substance and compound databases. Nucleic acids research, 44(D1):D1202-D1213, 2016.
|
| 100 |
+
|
| 101 |
+
[17] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. science, 313(5786):504-507, 2006.
|
| 102 |
+
|
| 103 |
+
[18] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International conference on machine learning, pages 3319-3328. PMLR, 2017.
|
| 104 |
+
|
| 105 |
+
[19] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in neural information processing systems, 30, 2017.
|
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,103 @@
|
| 1 |
+
§ CHEMICALLY INTERPRETABLE MOLECULAR REPRESENTATION FOR PROPERTY PREDICTION
|
| 2 |
+
|
| 3 |
+
M S B Roshan ${}^{+ \dagger * }$ , Nirav Bhatt ${}^{+ \dagger * }$
|
| 4 |
+
|
| 5 |
+
${}^{ + }$ BioSystems Engineering and Control Group, Department of Biotechnology, IIT Madras ${}^{ \dagger }$ Robert Bosch Centre for Data Science and Artificial Intelligence (RBCDSAI), IIT Madras *Centre for Integrative Biology and Systems medicinE (IBSE), IIT Madras
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Molecular property prediction using a molecule's structure is a crucial step in drug and novel material discovery, as computational screening approaches rely on predicted properties to refine the existing design of molecules. Although the problem has existed for decades, it has recently gained attention due to the advent of big data and deep learning. On average, only one FDA-approved drug emerges from every 250 compounds entering the preclinical research stage, which in turn requires screening chemical libraries containing more than 20,000 compounds. In-silico property prediction approaches using learnable representations increase the pace of development and reduce the cost of discovery. We propose developing molecular representations using functional groups in chemistry to address the problem of deciphering the relationship between a molecule's structure and its properties. Functional groups are substructures in a molecule with distinctive chemical properties that influence its chemical characteristics. These substructures are found by (i) curating functional groups annotated by chemists and (ii) mining a large corpus of molecules to extract frequent substructures using a pattern-mining algorithm. We show that the Functional Group Representation (FGR) framework beats state-of-the-art models on several benchmark datasets while ensuring explainability of the relationship between the predicted property and the molecular structure for experimentalists.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Molecular property prediction is a task that finds applications in drug discovery, quantum mechanical attribute prediction of molecules, hydrophobicity prediction, material design and drug toxicity prediction. In the field of drug discovery and novel material discovery, computational approaches for predicting molecular properties can boost the processes of finding better drug candidates and materials $\left\lbrack {1,2}\right\rbrack$. Characterising and predicting molecular properties is one of the most crucial problems in drug discovery. Numerous strategies are being used globally to enhance efficiency and improve the success of the drug discovery and development process. These strategies use a wide range of data such as genomics and proteomics, drug molecule structures and properties, and methods such as pharmaceutical modelling and artificial intelligence [3]. On average, one drug is approved by the US FDA for every five compounds entering clinical trials, which, in turn, are the result of thorough preclinical testing of 250 compounds, themselves selected by screening 5000-10000 compounds [4]. Experimentally testing many such compounds is both time- and resource-consuming. In recent years, the use of computational methods in the drug discovery domain has increased significantly [3]. The traditional computational approaches for in-silico molecular property prediction have relied on extracting fingerprints or hand-engineered features. Since these features are typically designed for a specific property prediction task, they capture only features relevant to that particular task.
|
| 14 |
+
|
| 15 |
+
In contrast to traditional computational approaches, deep learning-based (DL) approaches can automatically learn features from molecules directly for the task at hand, and hence can reduce the time and cost of property prediction $\left\lbrack {5,6}\right\rbrack$. Instantaneous molecular property prediction using deep learning algorithms can help generate novel molecules with desired profiles and engineer artificial synthesis pathways faster and more cheaply. Graph neural networks (GNN) and their variants have been widely used for molecular property prediction tasks due to their ability to generate better molecular representations $\left\lbrack {7,6,8,9,{10},{11},{12}}\right\rbrack$. These approaches use information on the atoms, bonds, topology, interactions and molecular geometry (3D spatial structure) of molecules for learning molecular representations. However, GNN-based approaches require a large amount of labelled data for a particular task, and generating such large labelled datasets is infeasible for many applications. To handle the problem of limited labelled data, several graph-based self-supervised learning approaches have been proposed to learn molecular representations from unlabelled molecular data $\left\lbrack {9,{13},{14}}\right\rbrack$.
|
| 16 |
+
|
| 17 |
+
Although GNNs and self-supervised learning models have provided promising results on several property prediction tasks, the relationships between properties and molecular structures are challenging for chemists to interpret due to the complex molecular representations generated by these methods. For novel molecule discovery and drug repurposing applications, a chemically interpretable molecular representation is essential so that chemists can test the generated molecules via wet-lab experiments. Hence, a chemistry-inspired representation of molecules can be vital in achieving both interpretability and improved predictive performance of these models.
|
| 18 |
+
|
| 19 |
+
In this work, we propose a molecular representation learning framework that uses the concept of functional groups in chemistry. Functional groups are substructures in a molecule that are responsible for the chemical properties of the molecule, including its reactivity. This work proposes a functional group representation (FGR) framework that allows embedding molecules based on their substructures. Firstly, we introduce two approaches for the generation of the functional group vocabulary, namely, functional groups (FG) curated from the OCHEM database [15] and mined functional groups (MFG) from the PubChem database [16]. Then, we develop four different latent feature encodings using the FG- and MFG-based vocabulary generated in the first step for property prediction tasks. Further, we investigate the effect of pretraining on unlabelled molecules in the PubChem database on the property prediction tasks. We perform experiments on several benchmark datasets from the available literature and compare the results of the proposed FGR framework with other state-of-the-art methods. We demonstrate that the FGR framework outperforms the state-of-the-art methods on several property prediction tasks and provides comparable results on the others, while offering interpretability to chemists and practitioners.
|
| 20 |
+
|
| 21 |
+
§ 2 OBJECTIVES
|
| 22 |
+
|
| 23 |
+
O1 Generate a functional group vocabulary characterised by chemists and extract frequent sub-structures from a large chemical corpus.
|
| 24 |
+
|
| 25 |
+
O2 Learn functions ${f}_{{\mathbf{x}}_{G}} : {\mathbf{x}}_{G} \rightarrow {\mathbf{z}}_{G}$ using autoencoders [17] where ${\mathbf{x}}_{G}$ is a multi-hot vector of appropriate dimension (say $p$ ) depending on the input representation and ${\mathbf{z}}_{G} \in {\mathbb{R}}^{l}$ is the learnt latent vector.
|
| 26 |
+
|
| 27 |
+
O3 Decode the predicted property and molecular structure relationship using gradient-based model agnostic interpretability methods.
|
| 28 |
+
|
| 29 |
+
§ 3 METHODOLOGY
|
| 30 |
+
|
| 31 |
+
In this work, a set of SMILES strings for $n$ molecules, $\mathcal{S} = \left\{ {{S}_{1},{S}_{2},\ldots ,{S}_{n}}\right\}$ which might be associated with a property $y$ is considered. Furthermore, we also incorporate 2D global molecular descriptors to augment the learnt representation (FGR-Desc) and increase the performance of downstream property prediction tasks. The methods are summarised in Figure 1.
|
| 32 |
+
|
| 33 |
+
* Generation of Functional Group vocabulary: In this study, we use the OCHEM [15] database, which contains a collection of 2786 functional groups (FG) characterised by chemists; in addition, frequent sub-structures are recognised using a sequential pattern mining algorithm applied to $\mathcal{S}$ from the PubChem database ($n > 114$ million). Based on a frequency threshold $\eta$, 3000 mined functional groups (MFG) are identified. Any molecule ${S}_{i} \in \mathcal{S}$ can then be represented by a multi-hot encoded vector ${\left\lbrack {x}_{1},{x}_{2},\ldots ,{x}_{b}\right\rbrack }^{T}$, where ${x}_{j} = 1$ if the $j$-th vocabulary entry ${FGR}_{j}$ occurs in ${S}_{i}$ and ${x}_{j} = 0$ otherwise.
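The multi-hot encoding described above can be sketched as follows. Note that real functional-group matching is a chemical substructure (SMARTS-style) search; the plain substring test on the SMILES string used here is only a stand-in to show the encoding's shape.

```python
def multi_hot(smiles, vocabulary):
    """Encode a molecule as a multi-hot vector over a substructure vocabulary:
    x_j = 1 if vocabulary entry j occurs in the molecule, else 0."""
    # NOTE: substring matching on the SMILES string is a simplification;
    # the FGR framework performs proper substructure matching.
    return [1 if pattern in smiles else 0 for pattern in vocabulary]
```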
|
| 34 |
+
|
| 35 |
+
* Pretraining and Property Prediction: Pretraining is decoupled from the downstream property prediction task to develop a global representation of the chemical space that can be applied to any task. In the pretraining step, the autoencoder is trained separately from the downstream property prediction task by minimizing the reconstruction loss over all the molecules in the database. One of the preliminary challenges of encoder-decoder pretraining is determining the dimension of the latent feature vector; hyper-parameter optimization is performed to obtain this dimension for all four types of encodings. A fully connected neural network is then used to compute a probability score $p\left( {\mathbf{x}}_{G}\right) \in \left\lbrack {0,1}\right\rbrack$ based on the latent feature vector ${\mathbf{z}}_{G}$ for property prediction.
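The pretraining step above can be sketched with a deliberately tiny linear autoencoder mapping a multi-hot input ${\mathbf{x}}_{G} \in \{0,1\}^p$ to a latent ${\mathbf{z}}_{G} \in \mathbb{R}^l$; the architecture, sizes, and omitted training loop are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def make_autoencoder(p, l, seed=0):
    """Minimal autoencoder sketch: encode R^p -> R^l, decode R^l -> (0,1)^p,
    with a mean-squared reconstruction loss (training loop omitted)."""
    rng = np.random.default_rng(seed)
    W_enc = rng.normal(0.0, 0.1, size=(l, p))
    W_dec = rng.normal(0.0, 0.1, size=(p, l))
    def encode(x):
        return W_enc @ x                              # latent z_G
    def decode(z):
        return 1.0 / (1.0 + np.exp(-(W_dec @ z)))     # reconstructed bit probabilities
    def loss(x):
        return float(np.mean((decode(encode(x)) - x) ** 2))  # reconstruction loss
    return encode, decode, loss
```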
* Interpretability: We evaluate each input feature's contribution to the model's output using primary attribution methods such as feature permutation, integrated gradients and gradient SHAP [18, 19]. The goodness of
Figure 1: Overview of the Proposed Methodology: A) FG Representation, B) MFG Representation, C) Descriptor Representation, D) Latent Representation for FGR and Property Prediction Module
Figure 2: Overview of Interpretability Analysis: For any given property, attribution scores for the input features are calculated and the substructures can be visualised with the scores overlaid
explanations is quantified using infidelity and sensitivity metrics. A visualisation tool is also developed to highlight essential substructures that contribute to predicting desired properties, as shown in Figure 2.
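Of the attribution methods named above, feature permutation is the simplest to illustrate: a feature's score is the change in model output when its column is shuffled, breaking its association with the prediction. The linear model and data below are toy stand-ins for the latent-representation property predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 4))
w = np.array([3.0, 0.0, 1.0, 0.0])   # only features 0 and 2 influence the output

def model(data):
    return data @ w

def permutation_attribution(model, X, rng):
    """Score feature j by the mean absolute output change after shuffling column j."""
    base = model(X)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])        # break the feature's link to the prediction
        scores.append(np.mean(np.abs(model(Xp) - base)))
    return np.array(scores)

scores = permutation_attribution(model, X, rng)
print(scores.argmax())  # feature 0 carries the largest attribution
```

Integrated gradients and gradient SHAP follow the same interface in practice (e.g. via the Captum library), but operate on model gradients rather than output perturbations.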
§ 4 RESULTS
The model was evaluated extensively for robustness and generalizability on classification and regression tasks using five-fold random and scaffold splits. The results are summarized in Table 1 and Table 2.
§ 5 CONCLUSION
This work presents a functional group representation (FGR) framework that uses the functional groups of chemistry for molecular representation learning. The framework supports four types of molecular representations: FG, MFG, FG-MFG, and FG-MFG-descriptor-based representations. The proposed FGR molecular embeddings have been evaluated on several benchmark datasets, where the framework performs on par with, and sometimes better than, state-of-the-art algorithms in classification tasks. Because it is grounded in the rules of chemistry, the FGR framework also provides a chemically interpretable encoding. In the proposed framework, autoencoders are used to learn latent representations. We also demonstrated that pretraining in the FGR framework is possible due to the decoupling between the latent representation learning
Scaffold Split Classification (ROC-AUC) $\uparrow$

| Dataset | $\mathbf{FGR}$ | DMPNN | GEM |
|---|---|---|---|
| BACE | $0.89 \pm 0.01$ | $0.86 \pm 0.05$ | $0.86 \pm 0.01$ |
| BBBP | $\mathbf{0.96 \pm 0.008}$ | $0.92 \pm 0.02$ | $0.72 \pm 0.00$ |
| Tox21 | $0.71 \pm 0.01$ | $0.69 \pm 0.01$ | $0.78 \pm 0.001$ |
| ClinTox | $\mathbf{0.99 \pm 0.002}$ | $0.88 \pm 0.03$ | $0.90 \pm 0.01$ |
| SIDER | $0.72 \pm 0.07$ | $0.63 \pm 0.03$ | $0.67 \pm 0.004$ |

Table 1: Comparison of ROC-AUC scores for FGR, DMPNN [6], and GEM [8]
Scaffold Split Regression (RMSE) $\downarrow$

| Dataset | $\mathbf{FGR}$ | DMPNN | GEM |
|---|---|---|---|
| ESOL | $0.62 \pm 0.06$ | $1.05 \pm 0.008$ | $0.79 \pm 0.02$ |
| FreeSolv | $0.78 \pm 0.19$ | $2.08 \pm 0.082$ | $1.87 \pm 0.094$ |
| Lipo | $0.64 \pm 0.035$ | $0.68 \pm 0.016$ | $0.66 \pm 0.008$ |

Table 2: Comparison of RMSE scores for FGR, DMPNN [6], and GEM [8]
task and the property prediction task. We envisage extending the FGR framework to build explainable pre-trained models using self-supervised learning on large-scale molecular data.
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_md/Initial_manuscript.md
# Active Learning with Human Heuristics: An Algorithm Robust to Labelling Bias

Sriram Ravichandran ${}^{ + }$, Nandan Sudarsanam ${}^{ + }$, Konstantinos Katsikopoulos ${}^{ \dagger }$, Balaraman Ravindran ${}^{ + }$

${}^{ + }$ Indian Institute of Technology, Madras

${}^{ \dagger }$ University of Southampton, UK

## Abstract

Active learning (AL) enables prediction algorithms to achieve better performance with fewer data points by adaptively querying an oracle for output labels. In many instances, the oracle is a human. According to the behavioral sciences, humans provide labels by employing decision heuristics, which tend to produce biased labels. AL algorithms trained with such labels could in turn make incorrect predictions, which could render the decisions made by such models unfair. How would modelling the oracle with such heuristics affect the performance of AL algorithms? We investigate three human heuristics (fast-and-frugal tree, tallying, and Franklin's rule) combined with four active learning algorithms (entropy-based, multi-view learning, density-based, and a novel density-based algorithm) and apply them to five datasets from domains such as health, wealth and sustainability. A first novel finding is that if a heuristic leads to significant labelling bias, the performance of active learning algorithms drops significantly, sometimes below random sampling. Thus, it is key to design active learning algorithms robust to labelling bias. Our second contribution is a novel density-based algorithm that achieves an overall median improvement of 31% over current algorithms when the oracle has a significant labelling bias. In sum, designing and benchmarking active learning algorithms should incorporate the modelling of human decision heuristics.

## 1 Introduction

AI is being used in various significant applications that affect human lives, including recruitment, consumer lending, healthcare and criminal justice. Building prediction models is crucial for automating such decision processes because it enables decisions based on data rather than relying solely on intuition or past experience. There is an increasing need to train such models in conditions where obtaining labels is significantly more expensive than obtaining attributes. Moreover, due to the sensitivity of these applications, the trained models are also expected to be fair, i.e., devoid of the bias that exists when a human makes a decision. Active learning (AL) algorithms have the leverage of choosing the data points to be queried at each instance, thereby reaching benchmark accuracy with fewer queries (labeled instances). A typical active learner starts with a small number of labeled instances, queries for one or more unlabeled instances, and then selects additional points to query based on the labels obtained from previous queries. Labeling the queried instances can be done in multiple ways and is therefore typically assumed to be an unbiased random response. For example, building a model to predict the durability of a car involves crash-testing cars to obtain labels, which is highly expensive, making this a suitable application for AL algorithms. However, a substantial subset of AL-based querying involves a human annotator. For instance, a review of AL papers found with the keyword "Active Learning" that were published during 2021-2023 across prominent venues such as Nature Communications, the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, the Journal of Machine Learning Research and Advances in Neural Information Processing Systems shows that about 63% of the works involved human-annotated labels.

Traditional literature in behavioral economics [1] highlights the deviation of human decision-making from rationality, which it defines as bias. Providing labels for AL should be no exception.
However, annotator bias and its implications for trained models are acknowledged in only a small subset of the AL literature. For instance, Agarwal et al. [2] observed that behavioral biases in the oracle decrease the classification accuracy of the resulting prediction models by at least 20%. Moreover, Settles [3], in his extensive literature survey on AL, noted that the reliability of labels provided by humans might be compromised by difficulties in comprehending the instances, which might impact the quality of the labels obtained.

This understanding resulted in the development of a class of AL algorithms that considers the biases present in the human oracle.

Works belonging to this class [4, 5] treated human bias as a random or uniformly distributed error while proposing novel algorithms. Du and Ling [6], on the other hand, proposed an algorithm with an exploration-and-exploitation approach that relabels data points that could have been wrongly labeled. The oracle there was modeled on the assumption that the probability of obtaining biased labels depends on the maximum posterior probability of an instance computed with the ground-truth labels.

In all the above works, the oracle was assumed to offer incorrect responses randomly, or the label bias was synthetically injected based on certain assumptions. However, Herbert Simon, the founder of bounded rationality, argues that people must use approximations for the majority of tasks, including simple decision heuristics [7]. Additionally, Gigerenzer et al. [8] identified several human heuristics that the mind tends to follow under bounded rationality, as it is incapable of superhuman reasoning.

These works support the view that a human oracle is likely to use decision strategies during annotation, and that label bias tends to result from the heuristic used. It is therefore essential to study the effect of decision strategies on active learning models, since a model trained under an unfair human decision strategy could itself make unfair decisions.

This study contributes to the active learning literature by showing that the decision strategy used by the oracle significantly affects the relative performance of AL algorithms, necessitating the benchmarking of AL algorithms with human decision strategies. We also propose a novel AL algorithm that initiates a new class of algorithms built around human decision strategies.

The rest of the paper is structured as follows. Section 2 lays out the methodology, including the datasets, AL algorithms, and human heuristics used in the study. Section 3 discusses the results, and Section 4 concludes by summarising them.
## 2 Methodology
Typically, the active learner sequentially chooses an instance ${x}_{i}$ from the pool of unlabeled instances $X$ based on its query strategy and queries the human oracle for its label. The labels thus obtained $\left( {y}_{i}\right)$ train the active learner after every query. In our study, we mimic the functionality of the human oracle using fast-and-frugal heuristics, namely the fast-and-frugal tree (FFT) and tallying, and a conventional heuristic (Franklin's rule). The decision strategies ensure that the biased labels provided by the oracle are not random but depend on the instance being queried (see Section 2.1).
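As a rough sketch, a tallying oracle of the kind described above casts one unit vote per attribute by comparing the instance's value with that attribute's mean; the cue directions below are assumed given, whereas in practice they are estimated from data.

```python
import numpy as np

def tallying_oracle(x, attr_means, directions):
    """Label 1 if the majority of unit-weight cue votes favour the positive class."""
    votes = np.where(x > attr_means, directions, -directions)
    return int(votes.sum() > 0)

# Three hypothetical attributes; cue 2 argues "positive" when *below* its mean.
attr_means = np.array([5.0, 100.0, 0.5])
directions = np.array([1, -1, 1])
print(tallying_oracle(np.array([8.0, 80.0, 0.9]), attr_means, directions))  # -> 1
```

An FFT differs only in checking the cues one at a time in a fixed order and exiting at the first decisive cue rather than summing all votes.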
To perform the experiments, we chose five labeled datasets from various domains: health (Cleveland heart disease [9]), wealth (fraudulent firm prediction [10]), automobile (car condition prediction [11]), food science (wine prediction [12]) and sustainability (biodegradability [13]).
For our study, we considered the pool-based sampling scenario, in which the pool of instances is ranked according to the query strategy and the active learner selects the best query based on these ranks. The AL algorithms considered were entropy sampling, multi-view learning with co-testing, conventional density-based learning, and the proposed novel density-based learning (see Section 2.2).
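A minimal pool-based loop with entropy sampling — the first of the four strategies above — might look as follows. The synthetic data, logistic model, and budget of 20 queries are illustrative assumptions, and the ground-truth lookup stands in for the heuristic oracle.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # ground truth; stands in for the oracle

# Seed set with both classes represented; the remainder forms the unlabeled pool.
seed = np.concatenate([np.flatnonzero(y == 1)[:5], np.flatnonzero(y == 0)[:5]])
labeled = [int(i) for i in seed]
pool = [i for i in range(200) if i not in labeled]

model = LogisticRegression()
for _ in range(20):                          # budget of 20 queries
    model.fit(X[labeled], y[labeled])
    p = model.predict_proba(X[pool])
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    best = pool[int(entropy.argmax())]       # most uncertain pool instance
    labeled.append(best)                     # the oracle labels it
    pool.remove(best)

print(model.score(X, y))
```

Swapping the `entropy` ranking for a density-weighted score yields the density-based variants discussed later.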
### 2.1 When is a Fast-and-Frugal Decision Strategy Likely to Provide an Unbiased Label?

To understand in which situations fast-and-frugal heuristics (FFT and tallying) provide incorrect labels, we postulate the following hypothesis:

**Hypothesis 1** Data points whose attribute values are farther away from their corresponding mean attribute values are less prone to obtaining biased labels from a human oracle/heuristic.

The hypothesis was formulated from the intuition that decisions made by fast-and-frugal heuristics always compare attribute values against constant reference values; in FFT and tallying, this constant tends to be the mean attribute value.

The hypothesis can be illustrated with the task of classifying a car's condition based on its usage period (let the average usage be five years). Intuitively, a human oracle would find it easier to classify cars that are two or ten years old than a car that has been used for five years, i.e., cars whose attribute values are farther from the mean.

To test the hypothesis, the fast-and-frugal heuristics were used to produce predictions on the datasets under consideration. Table 1 and Table 2 show that the prediction accuracy of the heuristics was significantly higher for data points farther from the mean (FM) than for those closer to the mean (CM), supporting our claim.
<table><tr><td>Sr.No.</td><td>Data-set Name</td><td>FM(%)</td><td>CM(%)</td><td>Overall(%)</td></tr><tr><td>1</td><td>Biodegradable Data set</td><td>78.74</td><td>73.33</td><td>77.02</td></tr><tr><td>2</td><td>Car Prediction</td><td>80.61</td><td>68.56</td><td>71.29</td></tr><tr><td>3</td><td>Cleveland Heart Disease Data set</td><td>95.45</td><td>83.83</td><td>84.72</td></tr><tr><td>4</td><td>Audit Dataset</td><td>96.5</td><td>94.4</td><td>95.7</td></tr><tr><td>5</td><td>Wine Dataset</td><td>100</td><td>86.7</td><td>87.07</td></tr></table>
Table 1: Accuracy of Predictions made by Tallying heuristic
<table><tr><td>Sr.No.</td><td>Data-set Name</td><td>FM(%)</td><td>CM(%)</td><td>Overall(%)</td></tr><tr><td>1</td><td>Biodegradable Data set</td><td>76.44</td><td>57.33</td><td>70.97</td></tr><tr><td>2</td><td>Car Prediction</td><td>94.1</td><td>88.5</td><td>92.59</td></tr><tr><td>3</td><td>Cleveland Heart Disease Data set</td><td>81.25</td><td>80.07</td><td>81.25</td></tr><tr><td>4</td><td>Audit Data</td><td>96.5</td><td>91.1</td><td>94.42</td></tr><tr><td>5</td><td>Wine Data set</td><td>100</td><td>97.1</td><td>97.75</td></tr></table>
Table 2: Accuracy of Predictions made by FFT heuristic
### 2.2 Novel Density-based Learning
The experimentally supported hypothesis (Section 2.1) motivates a query strategy that selects data points whose attribute values are farther from their mean attribute values. Note that those instances also tend to have lower cosine information-density values. Existing algorithms such as conventional density-based learning rank points by metrics directly proportional to entropy and cosine similarity, which makes them prefer querying data points that are more susceptible to biased labels. Hence, we consider a modified metric:
$$
H\left( x\right) = - \frac{\sum_{k} p_{k} \log\left( p_{k} \right)}{\frac{1}{U} \sum_{u=1}^{U} \operatorname{sim}\left( x, x^{u} \right)} \tag{1}
$$
As the formula indicates, data points are ranked based on their average similarity to the other unlabeled data points in the pool, $\frac{1}{U}\sum_{u = 1}^{U}\operatorname{sim}\left( x, x^{u}\right)$, as well as on the entropy measure, where $U$ is the size of the unlabeled pool after every query. The metric encourages the learner to query data points with high entropy and low information density, i.e., data points that are both useful and likely to receive accurate labels.
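Equation (1) can be computed directly; the sketch below uses toy class probabilities and a three-point pool, with cosine similarity as $\operatorname{sim}(\cdot,\cdot)$.

```python
import numpy as np

def novel_density_score(p, x, pool):
    """Eq. (1): predictive entropy divided by mean cosine similarity to the pool."""
    entropy = -(p * np.log(p + 1e-12)).sum()
    sims = pool @ x / (np.linalg.norm(pool, axis=1) * np.linalg.norm(x) + 1e-12)
    return entropy / sims.mean()

pool = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])   # toy unlabeled pool
p = np.array([0.5, 0.5])                                # maximally uncertain instance
score = novel_density_score(p, np.array([1.0, 0.0]), pool)
print(score)
```

High-entropy points that sit far from the bulk of the pool (small mean similarity) receive the largest scores and are queried first.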
## 3 Results and Discussion

The AL models were trained on the labels produced by the human heuristics. This was repeated for every heuristic-AL algorithm-dataset combination, and the trained model's accuracy was measured after each query. Conventional studies evaluate AL algorithms using learning curves (accuracy vs. data points queried). However, when biased decision strategies provide the labels, a decrease in accuracy across queries is to be expected for both AL algorithms and random sampling, so evaluating algorithms on absolute accuracy alone is uninformative in this study.

However, the accuracy of AL algorithms relative to that of random sampling reveals the comparative effectiveness of active learning algorithms in the presence of decision strategies. Hence we introduce a metric, 'leverage' ($L_i$), to visualize this:

$$
L_{i} = AL_{i} - RandomSampling_{i} \tag{2}
$$

Here, $AL_{i}$ and $RandomSampling_{i}$ represent the accuracy obtained by the respective query strategies after $i$ queries.

Furthermore, to assess the relative robustness of the AL algorithms, we measure the decrease in their effectiveness caused by the influx of decision strategies, i.e., the drop in leverage across the learning phase ($\nabla_{i}$):

$$
{\nabla }_{i} = {\left\lbrack {L}_{i}\right\rbrack }_{\text{Ground}} - {\left\lbrack {L}_{i}\right\rbrack }_{\text{Decision Strategy}} \tag{3}
$$

In Eqn. 3, ${\left\lbrack {L}_{i}\right\rbrack }_{\text{Decision Strategy}}$ represents the active learning algorithm's leverage after obtaining labels produced by the given decision strategy for $i$ queries.

The leverage and drop-in-leverage curves plotted from the above represent both the absolute effectiveness of the AL algorithms and the drop in their efficacy when the fast-and-frugal heuristics provide significantly incorrect labels (see Appendix).
<table><tr><td>Absolute Leverage</td><td>Entropy(%)</td><td>MVL(%)</td><td>Proposed(%)</td><td>Conventional(%)</td><td>Improvement(%)</td></tr><tr><td>Biodegradable-FFT</td><td>1.59</td><td>1.53</td><td>2.68</td><td>-1.5</td><td>68.29</td></tr><tr><td>Biodegradable-Tallying</td><td>2.26</td><td>1.97</td><td>2.66</td><td>1.63</td><td>17.62</td></tr><tr><td>Car Rate-FFT</td><td>1.11</td><td>0.51</td><td>1.24</td><td>0.34</td><td>12.15</td></tr><tr><td>Car Rate-Tallying</td><td>0.52</td><td>0.36</td><td>1</td><td>-1.76</td><td>92.03</td></tr><tr><td>Cleveland Heart-FFT</td><td>0.44</td><td>0.49</td><td>0.53</td><td>0.46</td><td>9.41</td></tr><tr><td>Cleveland Heart-Tallying</td><td>1.76</td><td>1.66</td><td>1.46</td><td>1.75</td><td>-16.82</td></tr><tr><td>Wine-Tallying</td><td>3.74</td><td>3.62</td><td>3.58</td><td>3.76</td><td>-4.91</td></tr><tr><td>Drop in Leverage</td><td>Entropy(%)</td><td>MVL(%)</td><td>Proposed(%)</td><td>Conventional(%)</td><td>Decrease in drop(%)</td></tr><tr><td>Biodegradable-FFT</td><td>8.64</td><td>8.13</td><td>5.62</td><td>10.52</td><td>30.9</td></tr><tr><td>Biodegradable-Tallying</td><td>7.08</td><td>7.02</td><td>4.71</td><td>6.86</td><td>31.34</td></tr><tr><td>Car Rate-FFT</td><td>0.11</td><td>0.13</td><td>-0.45</td><td>0.51</td><td>524.92</td></tr><tr><td>Car Rate-Tallying</td><td>0.69</td><td>0.29</td><td>-0.17</td><td>2.46</td><td>159.01</td></tr><tr><td>Cleveland Heart-FFT</td><td>0.5</td><td>0.58</td><td>0.83</td><td>0.2</td><td>-317.12</td></tr><tr><td>Cleveland Heart-Tallying</td><td>-0.043</td><td>0.037</td><td>0.979</td><td>-0.356</td><td>-374.89</td></tr><tr><td>Wine-Tallying</td><td>2.59</td><td>2.69</td><td>2.09</td><td>2.56</td><td>18.5</td></tr></table>
Figure 1: Top: average leverage of AL algorithms; Bottom: average drop in leverage of AL algorithms
Figure 1 shows the average absolute leverage and drop in leverage experienced by the AL algorithms through the learning phase (until convergence), specifically in scenarios where the fast-and-frugal heuristics (FFT and tallying) provided significantly incorrect labels.
The proposed density-based learning performs better than the other algorithms, showing a median improvement in leverage of 11% and a median decrease in drop of 31% compared to the best-performing algorithm. The notable reduction in drop in leverage demonstrates the robustness of the proposed algorithm. When heuristics such as Franklin's rule gave labels mostly close to the ground truth, the algorithm was not found to perform best. As a result, the proposed approach is recommended only for situations where heuristics provide considerably biased labels.

## 4 Conclusion

The primary motive of this work was to model the oracle with human heuristics, enabling a study of their impact on AL algorithms. This was achieved with three human heuristics (fast-and-frugal tree (FFT), tallying, and Franklin's rule), four AL algorithms (entropy-based, multi-view learning, density-based, and the novel density-based algorithm), and five datasets. The performance of AL algorithms decreased considerably when human heuristics provided significantly incorrect labels, necessitating a novel algorithm robust to the biased labels produced by decision strategies. Our empirically supported hypothesis, that heuristics tend to provide correct labels when queried with data points whose attribute values are farther from the mean, led to the novel density-based AL algorithm.

The proposed density-based learning algorithm improved absolute leverage by 11% compared to the best-performing algorithm. Moreover, its median decrease in drop in leverage was 31%, making it the preferred algorithm. Its good performance is attributable to its ability to query instances that are likely to receive accurate labels and to its lower dependence on the labels obtained. On the other hand, when the bias in the labels provided by the human heuristics was minimal, the proposed algorithm was not found useful, which restricts its usage in such scenarios.

In sum, the variation in the relative performance of active learning algorithms with respect to decision strategies advocates benchmarking the algorithms in the existing AL literature using the decision-strategy framework proposed in this study. Moreover, the findings strongly motivate a new generation of AL algorithms that account for the oracle's uncertainty when providing labels, one of which has been contributed in this study.
## References
[1] Amos Tversky and Daniel Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124-1131, 1974.

[2] Deepesh Agarwal, Obdulia Covarrubias-Zambrano, Stefan Bossmann, and Balasubramaniam Natarajan. Impacts of behavioral biases on active learning strategies. In 2022 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), pages 256-261, 2022.

[3] Burr Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009.

[4] Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 614-622. Association for Computing Machinery, 2008.

[5] Perry Groot, Adriana Birlutiu, and Tom Heskes. Learning from multiple annotators with Gaussian processes. In Artificial Neural Networks and Machine Learning - ICANN 2011, pages 159-164. Springer, Berlin, Heidelberg, 2011.

[6] J. Du and C. Ling. Active learning with human-like noisy oracle. In 2010 IEEE International Conference on Data Mining, pages 797-802, 2010.

[7] Herbert A. Simon. Invariants of human behavior. Annual Review of Psychology, 41(1):1-20, 1990.

[8] Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group. Simple Heuristics That Make Us Smart. Oxford University Press, 1999.

[9] Jeroen Eggermont, Joost Kok, and Walter Kosters. Genetic programming for data classification: Partitioning the search space. Volume 2, pages 1001-1005, 2004.

[10] N. Hooda, S. Bawa, and P. Rana. Fraudulent firm classification: A case study of an external audit. Applied Artificial Intelligence, 32:48-64, 2018.

[11] B. Zupan, M. Bohanec, I. Bratko, and J. Demsar. Machine learning by function decomposition. In International Conference on Machine Learning, 1997.

[12] S. Aeberhard, D. Coomans, and O. de Vel. Improvements to the classification performance of RDA. Journal of Chemometrics, 7, 1993.

[13] K. Mansouri, T. Ringsted, D. Ballabio, R. Todeschini, and V. Consonni. Quantitative structure-activity relationship models for ready biodegradability of chemicals. Journal of Chemical Information and Modeling, 53:867-878, 2013.
## A Appendix

Figure 2: Leverage curves of active learning algorithms when the labels provided by the oracle carry significant bias as a result of FFT

Figure 3: Leverage curves of active learning algorithms when the labels provided by the oracle carry significant bias as a result of the tallying heuristic

Figure 4: Drop in leverage across the learning phase of active learning algorithms for FFT

Figure 5: Drop in leverage across the learning phase of active learning algorithms for the tallying heuristic
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,190 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
§ ACTIVE LEARNING WITH HUMAN HEURISTICS: AN ALGORITHM ROBUST TO LABELLING BIAS
|
| 2 |
+
|
| 3 |
+
Sriram Ravichandran ${}^{ + }$ , Nandan Sudarsanam ${}^{ + }$ , Konstantinos Katsikopoulos ${}^{ \dagger }$ , Balaraman Ravindran ${}^{ + }$ ${}^{ + }$ Indian Institute of Technology, Madras
|
| 4 |
+
|
| 5 |
+
${}^{ \dagger }$ University of Southampton, UK
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Active learning(AL) enables prediction algorithms to achieve better performance with fewer data points by adaptively querying an oracle for output labels. In many instances, the oracle is a human. According to behavioral sciences, humans provide labels by employing decision heuristics which tend to offer biased labels.AL algorithms trained with such labels could in turn provide incorrect predictions, which could make the decisions made by such models unfair. How would modelling the oracle with such heuristics affect the performance of AL algorithms? We investigate three human heuristics (fast-and frugal tree, tallying, and franklin's rule) combined with four active learning algorithms (entropy-based, multi-view learning, density-based, and novel density-based) and apply them to five datasets from domains such as health, wealth and sustainability. A first novel finding is that if a heuristic leads to significant labelling bias, the performance of active learning algorithms significantly drops, sometimes below random sampling. Thus, it is key to design active learning algorithms robust to labeling bias. Our second contribution is a novel density-based algorithm that achieves an overall median improvement of ${31}\%$ over current algorithms when the oracle has a significant labelling bias. In sum, designing and benchmarking active learning algorithms should incorporate the modelling of human decision heuristics.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
AI is being used in various significant applications that affect human lives. These include recruitment, consumer lending, healthcare, criminal justice, etc. Building prediction models is crucial for automating such decision processes because it enables decisions based on data rather than relying solely on intuition or past experiences. There is an increasing need for training such models in conditions where obtaining labels is significantly more expensive than their attributes. Moreover, due to the sensitivity of the applications the trained models is also be expected to be fair i.e. devoid of bias that exists when a human makes a decision. Active learning (AL) algorithms have the leverage of choosing the data points to be queried at each instance, thereby reaching the benchmark accuracy with fewer queries (labeled instances). A typical active learner starts with a small number of labeled instances and queries for one or more unlabeled instances, then selects additional points to query based on the labels obtained from previous queries. Labeling the queried instances can be done in multiple ways and is therefore typically assumed to be an unbiased random response. For example, building a model to predict the durability of a car involves crash-testing cars to obtain labels that are highly expensive, making this a suitable application for AL algorithms. However, a substantial subset of AL-based querying involves a human annotator. For instance, A review of AL papers searched with the keyword "Active Learning" that were published during 2021-2023 across prominent venues such as Nature Communications, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Journal of Machine Learning Research and Advances in Neural Information Processing Systems shows that about ${63}\%$ of the works involved the usage of human-annotated labels. 
Traditional literature in behavioral economics [1] highlights the deviation of human decision-making from rationality, which it terms bias. Providing labels for AL should be no exception.
However, annotator bias and its implications for trained models are acknowledged in only a small subset of the AL literature. For instance, Deepesh et al. [2] observed that behavioral biases in the oracle decrease the classification accuracy of the resulting prediction models by at least 20%. Moreover, Burr et al. [3], in their extensive survey of AL, noted that the reliability of labels provided by humans might be compromised by difficulties in comprehending the instances, which impacts the quality of the labels obtained.
This understanding resulted in the development of a class of AL algorithms that accounts for the biases present in the human oracle.
Works belonging to this class [4, 5] treated human bias as random or uniformly distributed error while proposing novel algorithms. J. Du et al. [6], on the other hand, proposed an algorithm with an exploration-and-exploitation approach that relabels data points that could be wrongly labeled. The oracle there was modeled on the assumption that the probability of obtaining biased labels depends on the maximum posterior probability of an instance computed with the ground-truth labels.
In all the above works, the oracle was assumed to offer incorrect responses at random, or the label bias was synthetically injected based on certain assumptions. However, Herbert Simon, the founder of bounded rationality, argued that people must rely on approximations for the majority of tasks, including simple decision heuristics [7]. Additionally, Gigerenzer et al. [8] identified several heuristics that the human mind tends to follow under bounded rationality, as it is incapable of superhuman reasoning.
The above works support the view that a human oracle is likely to use decision strategies during annotation, and that label bias tends to result from the heuristic used. It is therefore essential to study the effect of decision strategies on active learning models, since a model trained under an unfair human decision strategy could itself make unfair decisions.
This study contributes to the active learning literature by showing that the decision strategy used by the oracle significantly affects the relative performance of AL algorithms, which necessitates benchmarking AL algorithms against human decision strategies. We also propose a novel AL algorithm that initiates a new class of algorithms designed around human decision strategies.
The rest of the paper is structured as follows. Section 2 lays out the methodology, including the datasets, AL algorithms, and human heuristics used in the study. Section 3 discusses the results, and Section 4 concludes.
§ 2 METHODOLOGY
Typically, the active learner sequentially chooses an instance ${x}_{i}$ from the pool of unlabeled instances $X$ based on its query strategy and queries the human for its label. The labels thus obtained $\left( {y}_{i}\right)$ retrain the active learner after every query. In our study, we mimic the functionality of the human oracle using fast-and-frugal heuristics, namely the fast-and-frugal tree (FFT) and tallying, as well as a conventional heuristic (Franklin's rule). These decision strategies ensure that the biased labels provided by the oracle are not random but depend on the instance being queried (see Section 2.1).
To perform the experiments, we chose five labeled datasets from various domains: health (Cleveland heart disease [9]), wealth (fraudulent-firm prediction [10]), automobiles (car condition prediction [11]), food science (wine prediction [12]), and sustainability (biodegradability [13]).
For our study, we considered the pool-based sampling scenario, where the pool of instances is ranked based on the query strategy. The active learner then selects the best query based on these ranks. The AL algorithms considered were entropy sampling, multi-view learning with co-testing, conventional density-based learning, and the novel density-based learning (see Section 2.2).
§ 2.1 WHEN IS A FAST AND FRUGAL DECISION STRATEGY LIKELY TO PROVIDE AN UNBIASED LABEL?
To get a rational understanding of the situations in which the fast-and-frugal heuristics (FFT and tallying) provide incorrect labels, we postulate the following hypothesis:
Hypothesis 1. Data points whose attribute values are farther away from their corresponding mean attribute values are less prone to obtaining biased labels from a human oracle/heuristic.
The above hypothesis is based on the intuition that fast-and-frugal heuristics always compare attribute values to constant reference values. In FFT and tallying, this constant tends to be the mean attribute value.
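To make the comparison-to-the-mean mechanism concrete, here is a minimal sketch of a tallying heuristic. It assumes binary labels, and the mean values and cue directions below are hypothetical, not the study's fitted cues:

```python
def tallying(x, means, directions):
    """Tallying heuristic: every cue gets one equal, unweighted vote.

    A cue votes for class 1 when the attribute lies on the 'positive'
    side of its mean (direction +1 means above-the-mean favors class 1).
    The label is the majority vote.
    """
    votes = sum(1 for xi, m, d in zip(x, means, directions)
                if d * (xi - m) > 0)
    return 1 if votes > len(x) / 2 else 0

means = [5.0, 3.0]      # hypothetical mean attribute values
directions = [+1, -1]   # hypothetical cue directions

# Far from the means: the votes are unanimous and stable.
print(tallying([9.0, 0.5], means, directions))  # -> 1
# Near the means: a tiny perturbation flips the label,
# which is the intuition behind Hypothesis 1.
print(tallying([5.1, 2.9], means, directions))  # -> 1
print(tallying([4.9, 3.1], means, directions))  # -> 0
```

An FFT differs in that the cues are checked in a fixed order with early exits, but each comparison is likewise against a fixed reference such as the attribute mean, so the same near-the-mean instability applies.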
This hypothesis can be illustrated with the task of classifying a car's condition based on its usage period (let the average usage be five years). Intuitively, the human oracle would find it easier to classify cars that are 2 or 10 years old than a car that has been used for five years, i.e., a car whose attribute value lies at the mean.
To test the hypothesis, the fast-and-frugal heuristics were employed to produce predictions on the datasets under consideration. Table 1 and Table 2 show that the prediction accuracy of the heuristics was significantly higher for data points farther from the mean (FM) than for data points close to the mean (CM), supporting our claim.
| Sr. No. | Dataset | FM (%) | CM (%) | Overall (%) |
|---------|---------|--------|--------|-------------|
| 1 | Biodegradable dataset | 78.74 | 73.33 | 77.02 |
| 2 | Car Prediction | 80.61 | 68.56 | 71.29 |
| 3 | Cleveland Heart Disease dataset | 95.45 | 83.83 | 84.72 |
| 4 | Audit dataset | 96.5 | 94.4 | 95.7 |
| 5 | Wine dataset | 100 | 86.7 | 87.07 |

Table 1: Accuracy of predictions made by the tallying heuristic
| Sr. No. | Dataset | FM (%) | CM (%) | Overall (%) |
|---------|---------|--------|--------|-------------|
| 1 | Biodegradable dataset | 76.44 | 57.33 | 70.97 |
| 2 | Car Prediction | 94.1 | 88.5 | 92.59 |
| 3 | Cleveland Heart Disease dataset | 81.25 | 80.07 | 81.25 |
| 4 | Audit dataset | 96.5 | 91.1 | 94.42 |
| 5 | Wine dataset | 100 | 97.1 | 97.75 |

Table 2: Accuracy of predictions made by the FFT heuristic
§ 2.2 NOVEL DENSITY-BASED LEARNING
The experimentally supported hypothesis (Section 2.1) motivates a query strategy that queries data points whose attribute values are farther from their mean attribute values. It must also be noted that such instances tend to have lower cosine information density values. Existing algorithms, such as conventional density-based learning, rank points by metrics directly proportional to entropy and cosine similarity, which makes them prefer querying data points that are more susceptible to obtaining biased labels. Hence, we consider a modified metric:
$$
H\left( x\right) = - \frac{\mathop{\sum }\limits_{k}{p}_{k}\log \left( {p}_{k}\right) }{\left( \frac{1}{U}\right) \mathop{\sum }\limits_{{u = 1}}^{U}\operatorname{sim}\left( {x,{x}^{u}}\right) } \tag{1}
$$
As the formula indicates, data points are ranked by the entropy measure divided by their average similarity to the other unlabeled data points in the pool, $\frac{1}{U}\mathop{\sum }_{u = 1}^{U}\operatorname{sim}\left( {x,{x}^{u}}\right)$, where $U$ is the size of the unlabeled pool after every query. The metric motivates the learner to query data points with high entropy and low information density, i.e., data points that are useful and tend to receive accurate labels.
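A minimal sketch of ranking by Eq. (1) follows; the two-class probability vectors, the toy pool, and the helper names are illustrative assumptions, not the study's implementation:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two attribute vectors."""
    dot = sum(p * q for p, q in zip(a, b))
    return dot / (math.sqrt(sum(p * p for p in a)) *
                  math.sqrt(sum(q * q for q in b)))

def modified_score(probs, x, pool):
    """Eq. (1): entropy of predicted class probabilities divided by the
    mean cosine similarity of x to the unlabeled pool.  High entropy and
    low density (far from the crowded region near the mean) both raise
    the score."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    density = sum(cosine_sim(x, xu) for xu in pool) / len(pool)
    return entropy / density

pool = [[1.0, 0.2], [0.9, 0.1], [0.2, 1.0]]   # toy unlabeled pool
x = [1.0, 0.15]
# At equal density, the higher-entropy point is ranked first:
print(modified_score([0.5, 0.5], x, pool) >
      modified_score([0.9, 0.1], x, pool))    # -> True
```

The learner then queries the unlabeled point with the highest score; dividing by density (rather than multiplying, as in conventional density-based learning) is what steers queries away from the mean.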
§ 3 RESULTS AND DISCUSSION
The AL models were trained on the labels produced by the human heuristics. This was repeated for every combination of heuristic, AL algorithm, and dataset, and the trained model's accuracy was measured after each query. Conventional studies evaluate AL algorithms using learning curves (accuracy vs. data points queried). However, when labels come from biased decision strategies, a decrease in accuracy is expected for both AL algorithms and random sampling, so evaluating the algorithms on absolute accuracy is uninformative in this study.
However, the accuracy of AL algorithms relative to random sampling helps in understanding the comparative effectiveness of active learning algorithms in the presence of decision strategies. Hence we introduce a metric, 'Leverage' $\left( {L}_{i}\right)$, to visualize this:
$$
{L}_{i} = A{L}_{i} - \mathrm{RandomSampling}_{i} \tag{2}
$$
Here, $A{L}_{i}$ and $\mathrm{RandomSampling}_{i}$ denote the accuracy obtained by the respective query strategies after $i$ queries.
Furthermore, to assess the relative robustness of the AL algorithms, we measure the decrease in their effectiveness caused by the decision strategies, i.e., the drop in leverage across the learning phase $\left( {\nabla }_{i}\right)$:
$$
{\nabla }_{i} = {\left\lbrack {L}_{i}\right\rbrack }_{\text{ Ground }} - {\left\lbrack {L}_{i}\right\rbrack }_{\text{ Decision Strategy }} \tag{3}
$$
In Eq. 3, ${\left\lbrack {L}_{i}\right\rbrack }_{\text{ Decision Strategy }}$ denotes the active learning algorithm's leverage after obtaining labels from the given decision strategy for $i$ queries.
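Eqs. (2) and (3) reduce to element-wise differences over per-query accuracy traces; the sketch below uses hypothetical accuracy values purely for illustration:

```python
def leverage(al_acc, rs_acc):
    """Eq. (2): per-query leverage of an AL algorithm over random sampling."""
    return [a - r for a, r in zip(al_acc, rs_acc)]

def drop_in_leverage(lev_ground, lev_strategy):
    """Eq. (3): leverage lost when labels come from a biased decision
    strategy instead of the ground truth."""
    return [g - s for g, s in zip(lev_ground, lev_strategy)]

# Hypothetical accuracy traces over three queries:
al_ground, rs_ground = [0.70, 0.75, 0.80], [0.68, 0.70, 0.74]
al_biased, rs_biased = [0.66, 0.69, 0.72], [0.65, 0.68, 0.71]

L_ground = leverage(al_ground, rs_ground)       # ~[0.02, 0.05, 0.06]
L_strategy = leverage(al_biased, rs_biased)     # ~[0.01, 0.01, 0.01]
print(drop_in_leverage(L_ground, L_strategy))   # ~[0.01, 0.04, 0.05]
```

A positive drop means the decision strategy eroded the algorithm's advantage over random sampling; a robust algorithm keeps this drop small.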
The leverage curve and drop-in-leverage curve plotted from the above represent both the absolute effectiveness of the AL algorithms and the drop in their efficacy when the fast-and-frugal heuristics provide significantly incorrect labels (see Appendix).
| Absolute Leverage | Entropy (%) | MVL (%) | Proposed (%) | Conventional (%) | Improvement (%) |
|---|---|---|---|---|---|
| Biodegradable-FFT | 1.59 | 1.53 | 2.68 | -1.5 | 68.29 |
| Biodegradable-Tallying | 2.26 | 1.97 | 2.66 | 1.63 | 17.62 |
| Car Rate-FFT | 1.11 | 0.51 | 1.24 | 0.34 | 12.15 |
| Car Rate-Tallying | 0.52 | 0.36 | 1 | -1.76 | 92.03 |
| Cleveland Heart-FFT | 0.44 | 0.49 | 0.53 | 0.46 | 9.41 |
| Cleveland Heart-Tallying | 1.76 | 1.66 | 1.46 | 1.75 | -16.82 |
| Wine-Tallying | 3.74 | 3.62 | 3.58 | 3.76 | -4.91 |

| Drop in Leverage | Entropy (%) | MVL (%) | Proposed (%) | Conventional (%) | Decrease in drop (%) |
|---|---|---|---|---|---|
| Biodegradable-FFT | 8.64 | 8.13 | 5.62 | 10.52 | 30.9 |
| Biodegradable-Tallying | 7.08 | 7.02 | 4.71 | 6.86 | 31.34 |
| Car Rate-FFT | 0.11 | 0.13 | -0.45 | 0.51 | 524.92 |
| Car Rate-Tallying | 0.69 | 0.29 | -0.17 | 2.46 | 159.01 |
| Cleveland Heart-FFT | 0.5 | 0.58 | 0.83 | 0.2 | -317.12 |
| Cleveland Heart-Tallying | -0.043 | 0.037 | 0.979 | -0.356 | -374.89 |
| Wine-Tallying | 2.59 | 2.69 | 2.09 | 2.56 | 18.5 |

Figure 1: Top: average leverage of the AL algorithms. Bottom: average drop in leverage of the AL algorithms.
Figure 1 reports the average absolute leverage and the average drop in leverage experienced by the AL algorithms through the learning phase (until convergence), specifically in the scenarios where the fast-and-frugal heuristics (FFT and tallying) provided significantly incorrect labels.
The proposed density-based learning performs better than the other algorithms, showing a median improvement of 11% in leverage and a median decrease in the drop of 31% compared to the best-performing alternative. The notable reduction in the drop in leverage demonstrates the robustness of the proposed algorithm. When heuristics such as Franklin's rule gave mostly close-to-ground-truth labels, the algorithm did not perform the best. The proposed approach is therefore intended only for situations where the heuristics provide considerably biased labels.
§ 4 CONCLUSION
The primary motive of this work was to model the oracle with human heuristics, enabling a study of their impact on AL algorithms. This was achieved with three human heuristics (fast-and-frugal tree (FFT), tallying, and Franklin's rule), four AL algorithms (entropy-based, multi-view learning, density-based, and the novel density-based algorithm), and five datasets. The performance of the AL algorithms decreased considerably when the human heuristics provided significantly incorrect labels, which necessitated a novel algorithm robust to the biased labels produced by decision strategies. Our empirically supported hypothesis, that heuristics tend to provide correct labels when queried with data points whose attribute values are farther from the mean, led to a novel density-based AL algorithm.
The proposed density-based learning algorithm improved absolute leverage by 11% compared to the best-performing alternative. Moreover, the median decrease in the drop in leverage was 31%, making the proposed algorithm the preferred one. Its good performance is attributable to its ability to query instances that are likely to receive accurate labels and to its lesser dependence on the labels obtained. On the other hand, when the biased labels provided by the human heuristics were minimal, the proposed algorithm was not found useful, restricting its usage in such scenarios.
In sum, the variation in the relative performance of active learning algorithms with respect to decision strategies advocates benchmarking the algorithms in the existing AL literature using the decision-strategy framework proposed in this study. Moreover, the findings strongly motivate a new generation of AL algorithms that consider the uncertainty of the oracle when providing labels, one of which has been contributed here.
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_md/Initial_manuscript.md
# Guiding Offline Reinforcement Learning Using a Safety Expert
Richa Verma ${}^{ + }$ , Kartik Bharadwaj ${}^{ + }$ , Harshad Khadilkar ${}^{ \dagger }$ , and Balaraman Ravindran*

${}^{ + }$ TCS Research
${}^{ \dagger }$ Robert Bosch Centre for Data Science and Artificial Intelligence
*Department of Computer Science and Engineering, Indian Institute of Technology Madras
## Abstract
Offline reinforcement learning is used to train policies in situations where it is expensive or infeasible to access the environment during training. An agent trained under such a scenario does not get corrective feedback once the learned policy starts diverging and may fall prey to the overestimation bias commonly seen in this setting. This increases the chances of the agent choosing unsafe/risky actions, especially in states with sparse to no representation in the training dataset. In this paper, we propose to leverage a safety expert to discourage the offline RL agent from choosing unsafe actions in under-represented states in the dataset. The proposed framework in this paper transfers the safety expert's knowledge in an offline setting for states with high uncertainty to prevent catastrophic failures from occurring in safety-critical domains. We use a simple but effective approach to quantify the state uncertainty based on how frequently they appear in a training dataset. In states with high uncertainty, the offline RL agent mimics the safety expert while maximizing the long-term reward. We modify TD3+BC, an existing offline RL algorithm, as a part of the proposed approach. We demonstrate empirically that our approach performs better than $\mathrm{{TD}}3 + \mathrm{{BC}}$ on some control tasks and comparably on others across two sets of benchmark datasets while reducing the chance of taking unsafe actions in sparse regions of the state space.
## 1 Introduction
Reinforcement Learning (RL) has seen advancement and achieved great success in solving complex tasks with high dimensional state and action spaces, including games [1, 2, 3, 4], and some tasks from robotics [5]. An RL agent trained in an online setting takes an action $a$ in state $s$ and interacts with the environment to observe a reward $r$ . It then updates its policy based on the observed reward. However, it may be risky or costly to interact with the environment repeatedly in real-world situations. It may be infeasible in the cases where a high quality simulator is not available or cannot be built.
In offline RL (also known as batch RL), the agent is not allowed to interact with the environment. It has access to a fixed-sized dataset collected by any arbitrary policy which may or may not be known [6]. Real-world applications can benefit from this setting because access to the environment may be limited, challenging or not possible. Such applications which are already deployed can also generate datasets to learn from. Offline RL enables the use of such logged datasets for learning and can even allow us to leverage an expert in the form of a human operator, rule-based systems or a policy trained with a similar objective. Some approaches such as [7] show that dataset collected by an expert during learning in an online setting can also be used, however, using the expert itself to facilitate learning in offline RL eliminates the need for data collection and is helpful in settings where data privacy needs to be enforced.
Overestimation of the values of out-of-distribution actions is a fundamental challenge in offline RL. This also applies to certain actions which can be deemed as "unsafe" in safety-critical applications such as autonomous driving, robotic learning, healthcare, etc. For robotic learning, the conditions for a safety breach during an episode are easier to define (e.g. recording how many times the robot has fallen, or a grasped object has been dropped). The challenge in this domain is to learn an optimal policy for a task while minimizing the frequency of above-mentioned instances of catastrophic failures during training.
In this paper, we study how to utilize a safety expert in an offline RL setting for states with high uncertainty to minimize failures during training. This safety expert isn't necessarily optimal and can be learned or defined by a rule-based system for each task without reference to the underlying task reward. We use a simple but effective approach to quantify the uncertainty of the states based on how frequent the visited states are in a given training dataset. This information is used to conservatively modify the critic target, therefore propagating it to the value function estimation. We believe that incorporating a safety expert in the form of a pre-trained teacher policy along with quantifying state uncertainty can be effective in this setting. It reduces the chances of the offline RL agent engaging in potentially risky exploratory behavior, thus enabling robotic learning from massive datasets. We show that it can allow the agent to learn safe behavior without explicitly defining constraints on actions, which can be hard to do in an offline setting.
Our goal is to selectively utilize a safe teacher policy to reduce the chances of risky/unsafe behavior encountered during the deployment of a learned offline RL policy while still maintaining high performance. Our main contributions are summarized below:
- We propose a framework called Guided Offline RL (GORL) that trains an agent to learn efficiently from an offline dataset while leveraging a safety expert in regions of high uncertainty.
- We evaluate our approach on a set of datasets from the D4RL benchmark of continuous control tasks [8] and show that the proposed framework either performs better or comparably to TD3+BC [9], a popular SOTA offline RL algorithm on most of the tasks.
## 2 Related Work
Offline RL. The existing offline RL methods mainly use some approach that keeps the learned policy close to the data collection policy, and there are various ways of implementing this. One way is to estimate the behavior policy and then learn a parameterized policy [10, 11]. Another line of work uses divergence regularization [12, 13, 14] to keep the two policies close to each other. Other works suggest a weighted version of behavior cloning to encourage choosing actions with high advantage [15, 16], or use uncertainty as the weight of a state-action pair before making updates [17]. Some methods incorporate the notion of safety and modify the set of actions that can be chosen based on their counts [18]. A promising direction of the literature uses pessimism and implements divergence regularization as part of value estimation [19, 20]. The goal of this work differs from these works, which focus on developing RL algorithms specifically for an offline setting: we study knowledge transfer from a safety expert to an agent learning in the offline setting.
Reinforcement Learning from Demonstration. RL literature has many examples of learning from teacher policies or demonstrations in an online setting, especially in hard exploration environments. There are policy distillation techniques [21, 22] for training student networks such that their outputs (e.g., Q-values) are similar to those of teacher networks. Learning from demonstrations is another promising area. A replay buffer in an off-policy RL setting can be used to hold teacher demonstrations, which can be combined with samples generated by a student agent during training. DQfD [23] and Ape-X DQfD [24] are some of the examples of such methods for a discrete setting while methods suggested by [25, 26] work for continuous control tasks.
## 3 Proposed Approach
In offline RL, the problem of extrapolation error [10] is prevalent which means that the agent is unable to evaluate out-of-distribution actions properly. Our focus is on designing a framework to discourage the agent from selecting unsafe OOD actions while trying to learn an optimal policy from the dataset. We present such a framework that requires minimal modifications to a pre-existing offline RL algorithm. Our framework builds on top of TD3+BC [9]. We modify the critic target term to include state uncertainty. We also include a regularization term to push the offline policy towards the safety expert in states with poor confidence. The safety expert can be defined by any rule-based system or a pre-trained policy. We denote the agent’s confidence w.r.t a state as $\operatorname{conf}\left( s\right) \in \left\lbrack {0,1}\right\rbrack$ , where the confidence is computed by using SimHash algorithm [27]. SimHash uses Locality-Sensitive Hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH preserves the distances among data points, such that those with similar hashes are close to each other. We use SimHash which is a computationally efficient LSH technique and it measures the similarity of the states contained in the training dataset $\mathcal{D}$ by angular distance. Here, we can use any technique which can transform the high-dimensional continuous state space into discrete bins based on their closeness. The following equation shows how hash codes are computed:
$$
\mu \left( s\right) = \operatorname{sgn}\left( {{Ag}\left( s\right) }\right) \in {\{ -1,1\} }^{k}. \tag{1}
$$
where $A \in {\mathcal{R}}^{k \times d}$ is a matrix with each entry drawn i.i.d. from a standard Gaussian and $g : S \rightarrow {\mathcal{R}}^{d}$ is a preprocessing function. The dimension $k$ of the binary codes controls the granularity of the state-space discretization. This algorithm was originally used as an exploration method, but we use it to bin the states contained in the dataset $\mathcal{D}$ into hash codes of size $k$. We use $k = 50$ for all the tasks after careful experimentation across multiple tasks. Before training an agent, we populate the hash table by recording the counts of states mapped to each hash code and normalize the state counts using max-min normalization. During training, we query the hash table to retrieve these counts and use the values as $\operatorname{conf}\left( s\right)$ in the following critic target update equation:
$$
Q\left( {s, a}\right) = r + \gamma * \mathop{\max }\limits_{{a}^{\prime }}Q\left( {{s}^{\prime },{a}^{\prime }}\right) \underset{\text{uncertainty weighted learning from the safety expert}}{\underbrace{-\left( {1 - \operatorname{conf}\left( s\right) }\right) * {\left( a - {\pi }_{T}\left( s\right) \right) }^{2}}}. \tag{2}
$$
where ${\pi }_{T}\left( s\right)$ is a teacher policy used as the safety expert. It is trained in an online setting using a continuous control algorithm known as TD3 [28]. More details on training the policy ${\pi }_{T}\left( s\right)$ to be safe are provided in the next section.
Note that the value of $\operatorname{conf}\left( s\right)$ is lower for under-represented states in the given dataset $\mathcal{D}$, and the lower the confidence, the stronger the push towards the safety expert ${\pi }_{T}\left( s\right)$. The modified update equation also reduces the values of all the $(s, a)$ pairs in the dataset except those whose action matches the one suggested by the safety expert. This discourages the agent from picking unsafe actions in regions of high uncertainty. This completes the description of our framework, Guided Offline RL (GORL), which involves a few small but effective modifications to TD3+BC.
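The pieces above — SimHash binning (Eq. 1), max-min-normalized counts as $\operatorname{conf}(s)$, and the modified target (Eq. 2) — can be sketched as follows. This is a minimal illustration under stated assumptions: toy dimensions ($k = 8$, $d = 3$ instead of the paper's $k = 50$), an identity preprocessing $g$, a synthetic state set, and a clamp of the normalized count to $[0, 1]$.

```python
import collections
import random

def simhash(state, A):
    """Eq. (1): sign pattern of A g(s) with identity g, as a bit tuple."""
    return tuple(1 if sum(a * x for a, x in zip(row, state)) >= 0 else 0
                 for row in A)

random.seed(0)
k, d = 8, 3                                    # k sets bin granularity
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(k)]
states = [[random.gauss(0, 1) for _ in range(d)] for _ in range(500)]

# Populate the hash table once, before training.
counts = collections.Counter(simhash(s, A) for s in states)
max_c, min_c = max(counts.values()), min(counts.values())

def conf(state):
    """Max-min-normalized visit count of the state's hash bin, in [0, 1]."""
    c = counts.get(simhash(state, A), 0)
    if max_c == min_c:
        return 1.0
    return min(1.0, max(0.0, (c - min_c) / (max_c - min_c)))

def critic_target(r, gamma, max_q_next, a, teacher_a, state):
    """Eq. (2): the pull toward the safety expert grows as conf shrinks."""
    penalty = (1 - conf(state)) * sum((ai - ti) ** 2
                                      for ai, ti in zip(a, teacher_a))
    return r + gamma * max_q_next - penalty

# When the chosen action matches the teacher's, no penalty is applied:
print(critic_target(1.0, 0.99, 0.0, [0.5], [0.5], states[0]))  # -> 1.0
```

In a full implementation the table lookup would replace the standard TD3+BC critic target inside the training loop; everything else in the algorithm is unchanged.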
## 4 Experiments
We evaluate our proposed approach on the D4RL benchmark of OpenAI Gym MuJoCo tasks [8]. We use the TD3+BC algorithm trained on the MuJoCo tasks (Hopper-v2 and Walker2d-v2) as the baseline. We train a teacher policy ${\pi }_{T}$, to be used as the safety expert, using TD3 for 1M online steps. For the policy to be safe, we add a step penalty of the form `ctrl_cost_weight * sum(action^2)`, which is simply a cost penalizing the agent for taking actions that are too large. We observe that this discourages the agent from applying high torques to the various joints of a MuJoCo robot and hence prevents jittery moves. We choose `ctrl_cost_weight` as 0.1 and 0.01 for Hopper-v2 and Walker2d-v2, respectively, after tuning. These environments have built-in rewards which penalise the agent when it falls or when the height of the top (along the z-axis) becomes too high or too low. Further, we train the offline RL agent on various environment-dataset pairs using the safety expert policy ${\pi }_{T}$ as part of the framework described in the previous section.
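The step penalty reduces to a one-line reward adjustment; the wrapper function below is an illustrative assumption, not the authors' code (MuJoCo environments expose this cost through their built-in reward terms):

```python
def penalized_reward(reward, action, ctrl_cost_weight=0.1):
    """Safety-shaped reward: r - ctrl_cost_weight * sum(action^2).

    ctrl_cost_weight = 0.1 (Hopper-v2) or 0.01 (Walker2d-v2),
    per the tuning reported in the text.
    """
    return reward - ctrl_cost_weight * sum(a * a for a in action)

# Large torques are penalized much more heavily than mild ones:
print(penalized_reward(1.0, [1.0, 1.0]))   # -> 0.8
print(penalized_reward(1.0, [0.1, 0.1]))   # close to 1.0
```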
<table><tr><td>Dataset</td><td>Environment</td><td>TD3+BC</td><td>Guided Offline RL</td></tr><tr><td rowspan="2">Random</td><td>Hopper-v2</td><td>${8.53} \pm {0.23}$</td><td>${6.03} \pm {2.03}$</td></tr><tr><td>Walker2d-v2</td><td>${0.95} \pm {0.33}$</td><td>${2.83} \pm {3.57}$</td></tr><tr><td rowspan="2">Medium</td><td>Hopper-v2</td><td>${60.12} \pm {1.35}$</td><td>${57.77} \pm {3.07}$</td></tr><tr><td>Walker2d-v2</td><td>${86.17} \pm {0.3}$</td><td>${83.78} \pm {2.91}$</td></tr><tr><td rowspan="2">Medium-Replay</td><td>Hopper-v2</td><td>${56.71} \pm {19.16}$</td><td>${85.61} \pm {5.14}$</td></tr><tr><td>Walker2d-v2</td><td>${73.56} \pm {11.19}$</td><td>${84.67} \pm {0.77}$</td></tr><tr><td rowspan="2">Medium-Expert</td><td>Hopper-v2</td><td>${95.16} \pm {9.85}$</td><td>${106.11} \pm {5.92}$</td></tr><tr><td>Walker2d-v2</td><td>${110.26} \pm {0.65}$</td><td>${110.6} \pm {0.21}$</td></tr><tr><td rowspan="2">Expert</td><td>Hopper-v2</td><td>${110.97} \pm {1.45}$</td><td>${111.62} \pm {0.37}$</td></tr><tr><td>Walker2d-v2</td><td>${110.12} \pm {0.47}$</td><td>${109.91} \pm {0.13}$</td></tr><tr><td/><td>Total</td><td>${712.55} \pm {44.98}$</td><td>${758.93} \pm {24.12}$</td></tr></table>
Table 1: Average normalized score using the D4RL -v2 datasets. The highest performing scores are highlighted. $\pm$ captures the standard deviation over seeds. TD3+BC algorithm is re-run using author-provided implementation. The results are after averaging over the final 10 evaluations and 3 seeds. No additional hyperparameter tuning was performed. TD3+BC and Guided TD3+BC achieve comparable performance.

Figure 1: Percent difference in performance of Guided Offline RL w.r.t. the baseline TD3+BC algorithm. Here, h = Hopper-v2, w = Walker2d-v2, r = random, m = medium, mr = medium-replay, me = medium-expert, e = expert. The proposed approach works better in reducing the number of falls in the Walker2d environment than in Hopper (left). The reduction in the cumulative sum of actions is more pronounced for Hopper (right).
We use the author-provided implementations for both TD3 and TD3+BC, with the same base hyperparameters as the respective authors, and train the baseline and the offline RL agent with three random seeds. In all experiments, the offline agent and the baseline agent run 10 evaluation episodes after every 5000 offline training steps until they reach 1M training steps. We use the normalized score from D4RL for evaluation and average the scores over all seeds for each environment. We report the final performance results in Table 1. In Figure 1, we report the percentage difference between Guided Offline RL and TD3+BC w.r.t. the total number of times the agent falls or its height leaves the safe range (Walker2d-v2) during all the evaluation episodes occurring within 1M training steps. We also report the percentage difference between the cumulative sums of actions across all evaluation steps for each dataset-environment pair.

Our results show that including a safe teacher policy can help reduce the number of falls an agent experiences. The approach also keeps the sum of actions low in most cases compared to the baseline. The proposed approach is more effective at reducing the number of falls in the Walker2d environment than in Hopper (Figure 1, left); it works better for the dataset-environment pairs in which the data collection policy is less similar to the safe teacher policy. The reduction in the cumulative sum of actions is more pronounced for Hopper (Figure 1, right). We believe that if ${\pi }_{T}$ were trained using a constrained method that keeps the sum of actions low, the results could improve further. Our approach only marginally increases training time compared to the baseline. All runtime experiments were run on a single GeForce GTX 1080 Ti GPU and an Intel(R) Xeon(R) CPU E5-2640 v4.

## 5 Conclusion

In this paper, we present the Guided Offline RL framework, which relies on state uncertainty estimation and safety-expert knowledge to discourage an offline RL agent from choosing risky/unsafe actions. We have shown that an existing offline RL algorithm, TD3+BC, can be easily modified to implement the proposed framework. Our experiments show that our approach performs comparably or better on multiple MuJoCo tasks from the D4RL benchmark while trying to minimize unsafe incidents during evaluation. We believe that our framework can be used as an add-on to achieve better results while adhering to safety. As future work, we plan to consider other forms of the safety expert, such as human interventions and heuristics, and evaluate them on a diverse set of safety tasks. We also plan to study the effectiveness of the framework when coupled with other SOTA offline RL algorithms.

## References

[1] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

[2] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

[3] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

[4] Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 3389-3396. IEEE, 2017.

[5] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334-1373, 2016.

[6] Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement Learning, pages 45-73. Springer, 2012.

[7] Seunghyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, and Jinwoo Shin. Addressing distribution shift in online reinforcement learning with offline datasets. 2020.

[8] Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.

[9] Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. Advances in Neural Information Processing Systems, 34:20132-20145, 2021.

[10] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pages 2052-2062. PMLR, 2019.

[11] Seyed Kamyar Seyed Ghasemipour, Richard Zemel, and Shixiang Gu. A divergence minimization perspective on imitation learning methods. In Conference on Robot Learning, pages 1259-1277. PMLR, 2020.

[12] Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456, 2019.

[13] Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy Q-learning via bootstrapping error reduction. Advances in Neural Information Processing Systems, 32, 2019.

[14] Scott Fujimoto, Edoardo Conti, Mohammad Ghavamzadeh, and Joelle Pineau. Benchmarking batch deep reinforcement learning algorithms. arXiv preprint arXiv:1910.01708, 2019.

[15] Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019.

[16] Ashvin Nair, Abhishek Gupta, Murtaza Dalal, and Sergey Levine. AWAC: Accelerating online reinforcement learning with offline datasets. arXiv preprint arXiv:2006.09359, 2020.

[17] Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua Susskind, Jian Zhang, Ruslan Salakhutdinov, and Hanlin Goh. Uncertainty weighted actor-critic for offline reinforcement learning. arXiv preprint arXiv:2105.08140, 2021.

[18] Romain Laroche, Paul Trichelair, and Remi Tachet Des Combes. Safe policy improvement with baseline bootstrapping. In International Conference on Machine Learning, pages 3652-3661. PMLR, 2019.

[19] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179-1191, 2020.

[20] Jacob Buckman, Carles Gelada, and Marc G Bellemare. The importance of pessimism in fixed-dataset policy optimization. arXiv preprint arXiv:2009.06799, 2020.

[21] Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015.

[22] Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.

[23] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. Deep Q-learning from demonstrations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

[24] Tobias Pohlen, Bilal Piot, Todd Hester, Mohammad Gheshlaghi Azar, Dan Horgan, David Budden, Gabriel Barth-Maron, Hado Van Hasselt, John Quan, Mel Večerík, et al. Observe and look further: Achieving consistent performance on Atari. arXiv preprint arXiv:1805.11593, 2018.

[25] Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Overcoming exploration in reinforcement learning with demonstrations. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 6292-6299. IEEE, 2018.

[26] Mel Vecerik, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, and Martin Riedmiller. Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. arXiv preprint arXiv:1707.08817, 2017.

[27] Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schulman, Filip DeTurck, and Pieter Abbeel. #Exploration: A study of count-based exploration for deep reinforcement learning. Advances in Neural Information Processing Systems, 30, 2017.

[28] Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pages 1587-1596. PMLR, 2018.
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,108 @@
§ GUIDING OFFLINE REINFORCEMENT LEARNING USING A SAFETY EXPERT

Richa Verma$^+$, Kartik Bharadwaj$^+$, Harshad Khadilkar$^\dagger$, and Balaraman Ravindran$^*$

$^+$TCS Research

$^\dagger$Robert Bosch Centre for Data Science and Artificial Intelligence

$^*$Department of Computer Science and Engineering, Indian Institute of Technology Madras

§ ABSTRACT

Offline reinforcement learning is used to train policies in situations where it is expensive or infeasible to access the environment during training. An agent trained under such a scenario does not get corrective feedback once the learned policy starts diverging and may fall prey to the overestimation bias commonly seen in this setting. This increases the chances of the agent choosing unsafe/risky actions, especially in states with sparse to no representation in the training dataset. In this paper, we propose to leverage a safety expert to discourage the offline RL agent from choosing unsafe actions in under-represented states in the dataset. The proposed framework transfers the safety expert's knowledge in an offline setting for states with high uncertainty to prevent catastrophic failures from occurring in safety-critical domains. We use a simple but effective approach to quantify state uncertainty based on how frequently states appear in the training dataset. In states with high uncertainty, the offline RL agent mimics the safety expert while maximizing the long-term reward. We modify TD3+BC, an existing offline RL algorithm, as a part of the proposed approach. We demonstrate empirically that our approach performs better than TD3+BC on some control tasks and comparably on others across two sets of benchmark datasets while reducing the chance of taking unsafe actions in sparse regions of the state space.

§ 1 INTRODUCTION

Reinforcement Learning (RL) has seen rapid advancement and achieved great success in solving complex tasks with high-dimensional state and action spaces, including games [1, 2, 3, 4] and some tasks from robotics [5]. An RL agent trained in an online setting takes an action $a$ in state $s$ and interacts with the environment to observe a reward $r$. It then updates its policy based on the observed reward. However, it may be risky or costly to interact with the environment repeatedly in real-world situations, and it may be infeasible when a high-quality simulator is not available or cannot be built.

In offline RL (also known as batch RL), the agent is not allowed to interact with the environment. Instead, it has access to a fixed-size dataset collected by an arbitrary policy which may or may not be known [6]. Real-world applications can benefit from this setting because access to the environment may be limited, challenging, or impossible, and applications that are already deployed can generate datasets to learn from. Offline RL enables the use of such logged datasets for learning and can even allow us to leverage an expert in the form of a human operator, a rule-based system, or a policy trained with a similar objective. Some approaches, such as [7], show that a dataset collected by an expert during online learning can also be used; however, using the expert itself to facilitate learning in offline RL eliminates the need for data collection and is helpful in settings where data privacy needs to be enforced.

Overestimation of the values of out-of-distribution actions is a fundamental challenge in offline RL. This also applies to actions that can be deemed "unsafe" in safety-critical applications such as autonomous driving, robotic learning, and healthcare. For robotic learning, the conditions for a safety breach during an episode are easy to define (e.g., recording how many times the robot has fallen or a grasped object has been dropped). The challenge in this domain is to learn an optimal policy for a task while minimizing the frequency of such catastrophic failures during training.

In this paper, we study how to utilize a safety expert in an offline RL setting for states with high uncertainty, in order to minimize failures during training. This safety expert is not necessarily optimal and can be learned or defined by a rule-based system for each task without reference to the underlying task reward. We use a simple but effective approach to quantify the uncertainty of states based on how frequently the visited states appear in a given training dataset. This information is used to conservatively modify the critic target, thereby propagating it to the value function estimate. We believe that incorporating a safety expert in the form of a pre-trained teacher policy, together with quantifying state uncertainty, can be effective in this setting. It reduces the chances of the offline RL agent engaging in potentially risky exploratory behavior, thus enabling robotic learning from massive datasets. We show that it allows the agent to learn safe behavior without explicitly defining constraints on actions, which can be hard to do in an offline setting.

Our goal is to selectively utilize a safe teacher policy to reduce the chances of risky/unsafe behavior encountered during the deployment of a learned offline RL policy while still maintaining high performance. Our main contributions are summarized below:

* We propose a framework called Guided Offline RL (GORL) that trains an agent to learn efficiently from an offline dataset while leveraging a safety expert in regions of high uncertainty.

* We evaluate our approach on a set of datasets from the D4RL benchmark of continuous control tasks [8] and show that the proposed framework performs better than or comparably to TD3+BC [9], a popular SOTA offline RL algorithm, on most of the tasks.

§ 2 RELATED WORK

Offline RL. Existing offline RL methods mainly rely on some mechanism that keeps the learned policy close to the data collection policy. There are various ways of implementing this. One is to estimate the behavior policy and then learn a parameterized policy [10, 11]. Another line of work uses divergence regularization [12, 13, 14] to keep the two policies close to each other. Other works suggest using a weighted version of behavior cloning to encourage choosing actions with high advantage [15, 16], or using uncertainty as a weight for state-action pairs before making updates [17]. Some methods incorporate the notion of safety and restrict the set of actions that can be chosen based on their counts [18]. A promising direction in the literature uses pessimism and implements divergence regularization as part of value estimation [19, 20]. The goal of this work is different from these works, which focus on developing RL algorithms specifically for the offline setting: we study knowledge transfer from a safety expert to an agent learning in the offline setting.

Reinforcement Learning from Demonstration. The RL literature has many examples of learning from teacher policies or demonstrations in an online setting, especially in hard-exploration environments. Policy distillation techniques [21, 22] train student networks such that their outputs (e.g., Q-values) are similar to those of teacher networks. Learning from demonstrations is another promising area: a replay buffer in an off-policy RL setting can hold teacher demonstrations, which are combined with samples generated by the student agent during training. DQfD [23] and Ape-X DQfD [24] are examples of such methods for the discrete setting, while the methods of [25, 26] work for continuous control tasks.

§ 3 PROPOSED APPROACH

In offline RL, the problem of extrapolation error [10] is prevalent, which means that the agent is unable to evaluate out-of-distribution (OOD) actions properly. Our focus is on designing a framework that discourages the agent from selecting unsafe OOD actions while trying to learn an optimal policy from the dataset. We present such a framework that requires minimal modifications to a pre-existing offline RL algorithm. Our framework builds on top of TD3+BC [9]. We modify the critic target term to include state uncertainty, and we include a regularization term that pushes the offline policy towards the safety expert in states with poor confidence. The safety expert can be defined by any rule-based system or a pre-trained policy. We denote the agent's confidence w.r.t. a state as $\operatorname{conf}(s) \in [0, 1]$, where the confidence is computed using the SimHash algorithm [27]. SimHash uses Locality-Sensitive Hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH preserves the distances among data points, so that points with similar hashes are close to each other. SimHash is a computationally efficient LSH technique that measures the similarity of the states contained in the training dataset $\mathcal{D}$ by angular distance. Here, any technique that transforms the high-dimensional continuous state space into discrete bins based on closeness could be used. The following equation shows how hash codes are computed:

$$
\mu(s) = \operatorname{sgn}\left( A g(s) \right) \in \{-1, 1\}^{k}. \tag{1}
$$

where $A \in \mathbb{R}^{k \times d}$ is a matrix with each entry drawn i.i.d. from a standard Gaussian and $g : S \rightarrow \mathbb{R}^{d}$ is a preprocessing function. The dimension of the binary codes is $k$, and it controls the granularity of the state space discretization. This algorithm was originally used as an exploration method, but we use it to bin the states contained in the dataset $\mathcal{D}$ into hash codes of size $k$. We use $k = 50$ for all tasks after careful experimentation with multiple tasks. Before training an agent, we populate the hash table by recording the counts of states mapped to each hash code. We normalize the state count values using max-min normalization. During training, we query the hash table to retrieve these counts and use the values as $\operatorname{conf}(s)$ in the critic target update equation below:

$$
Q(s, a) = r + \gamma \max_{a'} Q(s', a') \underbrace{- \left(1 - \operatorname{conf}(s)\right) \left(a - \pi_{T}(s)\right)^{2}}_{\text{uncertainty-weighted learning from the safety expert}}. \tag{2}
$$

where $\pi_{T}(s)$ is a teacher policy used as the safety expert. It is trained in an online setting using TD3 [28], a continuous control algorithm. More details on training the policy $\pi_{T}(s)$ to be safe are provided in the next section.

Note that the value of $\operatorname{conf}(s)$ is lower for under-represented states in the given dataset $\mathcal{D}$, and the lower the confidence, the stronger the push towards the safety expert $\pi_{T}(s)$. The modified update equation also reduces the values of all $(s, a)$ pairs in the dataset except the ones with the action suggested by the safety expert. This discourages the agent from picking unsafe actions in regions of high uncertainty. This completes the description of our framework, Guided Offline RL (GORL), which involves making a few small, but effective, modifications to TD3+BC.
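As a concrete illustration, the state binning of Eq. (1), the count-based confidence, and the guided critic target of Eq. (2) can be sketched in a few lines. This is a minimal sketch, not the authors' implementation: the function names are ours, $g$ is taken to be the identity, and ties at zero in the sign function are mapped to $+1$.

```python
import random

def make_simhash(dim, k=50, seed=0):
    """Return a function mapping a continuous state (a sequence of floats)
    to a k-bit SimHash code, as in Eq. (1).  The projection matrix A is
    drawn i.i.d. from a standard Gaussian; g is the identity here."""
    rng = random.Random(seed)
    A = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(k)]
    def phi(s):
        # sgn(A g(s)) in {-1, 1}^k (ties at 0 mapped to +1)
        return tuple(1 if sum(a_i * s_i for a_i, s_i in zip(row, s)) >= 0 else -1
                     for row in A)
    return phi

def state_confidence(dataset_states, phi):
    """Bin every dataset state into its hash code, count occurrences,
    and max-min normalize the counts into conf(s) in [0, 1]."""
    counts = {}
    for s in dataset_states:
        h = phi(s)
        counts[h] = counts.get(h, 0) + 1
    lo, hi = min(counts.values()), max(counts.values())
    span = max(hi - lo, 1)
    return {h: (c - lo) / span for h, c in counts.items()}

def guided_target(r, gamma, q_next_max, a, a_teacher, conf):
    """Scalar critic target of Eq. (2): the lower conf(s), the harder the
    target is pushed toward the safety expert's action."""
    penalty = sum((x - y) ** 2 for x, y in zip(a, a_teacher))
    return r + gamma * q_next_max - (1.0 - conf) * penalty
```

In practice the hash table is populated once over $\mathcal{D}$ before training, so the per-update cost is a single dictionary lookup plus one matrix-vector product.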

§ 4 EXPERIMENTS

We evaluate our proposed approach on the D4RL benchmark of OpenAI Gym MuJoCo tasks [8]. We use the TD3+BC algorithm trained on the MuJoCo tasks (Hopper-v2 and Walker2d-v2) as the baseline. We train a teacher policy $\pi_{T}$ to be used as the safety expert using TD3 for 1M online steps. For the policy to be safe, we add a step penalty of the form ctrl_cost_weight * sum(action^2), a cost that penalizes the agent for taking actions that are too large. We observe that doing so discourages the agent from applying large torques to the joints of a MuJoCo robot and hence prevents it from making jittery moves. We choose ctrl_cost_weight as 0.1 and 0.01 for Hopper-v2 and Walker2d-v2, respectively, after tuning. These environments have built-in rewards which penalize the agent when it falls or when the height of the top (along the z-axis) becomes too high or too low. Further, we train the offline RL agent on various environment-dataset pairs using the safety expert policy $\pi_{T}$ as part of the framework described in the previous section.
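The step penalty used while training the online TD3 teacher is just the quadratic control cost above. A minimal sketch (function names are ours; the weights 0.1 and 0.01 are the tuned values reported in the text):

```python
def step_penalty(action, ctrl_cost_weight):
    """ctrl_cost_weight * sum(action^2): the control cost subtracted
    from the environment reward while training the TD3 teacher online,
    discouraging large joint torques."""
    return ctrl_cost_weight * sum(a * a for a in action)

def shaped_reward(env_reward, action, ctrl_cost_weight=0.1):
    """Reward actually optimized by the safety expert pi_T.
    0.1 is the tuned weight for Hopper-v2; 0.01 for Walker2d-v2."""
    return env_reward - step_penalty(action, ctrl_cost_weight)
```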

| Dataset | Environment | TD3+BC | Guided Offline RL |
|---|---|---|---|
| Random | Hopper-v2 | 8.53 ± 0.23 | 6.03 ± 2.03 |
| Random | Walker2d-v2 | 0.95 ± 0.33 | 2.83 ± 3.57 |
| Medium | Hopper-v2 | 60.12 ± 1.35 | 57.77 ± 3.07 |
| Medium | Walker2d-v2 | 86.17 ± 0.3 | 83.78 ± 2.91 |
| Medium-Replay | Hopper-v2 | 56.71 ± 19.16 | 85.61 ± 5.14 |
| Medium-Replay | Walker2d-v2 | 73.56 ± 11.19 | 84.67 ± 0.77 |
| Medium-Expert | Hopper-v2 | 95.16 ± 9.85 | 106.11 ± 5.92 |
| Medium-Expert | Walker2d-v2 | 110.26 ± 0.65 | 110.6 ± 0.21 |
| Expert | Hopper-v2 | 110.97 ± 1.45 | 111.62 ± 0.37 |
| Expert | Walker2d-v2 | 110.12 ± 0.47 | 109.91 ± 0.13 |
|  | Total | 712.55 ± 44.98 | 758.93 ± 24.12 |
Table 1: Average normalized scores on the D4RL -v2 datasets. The highest scores are highlighted. ± captures the standard deviation over seeds. The TD3+BC algorithm is re-run using the author-provided implementation. Results are averaged over the final 10 evaluations and 3 seeds. No additional hyperparameter tuning was performed. TD3+BC and Guided TD3+BC achieve comparable performance.

Figure 1: Percent difference in performance of Guided Offline RL w.r.t. the baseline TD3+BC algorithm. Here, h = Hopper-v2, w = Walker2d-v2, r = random, m = medium, mr = medium-replay, me = medium-expert, e = expert. The proposed approach is more effective at reducing the number of falls in the Walker2d environment than in Hopper (left). The reduction in the cumulative sum of actions is more pronounced for Hopper (right).

We use the author-provided implementations of both TD3 and TD3+BC, with the same base hyperparameters as the respective authors, and train the baseline and the offline RL agent with three random seeds. In all experiments, the offline agent and the baseline agent run 10 evaluation episodes after every 5000 offline training steps until they reach 1M training steps. We use the normalized score from D4RL for evaluation and average the scores over all seeds for each environment. We report the final performance results in Table 1. In Figure 1, we report the percentage difference between Guided Offline RL and TD3+BC w.r.t. the total number of times the agent falls or its height leaves the safe range (Walker2d-v2) during all evaluation episodes occurring within 1M training steps. We also report the percentage difference between the cumulative sums of actions across all evaluation steps for each dataset-environment pair.

Our results show that including a safe teacher policy can help reduce the number of falls an agent experiences. The approach also keeps the sum of actions low in most cases compared to the baseline. The proposed approach is more effective at reducing the number of falls in the Walker2d environment than in Hopper (Figure 1, left); it works better for the dataset-environment pairs in which the data collection policy is less similar to the safe teacher policy. The reduction in the cumulative sum of actions is more pronounced for Hopper (Figure 1, right). We believe that if $\pi_{T}$ were trained using a constrained method that keeps the sum of actions low, the results could improve further. Our approach only marginally increases training time compared to the baseline. All runtime experiments were run on a single GeForce GTX 1080 Ti GPU and an Intel(R) Xeon(R) CPU E5-2640 v4.

§ 5 CONCLUSION

In this paper, we present the Guided Offline RL framework, which relies on state uncertainty estimation and safety-expert knowledge to discourage an offline RL agent from choosing risky/unsafe actions. We have shown that an existing offline RL algorithm, TD3+BC, can be easily modified to implement the proposed framework. Our experiments show that our approach performs comparably or better on multiple MuJoCo tasks from the D4RL benchmark while trying to minimize unsafe incidents during evaluation. We believe that our framework can be used as an add-on to achieve better results while adhering to safety. As future work, we plan to consider other forms of the safety expert, such as human interventions and heuristics, and evaluate them on a diverse set of safety tasks. We also plan to study the effectiveness of the framework when coupled with other SOTA offline RL algorithms.
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,141 @@
# Transcribing Educational Videos Using Whisper

A preliminary study on using AI for transcribing educational videos

Ashwin Rao

University of Helsinki

## Abstract

Videos are increasingly being used for e-learning, and transcripts are vital to enhance the learning experience. The costs and delays of generating transcripts can be alleviated by automatic speech recognition (ASR) systems. In this article, we evaluate the transcripts generated by Whisper for 25 educational videos and identify some open avenues of research in leveraging ASR for transcribing educational videos.

## 1 Introduction

During the last decade, we have witnessed an increase in the volume of video content disseminated over the Internet. The pandemic further accelerated this trend as people started to consume a wide range of videos from their homes [1]. Along with lectures, we have also witnessed a rise in the number of conferences and talks that are recorded and uploaded to streaming sites. These videos augment the material taught in classrooms and are increasingly being leveraged for educational purposes [2].

Educational videos, like entertainment videos, are consumed on a combination of personal devices such as laptops, tablets, and smartphones. The capabilities of the audio systems on these devices vary significantly, and a given audio file may sound different on each of them [3]. Words in an audio segment recorded by amateurs may sound clear and comprehensible on one device, while the same segment may be unintelligible on another. Furthermore, educational videos may include the voices of people from a wide range of ethnicities, and the speakers may not be native speakers of the language in which they are speaking. Clearly, the audio quality of educational videos is vital, and addressing acoustic issues can drastically improve the quality of the material [4]. However, the video and audio quality of educational videos might not be optimal for all devices because the videos may not be professionally created, edited, and processed.

Audio transcripts and subtitles help alleviate issues in audio quality and enable viewers to receive a correct interpretation of the content. For instance, Gernsbacher has shown that captions are particularly beneficial for persons watching videos in their non-native language [5]. Although generating transcripts has been non-trivial, recent advances in speech-to-text generation have shown promising results in transcribing audio content. In the context of videos, transcripts are different from subtitles: a transcript is a textual copy of the words someone has said in the video, while subtitles are textual versions of the dialogues in the video [6]. Subtitles can be open or closed: open subtitles are embedded in the video frames, while closed subtitles are stored separately and can be overlaid on the video frames or displayed on a second screen. A variant of closed subtitles is closed captions, which contain additional descriptions of the audio-video content, such as sounds made by animals. At times, a transcript can also include such additional descriptions; examples include laughter by students, audience clapping, etc. A key difference between a transcript and subtitles is that a transcript does not contain the time stamps at which the words were said.
|
| 18 |
+
|
| 19 |
+
---
```
WEBVTT
Kind: captions
Language: en

00:00:00.040 --> 00:00:02.460
The following content is
provided under a Creative

00:00:02.460 --> 00:00:03.870
Commons license.
```
---
Figure 1: Example Closed Caption. The metadata (the file format and language) is followed by the time stamps during which the text can be shown.
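A closed caption file such as the one in Figure 1 is straightforward to parse programmatically. The sketch below is our own minimal illustration (the function name is ours, and it ignores cue identifiers and cue settings that full WEBVTT files may contain):

```python
import re

# Matches cue timing lines such as "00:00:00.040 --> 00:00:02.460".
CUE_TIMING = re.compile(r"(\d{2}:\d{2}:\d{2}\.\d{3}) --> (\d{2}:\d{2}:\d{2}\.\d{3})")

def parse_webvtt(text):
    """Return a list of (start, end, caption) tuples from a WEBVTT document.

    Minimal sketch: skips the header block and joins multi-line captions.
    """
    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        for k, line in enumerate(lines):
            match = CUE_TIMING.search(line)
            if match:
                caption = " ".join(lines[k + 1:])
                cues.append((match.group(1), match.group(2), caption))
                break
    return cues
```

Running this over the example in Figure 1 yields two cues, one per timing block, with the caption text attached to its timestamps.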
In this article, we present a preliminary evaluation of the quality of transcripts generated by whisper [7]. We focus on the speech-to-text conversion, and not on the time stamps at which the words were spoken. Although there is a wide range of tools and models for generating transcripts, we restrict our attention to whisper. Our goal is to get an understanding of using whisper for academic videos and to identify open avenues of research in leveraging ASR for transcribing academic videos.
## 2 Methodology
Tools used and data processing pipeline. For our analysis, we first collect a set of 25 YouTube videos whose closed captions are not automatically generated; YouTube indicates whether the captions are auto-generated or provided by the content creator. For each video, we use yt-dlp to download the best available audio file and the captions (as transcripts); we download the best audio because YouTube keeps multiple versions of the same video and dynamically adapts the audio/video quality to the network connectivity. The downloaded captions are the baseline for our evaluation. We then use whisper [7] to generate the transcripts, running it on our cluster powered by NVidia V100 GPUs [8]. Finally, we use jiwer to compare the generated transcripts with the baseline transcripts downloaded from YouTube. We summarize the tools used in Table 1.
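The pipeline can be sketched as two helpers that assemble the download and transcription command lines. The helper names and the default flags are our illustration of the steps described above, not the authors' exact invocations:

```python
def download_cmd(url, output_template="%(id)s.%(ext)s"):
    # Fetch the best audio stream plus the creator-provided captions.
    return ["yt-dlp", "-f", "bestaudio",
            "--write-subs", "--sub-langs", "en",
            "-o", output_template, url]

def transcribe_cmd(audio_path, model="base.en"):
    # Transcribe the downloaded audio with a chosen whisper model.
    return ["whisper", audio_path, "--model", model]
```

Either command list can then be handed to, e.g., `subprocess.run()`.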
Automatic Transcript Generation (Speech to Text). In this article, we restrict ourselves to whisper [7]. Whisper offers multiple models which can be used to transcribe the audio files, and in our evaluation we restrict ourselves to the following five models (number of parameters in parentheses), of which large-v2 is a multilingual model: tiny.en (39 M), base.en (74 M), small.en (244 M), medium.en (769 M), and large-v2 (1550 M). We acknowledge that there is a wide range of open-source tools and models including Kaldi [9], Flashlight [10], and Paddlespeech [11]. We plan to analyze the efficiency of these tools in subsequent work.
<table><tr><td>Tool</td><td>Version</td><td>Usage</td></tr><tr><td>whisper</td><td>20230314</td><td>Speech to text conversion.</td></tr><tr><td>jiwer</td><td>3.0.1</td><td>Compare the text in two files.</td></tr><tr><td>yt-dlp</td><td>2023.03.04</td><td>Download audio files and transcripts.</td></tr><tr><td>opusinfo</td><td>0.1.10</td><td>Extract metadata from audio files.</td></tr></table>
Table 1: Software Tools
Metrics for evaluating transcript quality. The Word Error Rate (WER) is a commonly used metric for comparing texts [12], and it is computed as ${WER} = \frac{S + D + I}{N}$, where $S$ is the number of substitutions, $D$ the number of deletions, $I$ the number of insertions, $H$ the number of hits (correct words), and $N = H + S + D$ the number of words in the reference (baseline) against which the hypothesis (the output of the transcribing tool) is evaluated. In contrast, the Match Error Rate (MER) is the probability of an incorrect match [12], and is given by ${MER} = \frac{S + D + I}{H + S + D + I}$. The Word Information Lost (WIL) is an approximation of the Relative Information Lost (RIL) and is computed using the hits, substitutions, insertions, and deletions [12]; the RIL measures the statistical dependence between the reference and the hypothesis and is calculated using the Shannon entropy. Our goal is not to compare the metrics; instead, we rely on the WER, MER, and WIL to evaluate the performance of the transcription. We use jiwer to compute the WER, MER, and WIL. It is known that jiwer can compute a higher WER when the text is not normalized [7], and the WER depends on the normalization technique used. For this preliminary analysis we avoid custom normalizations, and we plan to explore the impact of normalization in a subsequent study.
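The counts behind these metrics come from a word-level Levenshtein alignment between the reference and the hypothesis. A self-contained sketch follows (function names are ours; jiwer's own alignment may break ties differently, which can shift individual counts without changing the edit distance):

```python
def align_counts(ref, hyp):
    """Word-level alignment; returns (hits, substitutions, deletions, insertions)."""
    r, h = ref.split(), hyp.split()
    n, m = len(r), len(h)
    # d[i][j] = minimum edit distance between r[:i] and h[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j - 1] + (r[i - 1] != h[j - 1]),  # hit or substitution
                          d[i - 1][j] + 1,                            # deletion
                          d[i][j - 1] + 1)                            # insertion
    # Backtrack through the table to count each operation.
    H = S = D = I = 0
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (r[i - 1] != h[j - 1]):
            if r[i - 1] == h[j - 1]:
                H += 1
            else:
                S += 1
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            D, i = D + 1, i - 1
        else:
            I, j = I + 1, j - 1
    return H, S, D, I

def wer(H, S, D, I):
    return (S + D + I) / (H + S + D)        # N = H + S + D reference words

def mer(H, S, D, I):
    return (S + D + I) / (H + S + D + I)

def wil(H, S, D, I):
    return 1 - H * H / ((H + S + D) * (H + S + I))
```

For example, aligning the reference "the cat sat on the mat" with the hypothesis "the cat sat mat" gives four hits and two deletions, so WER, MER, and WIL all equal 1/3 for this toy pair.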
Dataset Description. Of the 25 YouTube videos, 15 were lectures on MIT OCW. The remaining 10 included 5 talks at Google, one talk at MIT OCW, and four Turing Award lectures.${}^{1}$ In Figure 2, we present the playback duration (in seconds) of each of the videos and the average bitrate of the audio file. The quality of the audio file is important because it can affect the quality of the transcripts being generated, and we observe that the downloaded audio files have an average bitrate of at least 92 kbps. Note that the audio files were encoded in the opus audio format, which supports variable bitrate and is optimized for speech [13]. We also observe that the audio files were sampled at 48 kHz. Whisper internally converts the audio to 16 kHz, and we believe that the audio files in our dataset have a sufficiently high sampling rate from which audio segments can be resampled at 16 kHz.

Figure 2: Average Bitrate of the Audio Files.
---
${}^{1}$ Availability: The details of these videos are available with our code and datasets at: https://version.helsinki.fi/transcribe-educational-videos/preliminary-study-dai2023/
---
## 3 Evaluation
In Figure 3, we present the time required to transcribe a video for a given playback time (see Figure 3(a)) and for a given word count in our baseline transcripts (see Figure 3(b)). We observe that the time to transcribe increases linearly with the playback duration and the word count, and that the larger models require more time. We present these results to give a ballpark estimate of what to expect, and we are aware that these times depend heavily on the audio content and on the computational capabilities of our cluster.

Figure 3: Transcription Time. The transcription time, i.e., the time to generate transcripts, increases linearly with the playback duration and word count. The larger models require more time than their smaller counterparts.
In Figure 4, we plot the fraction of the playback time that a given model took to transcribe the video. We observe that even the large-v2 model was able to complete the transcription in less than 25% of the time required to play back the video. For the videos in our dataset, and while running whisper on our servers, we observe that the tiny, base, and small models took less than 10% of the playback time to transcribe the video, and the larger models took less than 25% of the playback time. A typical human transcriber would require at least the playback time just to listen to the whole audio. In Table 2, we present a snippet of the transcripts generated using Whisper. In this snippet, the speaker asks an audience member to repeat what they said because of audio issues. We see that the original transcript marks the conversation as inaudible while whisper tries to guess what is said, and the results vary with the model size. Clearly, the speed-up from using smaller models is meaningless if the quality of the transcription is poor.

Figure 4: Relative transcription time. If the playback time is 50 s and it takes 10 s to generate the transcript, then the fraction of playback time is 10/50 = 0.2, i.e., generating the transcript required 20% of the playback time. (Range = min, max)
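The per-file fraction in the caption, and the min/max range shown as error bars, can be computed as follows (the function name is ours, and the example durations are illustrative, except for the 10 s / 50 s pair taken from the caption):

```python
def relative_times(playback_s, transcribe_s):
    """Fraction of playback time needed to generate each transcript,
    summarized as (min, max, mean) across the files."""
    fracs = [t / p for p, t in zip(playback_s, transcribe_s)]
    return min(fracs), max(fracs), sum(fracs) / len(fracs)

# The caption's example: 10 s to transcribe a 50 s recording gives 0.2.
lo, hi, mean = relative_times([50, 100, 200], [10, 15, 24])
```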

Table 2: Example transcript with high WER. The above transcripts are for a segment at time offset 1h:02m:58s of the following video: https://www.youtube.com/watch?v=3LVeEjsn8Ts#t=62m58s.

Figure 5: Transcript quality. The error bars represent the min and max across the files in the dataset.
In Figure 5, we present the WER, MER, and WIL when using the various models. Across all the metrics, we observe that the WER, MER, and WIL decrease as the number of parameters in the models increases. An exception is the large-v2 model. We believe that this is primarily due to the absence of text normalization [7] and to the audio segments that were marked inaudible in the original transcripts. As shown in Table 2, whisper transcribes the conversation marked inaudible by the human transcriber, and the volume of text generated (excluding punctuation) by the large-v2 model is larger than that of the other models, thus resulting in a higher error rate.
Along with the example provided in Table 2, we also observe a high WER, a high WIL, and a high MER for other videos, as highlighted by the error bars in Figure 5. To better understand this behavior, we present the fraction of hits, substitutions, deletions, and insertions in Figure 6. Across all models, we observe that the hits are above 80% for the majority of videos, and the fraction of hits increases with the number of parameters. However, for some videos, such as the one in Table 2, we observe a large number of substitutions, insertions, and deletions.

Figure 6: Fraction of Hits, Substitutions, Deletions, and Insertions. Error bars represent the min and max across files in our dataset. The cutout zooms into the Deletions and Insertions.
One reason for the high error rates is that whisper does not output inaudible and tries to extract text even from audio which a human transcriber might mark as inaudible. This is further exacerbated by whisper not leveraging the context. For instance, in the example shown in Table 2 the conversation was about domain-specific architecture, and the question being asked was on the same topic, and yet some of the models wrongly predicted the outcome to be Thomas version architecture or Thomas's certificate architecture. These predictions are bullshit ${}^{2}$ because they (and the underlying models) are indifferent to truth. Furthermore, although only two substitutions are needed to turn thomas certificate architecture into domain specific architecture, incorrect predictions like these diminish the usefulness of the generated transcripts. We believe that marking such audio segments as inaudible, or with an equivalent label indicating low confidence in the transcription result, would be more beneficial in these scenarios. This is achievable by tweaking some thresholds in whisper's configuration, and we plan to explore their impact in subsequent work.
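The two-substitution claim above can be checked with a positional word comparison; this is a toy check of our own, not part of the evaluation pipeline:

```python
def word_substitutions(a, b):
    """Count positions where two equal-length phrases differ word by word."""
    wa, wb = a.split(), b.split()
    assert len(wa) == len(wb), "phrases must have the same word count"
    return sum(x != y for x, y in zip(wa, wb))

# "thomas certificate architecture" vs "domain specific architecture" -> 2
n = word_substitutions("thomas certificate architecture",
                       "domain specific architecture")
```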
## 4 Concluding Remarks and Avenues for Future Work
We performed a preliminary analysis of the transcription capabilities of Whisper; however, we cannot draw any strong conclusions: our dataset is heavily biased toward the videos picked by the author, and the results cover the models of only one tool, whisper. Nevertheless, we gained some insights, such as the importance of marking audio segments as inaudible, and how inaudible audio segments affect the quality of transcripts generated by ASR systems.
Some avenues for future work in this area include: a) metrics that account for semantic information, such as the importance of each word, and evaluating the quality of transcripts in end-user studies; b) comparing the transcription results from different models; c) evaluating transcription capabilities for languages other than English, and also for non-native speakers of these languages; d) quantifying the impact of multiple speakers from different ethnic backgrounds in the same video/audio; e) approaches to identify the context of the lecture/talk and leveraging it for better transcriptions; f) quantifying the costs of generating transcripts on different accelerators, and identifying the effectiveness of accelerators for transcript generation on end-user devices; and g) quantifying the quality of subtitles, including the timestamps of the words and the descriptions of sounds generated by the ASR system.
Acknowledgement. The authors wish to thank the Finnish Computing Competence Infrastructure (FCCI) for supporting this project with computational and data storage resources.
---
${}^{2}$ We apologize for the use of profanity, and we rely on the following quote by Harry Frankfurt [14] for describing the term bullshit: "it is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction."
---
## References
[1] Anja Feldmann, Oliver Gasser, Franziska Lichtblau, Enric Pujol, Ingmar Poese, Christoph Dietzel, Daniel Wagner, Matthias Wichtlhuber, Juan Tapiador, Narseo Vallina-Rodriguez, Oliver Hohlfeld, and Georgios Smaragdakis. The lockdown effect: Implications of the covid-19 pandemic on internet traffic. In Proceedings of the ACM Internet Measurement Conference, IMC '20, pages 1-18, New York, NY, USA, 2020. Association for Computing Machinery.
[2] Daniel T Seaton, Sergiy Nesterko, Tommy Mullaney, Justin Reich, Andrew Ho, and Isaac Chuang. Characterizing video use in the catalogue of mitx moocs. Proceedings of the European MOOC Stakeholder Summit, pages 140-146, 2014.
[3] Why we all need subtitles now. https://www.youtube.com/watch?v=VYJtb2YXae8. Accessed 2023-May-01.
[4] Craig H Richardson. Improving audio quality in distance learning applications. 1998.
[5] Morton Ann Gernsbacher. Video captions benefit everyone. Policy insights from the behavioral and brain sciences, 2(1):195-202, 2015.
[6] Subtitles - Wikipedia. https://en.wikipedia.org/wiki/Subtitles. Accessed 2023-May-01.
[7] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision, 2022.
[8] https://wiki.helsinki.fi/display/it4sci/HPC+Environment+User+Guide. Accessed 2023-May-01.
[9] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. The kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011.
[10] Jacob Kahn, Vineel Pratap, Tatiana Likhomanenko, Qiantong Xu, Awni Hannun, Jeff Cai, Paden Tomasello, Ann Lee, Edouard Grave, Gilad Avidov, Benoit Steiner, Vitaliy Liptchinsky, Gabriel Synnaeve, and Ronan Collobert. Flashlight: Enabling innovation in tools for machine learning, 2022.
[11] Hui Zhang, Tian Yuan, Junkun Chen, Xintong Li, Renjie Zheng, Yuxin Huang, Xiaojie Chen, Enlei Gong, Zeyu Chen, Xiaoguang Hu, Dianhai Yu, Yanjun Ma, and Liang Huang. Paddlespeech: An easy-to-use all-in-one speech toolkit. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations. Association for Computational Linguistics, 2022.
[12] Andrew Cameron Morris, Viktoria Maier, and Phil Green. From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition. In Eighth International Conference on Spoken Language Processing, 2004.
[13] Jean-Marc Valin, Koen Vos, and T Terriberry. Rfc 6716: Definition of the opus audio codec, 2012.
[14] Harry G Frankfurt. On bullshit. Princeton University Press, 2005.
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_tex/Initial_manuscript.tex
§ TRANSCRIBING EDUCATIONAL VIDEOS USING WHISPER: A PRELIMINARY STUDY ON USING AI FOR TRANSCRIBING EDUCATIONAL VIDEOS
Ashwin Rao
University of Helsinki
§ ABSTRACT
Videos are increasingly being used for e-learning, and transcripts are vital to enhance the learning experience. The costs and delays of generating transcripts can be alleviated by automatic speech recognition (ASR) systems. In this article, we quantify the quality of the transcripts generated by whisper for 25 educational videos and identify some open avenues of research when leveraging ASR for transcribing educational videos.
§ 1 INTRODUCTION
During the last decade, we have witnessed an increase in the volume of video content disseminated over the Internet. The pandemic further exacerbated this trend as people started to consume a wide range of videos from their homes [1]. Along with lectures, we have also witnessed a rise in the number of conferences and talks that are recorded and uploaded to streaming sites. These videos augment the material taught in classrooms and are increasingly being leveraged for educational purposes [2].
|
| 14 |
+
|
| 15 |
+
Educational videos, like entertainment videos, are consumed in a combination of personal devices such as laptops, tablets, smartphones, and studies. The capabilities of the audio systems on these devices vary significantly, and a given audio file may sound different on each of these devices [3]. Words in an audio segment recorded by amateurs may sound clear and comprehensible on one device, and the same audio segment may be unintelligible on another device. Furthermore, the educational videos might include the voices of people from a wide range of ethnicities, and the speakers might also not be native speakers of the language in which they are speaking. Clearly, the audio quality of educational videos is vital, and addressing acoustic issues can result in drastic improvement in the quality of the material [4]. However, the video and audio quality of educational videos might not be optimal for all devices because they may not be professionally created, edited, and processed.
|
| 16 |
+
|
| 17 |
+
Audio transcripts and subcaptions help alleviate the issues in the audio quality and enable the viewers to receive a correct interpretation of the content. For instance, Gernsbacher has shown that captions are particularly beneficial for persons watching videos in their non-native language [5]. Although generating transcripts has been non-trivial, recent advances in speech-to-text generation have shown promising results in transcribing audio content. In the context of videos, transcripts are different from subtitles: transcripts typically refer to a textual copy of the words someone has said in the video, while subtitles refer to the textual versions of the dialogues in the video [6]. Subtitles can either be open or closed: open subtitles are embedded in the video frames, while closed subtitles are stored separately and can be overlayed over the video frames or can be displayed on a second screen. A variant of closed subtitles is closed captions which contain an additional description of the audio-video content being shown, such as the sound made by animals, etc. At times, a transcript can also include additional description; examples include laughter by students, audience clapping, etc. A key difference between a transcript and the subtitles is that a transcript does not contain the time stamp at which the words in the transcript were said.
|
| 18 |
+
|
| 19 |
+
WEBVTT
|
| 20 |
+
|
| 21 |
+
Kind: captions
|
| 22 |
+
|
| 23 |
+
Language: en
|
| 24 |
+
|
| 25 |
+
${00} : {00} : {00.040}\cdots > {00} : {00} : {02.460}$
|
| 26 |
+
|
| 27 |
+
The following content is
|
| 28 |
+
|
| 29 |
+
provided under a Creative
|
| 30 |
+
|
| 31 |
+
${00} : {00} : {02.460}\; - \rightarrow {00} : {00} : {03.870}$
|
| 32 |
+
|
| 33 |
+
Commons license.
|
| 34 |
+
|
| 35 |
+
Figure 1: Example Closed Caption. The metadata (the file format and language) is followed by the time stamps during which the text can be shown.
|
| 36 |
+
|
| 37 |
+
In this article, we do a preliminary evaluation of the quality of transcripts generated by whisper [7]. We focus on the speech-to-text translation, and not on the time stamp at which the word was spoken. Although there is a wide range of tools and models for generating transcripts, we focus our attention on whisper. Our goal is to get an understanding of using whisper for academic videos and identify open avenues of research in the area of leveraging ASR for transcribing academic videos.
|
| 38 |
+
|
| 39 |
+
§ 2 METHODOLOGY
|
| 40 |
+
|
| 41 |
+
Tools used and data processing pipeline. For our analysis, we first collect a set of 25 YouTube videos that have closed captions that are not automatically generated; YouTube shows if the captions are auto-generated or provided by the content creator. For each video, we use yt-dlp to download the best audio files corresponding to the video and the available captions (as transcripts). The downloaded captions are the baseline for our evaluation. We do this because YouTube keeps multiple versions of the same video, and dynamically adapts to the optimal audio/video quality depending on the network connectivity. We then use whisper [7] to generate the transcripts, and run it in our cluster powered by NVidia V100 GPUs [8]. The generated transcripts are then compared with our baseline transcripts downloaded from YouTube using jiwer. We summarize the tools used in Table 1.
|
| 42 |
+
|
| 43 |
+
Automatic Transcript Generation (Speech to Text). In this article, we restrict ourselves to whisper [7]. Whisper offers multiple models which can be used to process the transcribe the audio files, and in our evaluation we restrict ourselves to the following five models (number of parameters in parenthesis) of which large-v2 is a multilingual model: base.en (74 M), tiny.en (39 M), small.en (244 M), medium.en (769 M), and large-v2 (1550 M). We acknowledge that there is a wide range of open-source tools and models including Kaldi [9], Flashlight [10], and Paddlespeech [11]. We plan to analyze the efficiency of these tools in our subsequent works.
|
| 44 |
+
|
| 45 |
+
max width=
|
| 46 |
+
|
| 47 |
+
Tool Version Usage
|
| 48 |
+
|
| 49 |
+
1-3
|
| 50 |
+
whisper 20230314 Speech to text conversion.
|
| 51 |
+
|
| 52 |
+
1-3
|
| 53 |
+
jiwer 3.0.1 Compare the text in two files.
|
| 54 |
+
|
| 55 |
+
1-3
|
| 56 |
+
yt-dlp 2023.03.04 Download audio files and transcripts.
|
| 57 |
+
|
| 58 |
+
1-3
|
| 59 |
+
opusinfo 0.1.10 Extract metadata from audio files.
|
| 60 |
+
|
| 61 |
+
1-3
|
| 62 |
+
|
| 63 |
+
Table 1: Software Tools
|
| 64 |
+
|
| 65 |
+
Metrics for evaluating transcript quality. The Word Error Rate (WER) is a commonly used metric for comparing texts [12] and it is computed as ${WER} = \frac{S + D + I}{N = H + S + D}$ where $H$ is the number of hits (correct words), $S$ is the number of substitutions, $D$ is the number of deletions, and $I$ is the number of insertions, and $N$ denotes the number of words in the reference (baseline) against which the hypothesis (results of the transcribing tool) is being evaluated. In contrast, the Match Error Rate (MER) is the probability of an incorrect match [12], and is given by ${MER} = \frac{S + D + I}{H + S + D + I}$ . The Word Information Lost (WIL) is an approximation for the Relative Information Lost (RIL) which is computed using the hits, substitutions, insertions, and deletions [12]; the RIL measures the statistical dependence between the reference and the hypothesis and is calculated using the Shannon entropy. Our goal is not to compare the metrics, and instead we rely on the WER, MER, and WIL to evaluate the performance of the transcription. We use jiwer to compute the WER, MER, and WIL. It is known that jiwer can end up computing a higher WER without normalizing the text [7], and the WER depends on the normalization technique used. For this preliminary analysis we avoid using any custom normalizations, and we plan to explore the impact of normalization in a subsequent study.
|
| 66 |
+
|
| 67 |
+
Dataset Description. Of the 25 YouTube videos, 15 were from lectures on MIT OCW. The remaining 10 included 5 talks at Google, one talk at MIT OCW, and four Turing Award lectures. ${}^{1}$ . In Figure 2, we present the playback duration (size in seconds) of each of the videos and the average bitrate of the audio file. The quality of the audio file is important because it can affect the quality of the transcripts being generated, and we observe that the audio files downloaded have an average bit rate of at least ${92}\mathrm{{kbps}}$ . Note that the audio file was encoded in opus audio format which supports variable bitrate and is optimized for speech [13]. We also observe that the audio files were sampled at ${48}\mathrm{{kHz}}$ . Whisper internally converts the audio file to ${16}\mathrm{{kHz}}$ , and we believe that the audio files in our dataset have a sufficiently higher frequency from which audio segments can be sampled at ${16}\mathrm{{kHz}}$ .
|
| 68 |
+
|
| 69 |
+
< g r a p h i c s >
|
| 70 |
+
|
| 71 |
+
Figure 2: Average Bitrate of the Audio Files.
|
| 72 |
+
|
| 73 |
+
${}^{1}$ Availability: The details of these videos are available with our code and datasets at: https://version.helsinki.fi/ transcribe-educational-videos/preliminary-study-dai2023/
|
| 74 |
+
|
| 75 |
+
§ 3 EVALUATION
|
| 76 |
+
|
| 77 |
+
In Figure 3, we present the time required to transcribe a video for a given playback time (see Figure 3(a)), and also for a given word count in our baseline transcripts (see Figure 3(b)). We observe that the time to transcribe increases linearly with the playback duration and word count, and the larger models require more time. We present these results to give a ballpark on what to expect, and we are aware that these times are heavily biased to the audio content, and the computational capabilities in our cluster.
|
| 78 |
+
|
| 79 |
+
< g r a p h i c s >
|
| 80 |
+
|
| 81 |
+
Figure 3: Transcription Time. The transcription time, i.e., the time to generate transcripts, increases linearly with the playback duration and word count. The larger models require more time than their smaller counterparts.
|
| 82 |
+
|
| 83 |
+
In Figure 4, we plot the fraction of the playback time that a given model took to transcribe the video. We observe that even the large-v2 model was able to complete the transcription process in less than ${25}\%$ of the time required to playback the video. For the videos in our dataset, and while running whisper on our servers, we observe that the base, tiny, and small models took less than ${10}\%$ of the playback time to transcribe the video, and the larger models took less than 25% of the playback time. A typical human transcriber would require at least the playback time to listen to the whole audio. In Table 2, we present a snippet of the transcripts generated using Whisper. In this snippet, the speaker asks the audience member to repeat what they said because of audio issues. We see that the original transcript marks the conversation as inaudible while the whisper tries to guess what is said, and the results vary with the model size. Clearly, this speed-up when using smaller models is meaningless if the quality of the transcription is poor.
|
| 84 |
+
|
| 85 |
+
< g r a p h i c s >
|
| 86 |
+
|
| 87 |
+
Figure 4: Relative transcription time. If the playback time is ${50}\mathrm{\;s}$ and it takes ${10}\mathrm{\;s}$ to generate the transcript then the fraction of playback time is ${10}/{50} = {0.2}$ , i.e., generating a transcript required ${20}\%$ of the playback time. (Range $= \min ,\max$ )
|
| 88 |
+
|
| 89 |
+
< g r a p h i c s >
|
| 90 |
+
|
| 91 |
+
Table 2: Example transcript with high WER. The above transcripts are for a segment at time offset 1h:02m:58s of the the following video https://www.youtube.com/watch?v=3LVeEjsn8Ts#t=62m58s.
|
| 92 |
+
|
| 93 |
+
< g r a p h i c s >
|
| 94 |
+
|
| 95 |
+
Figure 5: Transcript quality. The error bars represent the min and max across the files in the dataset.
|
| 96 |
+
|
| 97 |
+
In Figure 5, we present the WER, MER, and WIL when using the various models. Across all the metrics, we observe that the WER, MER, and WIL decreases as the number of parameters in the models increases. An exception is for the large-v2 model. We believe that this is primarily due to the lack of using a normalizer [7], and the audio segments that were marked inaudible in the original transcripts. As shown in Table 2, whisper transcribes the conversation marked inaudible by the human transcriber, and the volume of text generated (sans punctuations) by the large-v2 model is larger than the other models thus resulting in a higher error rate.
|
| 98 |
+
|
| 99 |
+
Along with the example provided in Table 2, we also observe a high WER, a high WIL, and a high MER for other videos, as highlighted by the error bars in Figure 5. To better understand this behavior, we present the fraction of hits, substitutions, deletions, and insertions in Figure 6. Across all models, we observe that the hits are above ${80}\%$ for the majority of videos, and the fraction of hits increases with the number of parameters. However, for some videos, such as the one in Table 2, we observe a large number of substitutions, insertions, and deletions.
|
| 100 |
+
|
| 101 |
+
< g r a p h i c s >
|
| 102 |
+
|
| 103 |
+
Figure 6: Fraction of Hits, Substitutions, Deletions, and Insertions. Error bars represent the min and max across files in our dataset. The cutout zooms into the Deletions and Insertions.
|
| 104 |
+
|
| 105 |
+
One reason for the high error rates is that whisper does not provide inaudible as output and tries to extract the text even from the audio which a human transcriber might mark as inaudible. This is further exacerbated by not leveraging the context. For instance, in the example shown in Table 2 the conversation was about domain-specific architecture, and the question being asked was on the same topic, and yet some of the models wrongly predicted the outcome to be Thomas version architecture or Thomas’s certificate architecture. These predictions are bullshit ${}^{2}$ because they (and the underlying models) are indifferent to truth. Furthermore, although only two substitutions are needed to replace thomas certificate architecture to domain specific architecture, incorrect predictions like these diminish the usefulness of the generated transcripts. We believe that marking the audio segments as either inaudible or its equivalent that indicates a low confidence in the transcription result would be more beneficial in such scenarios. This is achievable by tweaking some thresholds in whisper's configurations, and we plan to explore their impact in subsequent works.
§ 4 CONCLUDING REMARKS AND AVENUES FOR FUTURE WORK
We performed a preliminary analysis of the transcription capabilities of Whisper; however, we cannot draw any strong conclusions: our dataset is heavily biased toward the videos picked by the author, and the results cover only the models of one tool, Whisper. Nevertheless, we gained some insights, such as the importance of marking audio segments as inaudible and how inaudible audio segments affect the quality of transcripts generated by ASR systems.
Some avenues for future work in this area include: a) metrics that account for semantic information, such as the importance of each word, and evaluation of transcript quality in end-user studies; b) comparing the transcription results from different models; c) evaluating transcription capabilities for languages other than English, and also for non-native speakers of these languages; d) quantifying the impact of multiple speakers from different ethnic backgrounds in the same video/audio; e) approaches to identify the context of the lecture/talk and leverage it for better transcriptions; f) quantifying the costs of generating transcripts on different accelerators, and identifying the effectiveness of accelerators for transcript generation on end-user devices; and g) quantifying the quality of the subtitles, including the word timestamps and sound descriptions generated by the ASR system.
Acknowledgement. The authors wish to thank the Finnish Computing Competence Infrastructure (FCCI) for supporting this project with computational and data storage resources.
${}^{2}$ We apologize for the use of profanity, and we rely on the following quote by Harry Frankfurt [14] for describing the term bullshit: "it is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction."
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/c4A2txzl82P/Initial_manuscript_md/Initial_manuscript.md
# Coincidence Detection Is All You Need
Celestine Preetham Lawrence ${}^{ + }$

${}^{ + }$ Bernoulli Institute and Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9700 AB, Groningen, Netherlands.

## Abstract

This paper demonstrates that the performance of coincidence detection, a classic neuromorphic signal-processing method found in Rosenblatt's perceptrons with distributed transmission times, can be competitive with a state-of-the-art deep learning method for pattern recognition. Hence, we cannot remain comfortably numb to the prevailing dogma that efficient matrix-vector operations are all we need; we should enquire with greater vigour whether more advanced continual learning methods (running on spiking neural network hardware with neuromodulatory mechanisms at multiple timescales) can beat the accuracy of task-specific deep learning methods. With regard to deployability, coincidence detection is an interpretable shallow learning method, and its applications provide a commercial use-case for neuromorphic hardware such as Intel Loihi.

## 1 Introduction

Frank Rosenblatt and his team (1957-1971) built and analyzed several kinds of perceptrons [1, 2, 3, 4] - networks of sensory, association and receptor neurons, which in contemporary deep learning terminology relate to the input, hidden and output layers. The propagating signals were binary (compatible with a spike-based view), the synaptic delays (transmission times) and weights (memory states) could be analog, and the network could be recurrent and was often randomly interconnected; learning often meant tuning the weights of the association-receptor subnetwork by some error-corrective reinforcement. In Rosenblatt's Tobermory perceptrons [5] the synaptic delays were not learnt but instead randomly distributed, and this was rich enough to realize concentration-invariant and uniform time-warp-invariant spatiotemporal classification by logarithmic encoding and coincidence detection. However, the processing speed of commercial Von Neumann computers advanced exponentially and outperformed neuromorphic hardware on yesterdecade's benchmarks [6]. The Tobermory perceptron was forgotten; nevertheless, the utility of logarithmic encoding and coincidence detection was formalized by John Hopfield [7] as an efficient solution to the analog match problem in pattern recognition.

Now, half a century after the accidental demise of Rosenblatt, neuromorphic signal processors are making a comeback. Examples include (1) Intel's Loihi, with spike-time-dependent plasticity mechanisms for learning olfactory pattern recognizers [8], and (2) physical reservoir computing networks [9], where the interconnectivity of the hidden layer is unchanged, closer in spirit to Rosenblatt's randomly interconnected sensory-association subnetwork.

Here, to strengthen the case for revisiting classic methods on novel and modern hardware, we evaluate the performance of coincidence detection in comparison to a deep learning method. Nothing more, nothing less, although this work was triggered by a rabid interest in employing artificial intelligence to sniff out infections and prevent future pandemics.

## 2 Methods
Here, we consider the work [10] of an interdisciplinary team, where a 26-layer convolutional neural network with residual connections (ResNet-26) was successfully trained to classify pathogenic bacteria by Raman spectroscopy. In their work, there are $N = {30}$ classes of bacterial isolates; they begin with a ResNet-26 pre-trained on $N \times {2000}$ spectra, then for each class $n = 1 : N$ there are $M = {100}$ training spectra, and similarly $N \times M = {3000}$ test spectra. Each spectrum $\mathbf{x}$ contains 1000 floating-point numbers ranging between 0 and 1. Although compute-intensive, their deep learning method proved to be a tool of great convenience for pattern recognition in a challenging dataset, where intra-isolate spectra were often more dissimilar than inter-isolate spectra.

Our method to tackle the above dataset is inspired by the theory of how coincidence detection [7] in animal brains is fundamental for odour classification in complex and turbulent mixtures. Each class $n$ has a vector representation ${\mathbf{w}}_{n}$ that is learnt, and an input vector $\mathbf{x}$ results in an output class $y\left( \mathbf{x}\right) = {\arg }_{n}\max \left( {\mathbf{x} \land {\mathbf{w}}_{n}}\right)$, where we introduce the operator $\land$ to represent the coincidence between two signals. The analytical nature of coincidence detection depends on the specificities of the ion channels and the membranes involved [11], and may even incorporate nonlinear leaky-integrate [12] multiple-timescale mechanisms. We do not yet have a complete theory of neuromorphic signal processing, so here we introduce an approximation for the translation- and scale-invariant property of coincidence detection as
$$
{\arg }_{n}\max \left( {\mathbf{x}\bigwedge {\mathbf{w}}_{n}}\right) \approx {\arg }_{n}\max \left( {{\mathbf{w}}_{n} \cdot \widehat{\mathbf{x}}}\right) , \tag{1}
$$
where $\widehat{\mathbf{x}}$ is the zero-mean unit-variance normalization of $\mathbf{x}$.

Table 1: Test accuracy (%)

<table><tr><td>ResNet-26</td><td>Coincidence detection</td></tr><tr><td>${82.2} \pm {0.3}$ (from [10])</td><td>82.7 (this work)</td></tr></table>

Thus, the approximation in Eq. (1) allows $y\left( \mathbf{x}\right)$ to be learnt by a logistic regression on the normalized dataset. We discard the pre-training data, pre-process the training and test spectra by a range-1 mean filter, and use the default method for logistic regression in Wolfram Mathematica (L2-regularization $= {0.0001}$, optimization method $=$ limited-memory BFGS). Code is provided in the supplemental material for reproducibility.
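The prediction rule of Eq. (1) is simple enough to state in a few lines. The sketch below is stdlib-only and uses illustrative toy weights (the actual weights in this work come from Mathematica's default logistic regression); it normalizes each input samplewise and scores it against each class vector:

```python
import math

def standardize(x):
    """Samplewise normalization: zero mean and unit variance across the
    features of a single spectrum (the x-hat of Eq. (1))."""
    mu = sum(x) / len(x)
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x)) or 1.0
    return [(v - mu) / sd for v in x]

def predict(weights, x):
    """y(x) = argmax_n (w_n . x_hat): coincidence detection approximated
    as a dot product with the normalized input."""
    x_hat = standardize(x)
    scores = [sum(w * v for w, v in zip(w_n, x_hat)) for w_n in weights]
    return max(range(len(weights)), key=scores.__getitem__)

# Two toy class vectors: class 0 prefers decreasing spectra, class 1 increasing.
w = [[1.0, 0.0, -1.0], [-1.0, 0.0, 1.0]]
print(predict(w, [0.9, 0.5, 0.1]))  # class 0
```

Because each sample is standardized across its own features, shifting or rescaling an input spectrum leaves the prediction unchanged, which is the translation- and scale-invariance that Eq. (1) approximates.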
## 3 Result and outlook

The coincidence detection (via normalized logistic regression) method introduced here achieves a test accuracy greater than ResNet-26 (see Table 1), and it took less than 3 seconds to train the classifier on a modern desktop (without any special-purpose GPUs). See https://openreview.net/attachment?id=xT5rDp5VqK0&name=supplementary_material for Wolfram Mathematica and Python code, plots of the training and test data, and confusion matrices. Note that the training data was fit all at once to a ${100}\%$ accuracy. With a more neuromorphic coincidence detection method and a learning method that adapts the synaptic delays $\mathbf{w}$ continually, to keep pace with changing environmental conditions, we may achieve even greater accuracies.

## Reviewer contributions

This paper was previously reviewed at NeurIPS 2022 (https://openreview.net/forum?id=xT5rDp5VqK0) but not recommended for immediate publication, for reasons including that the method was tested on only a single dataset. I believe it is good to present this work in a reasonable venue and thereby motivate stakeholders to test coincidence detection on more datasets. Below, I summarize relevant contributions as author responses to a selection of reviews. Note that the review process also revealed a typo in the supplementary material, where it was wrongly commented that "standardization is performed across samples..."; it should instead read "standardization is performed samplewise - each sample has a zero-mean and unit-variance across its features...".
Reviewer V6Wx: The simple "coincidence" detector gives very good results compared with a deep net. Although this could be demonstrating an advantage of coincidence detection, it may also be that the classification problem is actually not that difficult. Paper [10] seems to only apply a deep net to the problem. The authors only apply a linear function. What do other functions do? k-nearest neighbors, SVMs, ...?

1. Is there no more suitable implementation of coincidence detection, e.g., within a spiking net?

2. Is your model in Eq. (1) not simply a perceptron? (With normalized inputs and a max on the outputs)

Response: Ref. [10] already explored traditional methods (k-NN, SVM) and justified their choice of a deep learning method.

1. Yes, references [11] and [12] point to this, but such implementations are expensive on conventional hardware. Future work should compare how the approximate implementation of coincidence detection fares against more advanced methods on neuromorphic hardware.

2. Yes, is it not beautiful? Did you notice that the normalization is performed across a different axis than the standard suggestion of Python sklearn for logistic regression? (Conventional wisdom holds that it is a bad idea to normalize in this way, which is why perceptrons were not employed with this kind of pre-processing until now. This paper instead argues from the theory of coincidence detection that it is actually a good idea for preprocessing datasets that are compatible with the analog match problem, which turns out to be true upon evaluation on this empirical dataset.)

Reviewer ctyh: There is an interesting empirical observation here, yet the narrative is too shallow...

Response: The result in Table 1 speaks for itself (i.e., here is a novel method with better performance than the impactful deep learning method by a large team of researchers at Stanford University, cited over 300 times). Of course, this novel method will need to be applied to other datasets (which is why it needs to be presented at a conference to gain the attention of fellow researchers). Moreover, references [7], [11], [12] have been thoughtfully chosen as related work.

Reviewer QphW: Authors should consider generating more stats on their accuracy % and provide a more thorough comparison with the baseline (ResNet-26). Further, authors should share additional experiments breaking down the contribution of the standardization and smoothing steps. Lastly, explaining why their model fares better than the deep learning model...

Response: The reviewer asks for more stats, but is it not futile, given that this is anyhow based on performance on a single dataset? The focus of this paper is to demonstrate that the approximation for coincidence detection introduced here is able to solve an analog match problem (discussed insightfully by Hopfield [7], but not as well known as it should be). That the model fares slightly better is a bonus; deep learning methods can surely learn a coincidence detector (albeit in a computationally expensive way). Moreover, to ensure reproducibility, the method was tested in two programming languages: Mathematica (yielding an accuracy of 82.7% as reported in the main text) and Python (yielding an accuracy of 82.9% as reported in the supplementary material).
## References
[1] Frank Rosenblatt. The perceptron, a perceiving and recognizing automaton (Project Para). Cornell Aeronautical Laboratory, Inc., Report no. 85-460-1, 1957.

[2] Frank Rosenblatt. The perceptron: A theory of statistical separability in cognitive systems. Cornell Aeronautical Laboratory, Inc., Report no. VG-1196-G-1, 1958.

[3] Frank Rosenblatt. Principles of neurodynamics: Perceptrons and the theory of brain mechanisms. Cornell Aeronautical Laboratory, Inc., Report no. 1196-G-8, 1961.

[4] Frank Rosenblatt. Cognitive systems research program. Technical report, Cornell University, Ithaca, New York, 1971.

[5] Frank Rosenblatt. A description of the Tobermory perceptron. In Collected Technical Papers, volume 2. Cornell University, Ithaca, New York, 1963.

[6] George Nagy. Neural networks - then and now. IEEE Transactions on Neural Networks, 2(2):316-318, 1991.

[7] John J Hopfield. Pattern recognition computation using action potential timing for stimulus representation. Nature, 376(6535):33-36, 1995.

[8] Nabil Imam and Thomas A Cleland. Rapid online learning and robust recall in a neuromorphic olfactory circuit. Nature Machine Intelligence, 2(3):181-191, 2020.

[9] G. Tanaka, T. Yamane, J.B. Héroux, R. Nakane, N. Kanazawa, S. Takeda, H. Numata, D. Nakano, and A. Hirose. Recent advances in physical reservoir computing: A review. Neural Networks, 115:100-123, 2019.

[10] Chi-Sing Ho, Neal Jean, Catherine A Hogan, Lena Blackmon, Stefanie S Jeffrey, Mark Holodniy, Niaz Banaei, Amr AE Saleh, Stefano Ermon, and Jennifer Dionne. Rapid identification of pathogenic bacteria using Raman spectroscopy and deep learning. Nature Communications, 10(1):1-8, 2019.

[11] Nelson Spruston. Pyramidal neurons: dendritic structure and synaptic integration. Nature Reviews Neuroscience, 9(3):206-221, 2008.

[12] Wondimu Teka, Toma M Marinov, and Fidel Santamaria. Neuronal spike timing adaptation described with a fractional leaky integrate-and-fire model. PLoS Computational Biology, 10(3):e1003526, 2014.
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_md/Initial_manuscript.md
# A Bandits Approach to Intelligent Tutoring Systems using Concept Evolution
Sudha S${}^{ + }$, Arun Rajkumar${}^{ + }$

${}^{ + }$ Indian Institute of Technology Madras

## Abstract

With the huge number of learning resources available online today, Intelligent Tutoring Systems (ITS) are needed more than ever. An ITS is a system that personalizes the course contents for each learner. In this paper, we address the problem of suggesting effective and efficient learning sequences to learners based on their knowledge levels. We take a multi-armed bandits approach to action selection, suggesting at each step the action with the highest estimated learning outcome. We model the actions as Beta distributions and the learners' knowledge levels as concept vectors. We also automatically learn the prerequisite relationships that can exist among the concepts. We propose a novel algorithm that achieves this goal efficiently. Our experimental results show that our algorithm's performance is comparable to that of the optimal algorithm.

## 1 Introduction

Traditional teaching methods utilize a uniform approach for all learners, disregarding individual abilities and needs. Intelligent Tutoring Systems (ITS) adapt teaching strategies to each learner's unique parameters. This paper presents an ITS framework for devising tailored sequences of learning actions for each learner, optimizing concept learning. We model this problem in a multi-armed bandits setting, viewing learning actions as arms and the learning level gained as rewards. The model also considers prerequisite relationships between concepts.

Our approach allows a learner's knowledge level to range between 0 and 1, a shift from the conventional binary (0,1) states. This accounts for varying mastery levels of a concept. Our framework permits each learning action to contribute variably to multiple concepts. We also incorporate prerequisite relationships between concepts with varying intensity levels. The algorithm learns these prerequisite relationships autonomously, negating the need for expert input.
## 2 Related Work
|
| 18 |
+
|
| 19 |
+
[1] suggests a Zone of Proximal Development (ZPD)-based action sequence selection, incorporating multi-armed bandits to maximize rewards. Their method relies heavily on time-consuming ZPD graph creation by an expert, a dependency absent in our approach.
|
| 20 |
+
|
| 21 |
+
[2] applies a POMDP approach to ITS in a question-and-answer context, limiting learner concept understanding to binary {0, 1} values. Our method allows continuous knowledge levels in $\left\lbrack {0,1}\right\rbrack$, uses practical learning actions such as videos, and does not require prerequisite information. [3] also applies a POMDP approach to ITS, but solving a POMDP is generally challenging due to the polynomial number of states.
|
| 22 |
+
|
| 23 |
+
[4] embeds Personalised Learning Action (PLA) between fixed assessment sequences to boost immediate assessment performance using the CLUB & ACLUB algorithms. Unlike them, our goal is efficient concept learning, not immediate assessment performance.
|
| 24 |
+
|
| 25 |
+
[5] proposes a Thompson Sampling & Knowledge Gradient variation for PLAs to improve immediate assessment performance, but doesn't address prerequisite dependencies. Our focus is on concept learning. [6] merges automatic curriculum generation with ZPDES bandits approach, framing curriculum generation as a graph coloring problem. This approach requires intensive ZPD graph initialization.
|
| 26 |
+
|
| 27 |
+
## 3 Problem Setting & Modelling Assumptions
|
| 28 |
+
|
| 29 |
+
$N$ denotes the number of learners in an ITS aiming to learn $K$ concepts. Each learner $i$'s knowledge state is given by a vector ${C}_{i} \in {\left\lbrack 0,1\right\rbrack }^{K}$, with ${C}_{ij}$ signifying learner $i$'s mastery of concept $j$ (e.g., ${C}_{23} = {0.7}$ means learner 2 has a ${70}\%$ grasp of concept 3). The ITS's objective is to teach all $N$ learners all $K$ concepts to a threshold level of mastery $\theta$.
|
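To make the state representation concrete, here is a minimal Python sketch of a learner's concept vector and the $\theta$-mastery check; the function and variable names are ours, not the paper's, and $\theta = 0.9$ is only an illustrative default:

```python
def has_mastered_all(concept_vector, theta=0.9):
    """True once every entry of the learner's concept vector exceeds the
    mastery threshold theta (0.9 is an illustrative default; the paper
    leaves theta as a parameter)."""
    return all(c > theta for c in concept_vector)

# Learner 2 with K = 3 concepts; C_23 = 0.7 means a 70% grasp of concept 3.
C_2 = [0.95, 0.91, 0.70]
```

The ITS is done with a learner exactly when this check first returns `True`.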
| 30 |
+
|
| 31 |
+
ITS possesses a set of actions $A$ (e.g., videos, lectures) affecting the learner’s knowledge level. The system learns the impact of these actions over time. Concept relationships are considered in two cases: one assumes independence, and the other considers prerequisite relationships affecting the impact of an action on a concept.
|
| 32 |
+
|
| 33 |
+
Learner-specific parameters determine individual learning rates, accommodating variations between fast and slow learners. The ITS must deduce these rates. We assume that learner knowledge evolves in a Markovian manner and that knowledge-level estimates are noisy.
|
| 34 |
+
|
| 35 |
+
## Independent Concepts:
|
| 36 |
+
|
| 37 |
+
For the independent concepts, the effect of action $a$ on concept $i$ at round $t$ is given as follows:
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
{c}_{i}^{t + 1} = {c}_{i}^{t} + \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \cdot \left( {1 - {c}_{i}^{t}}\right) \tag{1}
|
| 41 |
+
$$
|
| 42 |
+
|
| 43 |
+
where $a$ is the action chosen at time step $t$ and $\operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right)$ denotes the CDF of the Beta distribution associated with action $a$, evaluated at ${c}_{i}^{t}$.
|
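The update in Equation (1) can be sketched as follows. `beta_cdf` below is a dependency-free numerical stand-in for the Beta CDF (Simpson quadrature, assuming $\alpha, \beta \geq 1$ so the density is finite at the endpoints); a library routine such as `scipy.stats.beta.cdf` would normally fill this role:

```python
import math

def beta_cdf(alpha, beta, x, n=1000):
    """CDF of Beta(alpha, beta) at x, via Simpson quadrature.

    Assumes alpha, beta >= 1 so the density stays finite at the endpoints.
    This keeps the sketch dependency-free (no SciPy needed)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    norm = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))

    def pdf(t):
        return norm * t ** (alpha - 1) * (1.0 - t) ** (beta - 1)

    h = x / n  # n must be even for Simpson's rule
    s = pdf(0.0) + pdf(x)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * pdf(k * h)
    return s * h / 3.0

def independent_update(c, alpha_a, beta_a):
    """Equation (1): c <- c + BetaCDF(alpha_a, beta_a; c) * (1 - c)."""
    return c + beta_cdf(alpha_a, beta_a, c) * (1.0 - c)
```

Because the gain is scaled by $(1 - c_i^t)$, the knowledge level can only approach 1 and never overshoot it.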
| 44 |
+
|
| 45 |
+
## Dependent Concepts:
|
| 46 |
+
|
| 47 |
+
The value update for the dependent concepts is as follows:
|
| 48 |
+
|
| 49 |
+
$$
|
| 50 |
+
{c}_{i}^{t + 1} = {c}_{i}^{t} + \mathop{\sum }\limits_{{j = 1}}^{D}{c}_{j}^{t}{\lambda }_{j \to i} \cdot \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \cdot \left( {1 - {c}_{i}^{t}}\right) \tag{2}
|
| 51 |
+
$$
|
| 52 |
+
|
| 53 |
+
where $D$ is the number of prerequisite concepts of ${c}_{i}$ and $\mathop{\sum }\limits_{{j = 1}}^{D}{\lambda }_{j \to i} = 1$.
|
| 54 |
+
|
| 55 |
+
Here again, $\operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right)$ is the value of the Beta CDF at value ${c}_{i}^{t}$ .
|
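Equation (2) can be sketched as a prerequisite-weighted version of the independent update; here the Beta CDF value at $c_i^t$ is assumed to be precomputed by the caller, and the $\lambda$ weights are assumed to sum to 1 as the paper requires:

```python
def dependent_update(c_i, prereq_values, lambdas, beta_cdf_at_c):
    """Equation (2): the gain on concept i is scaled by how well each
    prerequisite j is known, weighted by lambda_{j->i} (weights sum to 1).

    `beta_cdf_at_c` is the value of the Beta(alpha_a, beta_a) CDF at c_i,
    assumed precomputed by the caller."""
    assert abs(sum(lambdas) - 1.0) < 1e-9, "lambda weights must sum to 1"
    prereq_scale = sum(c_j * lam for c_j, lam in zip(prereq_values, lambdas))
    return c_i + prereq_scale * beta_cdf_at_c * (1.0 - c_i)
```

When all prerequisites are unknown (`prereq_values` all zero), the scale is 0 and the action produces no gain on the dependent concept, which is the intended prerequisite effect.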
| 56 |
+
|
| 57 |
+
Learner-Specific Parameter: To model each learner’s unique abilities, we use a learner-specific parameter ${\gamma }_{i} \in \left\lbrack {0,1}\right\rbrack$. The effect of an action on a learner then depends on the action, the specific learner, and the learner’s current knowledge state. This is made formal below:
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
{c}_{i}^{t + 1} = {c}_{i}^{t} + {\gamma }_{i} \cdot \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \cdot \left( {1 - {c}_{i}^{t}}\right) \tag{3}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
+
Parameters to Estimate: The ITS is completely specified by $2\left| A\right|$ action parameters that govern the Beta CDFs, $K \cdot N$ parameters that describe the learners’ knowledge states, and $N$ learner-specific parameters.
|
| 64 |
+
|
| 65 |
+
## 4 ITS-BPECE - Bandits Based Parameter Estimation for Concept Evolution
|
| 66 |
+
|
| 67 |
+
This section gives an overview of parameter estimation for the independent and dependent concepts. The parameters to be estimated differ between the two cases, so the estimation approaches differ as well. The subsequent subsections give an overview of the proposed algorithm, which we call Bandits based Parameter Estimation for Concept Evolution (BPECE); the section ends with pseudocode for BPECE in Algorithm 1.
|
| 68 |
+
|
| 69 |
+
## Algorithm Overview:
|
| 70 |
+
|
| 71 |
+
We start by choosing an action uniformly at random until each action has been chosen at least ${A}_{min}$ (a small value) times. The data thus generated looks as follows:
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
\left\{ {\ldots ,\left( {{C}_{i1}^{t},{C}_{i1}^{t + 1}}\right) ,\left( {{C}_{i2}^{{t}^{\prime }},{C}_{i2}^{{t}^{\prime } + 1}}\right) ,\ldots }\right\} \tag{4}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
For an independent concept, we use Zeroth-Order (ZO) optimization to estimate the action parameters. The objective function for the ZO in the independent case is:
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
f\left( {{\alpha }_{a},{\beta }_{a}}\right) = \left( \frac{{c}_{i}^{t + 1} - {c}_{i}^{t}}{1 - {c}_{i}^{t}}\right) - \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \tag{5}
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
We run the ZO estimation after every ${D}_{min}$ data samples collected, and we increase the value of ${D}_{min}$ over time.
|
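As an illustration of this estimation step, here is a minimal zeroth-order (derivative-free) fit of $(\alpha_a, \beta_a)$ that minimizes the mean squared residual of Equation (5) by plain random search over observed transitions. The search box $[1, 8]^2$ and the use of random search are our assumptions; the paper only specifies a zeroth-order optimizer:

```python
import math
import random

def beta_cdf(alpha, beta, x, n=200):
    """Beta(alpha, beta) CDF at x via Simpson quadrature (alpha, beta >= 1)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    norm = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    pdf = lambda t: norm * t ** (alpha - 1) * (1.0 - t) ** (beta - 1)
    h = x / n
    s = pdf(0.0) + pdf(x)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * pdf(k * h)
    return s * h / 3.0

def zo_fit(transitions, iters=300, seed=0, lo=1.0, hi=8.0):
    """Derivative-free fit of (alpha_a, beta_a) by random search.

    Minimizes the mean squared residual of Equation (5) over observed
    (c_t, c_{t+1}) pairs.  The search box [lo, hi]^2 and plain random
    search are illustrative choices, not the paper's exact optimizer."""
    rng = random.Random(seed)

    def loss(a, b):
        return sum(((c1 - c0) / (1.0 - c0) - beta_cdf(a, b, c0)) ** 2
                   for c0, c1 in transitions) / len(transitions)

    best, best_loss = (1.0, 1.0), loss(1.0, 1.0)
    for _ in range(iters):
        cand = (rng.uniform(lo, hi), rng.uniform(lo, hi))
        cand_loss = loss(*cand)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss
```

On transitions generated from a known Beta gain, the residual loss drops close to zero, which is what the ZO step relies on.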
| 84 |
+
|
| 85 |
+
In the dependent-concepts case, we must estimate not only the action parameters but also the ${\lambda }_{j \to i}$ parameters for all dependency pairs $(i, j)$. We start by fixing ${\lambda }_{j \to i} = \frac{1}{K - 1}$ for all $(i, j)$ and estimate the Beta parameters using ZO optimization. To estimate the ${\lambda }_{j \to i}$ parameters, we fix the Beta parameters thus obtained and train a Neural Network (NN) for each dependent concept, with the concept vector as input and the objective value as output.
|
| 86 |
+
|
| 87 |
+
We alternately fix ${\lambda }_{j \to i}$ and estimate the Beta parameters, then fix the Beta parameters and estimate ${\lambda }_{j \to i}$, until the parameter values converge. Algorithm 1 presents the pseudocode of the algorithm.
|
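The alternation above can be sketched as a coordinate-descent skeleton; the two estimator callables are placeholders for the ZO step and the NN-based $\lambda$ step, and the convergence test on the $\lambda$ values is our simplification:

```python
def alternating_estimation(est_beta, est_lambda, init_lambdas,
                           tol=1e-4, max_rounds=50):
    """Coordinate-descent skeleton of the alternation described above.

    est_beta(lambdas) -> Beta parameters with the lambdas held fixed
                         (the ZO step in the paper),
    est_lambda(betas) -> lambdas with the Beta parameters held fixed
                         (a neural network in the paper; any estimator
                         fits this skeleton)."""
    lambdas = list(init_lambdas)
    betas = est_beta(lambdas)
    for _ in range(max_rounds):
        new_lambdas = est_lambda(betas)
        betas = est_beta(new_lambdas)
        delta = max(abs(a - b) for a, b in zip(new_lambdas, lambdas))
        lambdas = new_lambdas
        if delta < tol:  # lambdas stopped moving: declare convergence
            break
    return betas, lambdas
```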
| 88 |
+
|
| 89 |
+
We incorporate the MAB idea of choosing the arms that have the highest reward by picking those actions that push the learner concept vectors the farthest. We use a version of $\epsilon$ -greedy where we pick the best action with probability $\left( {1 - \epsilon }\right)$ and an action uniformly at random with probability $\epsilon$ . While we use an $\epsilon$ -Greedy strategy, more sophisticated bandit strategies can also be used in the framework.
|
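The $\epsilon$-greedy action choice can be sketched as follows; `predicted_next` maps each action to the concept vector predicted by the update equations, and all names are illustrative:

```python
import random

def epsilon_greedy_action(actions, predicted_next, current, eps=0.1, rng=random):
    """With probability 1 - eps, pick the action whose predicted next concept
    vector moves the learner farthest (Euclidean distance); otherwise pick an
    action uniformly at random.  `predicted_next` maps action -> predicted
    concept vector (computed from the update equations)."""
    if rng.random() < eps:
        return rng.choice(actions)

    def distance(a):
        return sum((p - c) ** 2 for p, c in zip(predicted_next[a], current)) ** 0.5

    return max(actions, key=distance)
```

Swapping in a different bandit strategy only changes this selection function; the parameter-estimation machinery is untouched, which is why the framework is agnostic to the exact bandit rule.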
| 90 |
+
|
| 91 |
+
Algorithm 1: BPECE
|
| 92 |
+
|
| 93 |
+
---
|
| 94 |
+
|
| 95 |
+
Input: A set of learner concept vector estimates, ${C}_{j}, j = 1,2,\ldots N$
|
| 96 |
+
|
| 97 |
+
Parameters ${A}_{min},{D}_{min},\epsilon$
|
| 98 |
+
|
| 99 |
+
Output: Next action ${a}_{j}$ for each learner $j$
|
| 100 |
+
|
| 101 |
+
for $j \leftarrow 1$ to $N$ do
|
| 102 |
+
|
| 103 |
+
if $\exists a \in A$ where $\operatorname{count}\left( a\right) < {A}_{\text{min }}$ then
|
| 104 |
+
|
| 105 |
+
${a}_{j} \leftarrow a$
|
| 106 |
+
|
| 107 |
+
end
|
| 108 |
+
|
| 109 |
+
else
|
| 110 |
+
|
| 111 |
+
for ${c}_{ji} \in {C}_{j}$ do
|
| 112 |
+
|
| 113 |
+
if ${c}_{ji}$ is Independent then
|
| 114 |
+
|
| 115 |
+
Estimate the $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$ using Zeroth-Order Optimization on Equation 5
|
| 116 |
+
|
| 117 |
+
end
|
| 118 |
+
|
| 119 |
+
if ${c}_{ji}$ is Dependent then
|
| 120 |
+
|
| 121 |
+
Initialize ${\lambda }_{k \to {ji}}$ values uniformly $\forall \left( {k,{ji}}\right)$
|
| 122 |
+
|
| 123 |
+
while ${\lambda }_{k \to {ji}}$ AND $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$ are not converged do
|
| 124 |
+
|
| 125 |
+
Fix ${\lambda }_{k \to {ji}}$
|
| 126 |
+
|
| 127 |
+
Estimate $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$ using Zeroth-Order Optimization on Equation 5
|
| 128 |
+
|
| 129 |
+
Fix $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$
|
| 130 |
+
|
| 131 |
+
Estimate ${\lambda }_{k \to {ji}}$ using the Neural Nets
|
| 132 |
+
|
| 133 |
+
end
|
| 134 |
+
|
| 135 |
+
end
|
| 136 |
+
|
| 137 |
+
end
|
| 138 |
+
|
| 139 |
+
Update ${C}_{{j}_{a}}^{\prime }$ using Equations 2 & 3 $\forall a \in A$
|
| 140 |
+
|
| 141 |
+
With probability $1 - \epsilon$
|
| 142 |
+
|
| 143 |
+
${a}_{j} \leftarrow \arg \mathop{\max }\limits_{{a \in A}}{\begin{Vmatrix}{C}_{{j}_{a}}^{\prime } - {C}_{j}\end{Vmatrix}}_{2}$
|
| 144 |
+
|
| 145 |
+
With probability $\epsilon$
|
| 146 |
+
|
| 147 |
+
${a}_{j} \leftarrow$ choose an action $a \in A$ uniformly at random
|
| 148 |
+
|
| 149 |
+
end
|
| 150 |
+
|
| 151 |
+
end
|
| 152 |
+
|
| 153 |
+
---
|
| 154 |
+
|
| 155 |
+
## 5 Experiments
|
| 156 |
+
|
| 157 |
+
Setup: Our performance metric is the number of steps/rounds it takes for all concept values to exceed 0.9. We compare our algorithm against an optimal algorithm that knows all the true parameter values of the actions and the dependencies and uses them to greedily pick the best action for each learner.
|
| 158 |
+
|
| 159 |
+
## Results for Independent Concepts
|
| 160 |
+
|
| 161 |
+
Figure 1 depicts the results for the Independent case where we vary different parameters.
|
| 162 |
+
|
| 163 |
+

|
| 164 |
+
|
| 165 |
+
Figure 1: Number of Steps for the Independent Concepts with varying parameters
|
| 166 |
+
|
| 167 |
+

|
| 168 |
+
|
| 169 |
+
Figure 2: Total & Average Number of Steps for Independent Concepts for varying number of learners with the learner-specific parameter
|
| 170 |
+
|
| 171 |
+
## Results for Independent Concepts with Student-Specific Parameter
|
| 172 |
+
|
| 173 |
+
Figure 2 shows the results for the case where we include a learner-specific parameter $\gamma$ that accounts for each learner's learning rate. We vary the number of learners from 2 through 50 while fixing the number of actions and concepts.
|
| 174 |
+
|
| 175 |
+
## Results for Dependent Concepts
|
| 176 |
+
|
| 177 |
+
Figure 3 shows the number of steps taken for dependent concepts, while Figure ?? shows the average number of steps taken per learner. We vary the number of dependent concepts from 1 to 4 to show how the algorithm performs in each case.
|
| 178 |
+
|
| 179 |
+

|
| 180 |
+
|
| 181 |
+
Figure 3: Number of Steps for Dependent Concepts with Varying No of Dependent Concepts
|
| 182 |
+
|
| 183 |
+
## 6 Conclusion & Future Work
|
| 184 |
+
|
| 185 |
+
We proposed a novel bandits-based parameter estimation approach to suggest learning actions to learners based on each learner's knowledge level. We considered the cases where the concepts are independent and where they are dependent. In the dependent case, we took into account the prerequisite relationships between concepts. We modeled each learning action's effect on a concept via the CDF of a Beta distribution. For the prerequisite relationships, we trained NNs to estimate the degree of dependence. Finally, we used an $\epsilon$-greedy approach to choose the best action for the learners. We back our proposed method with extensive experimental results.
|
| 186 |
+
|
| 187 |
+
As future work, the learner-specific parameters can be extended to account for each learner's different learning rate in the dependent-concepts case as well.
|
| 188 |
+
|
| 189 |
+
## References
|
| 190 |
+
|
| 191 |
+
[1] Benjamin Clement, Didier Roy, Pierre-Yves Oudeyer, and Manuel Lopes. Multi-armed bandits for intelligent tutoring systems. arXiv preprint arXiv:1310.3174, 2013.
|
| 192 |
+
|
| 193 |
+
[2] Fangju Wang. Pomdp framework for building an intelligent tutoring system. In CSEDU (1), pages 233-240, 2014.
|
| 194 |
+
|
| 195 |
+
[3] Jeremiah T Folsom-Kovarik, Gita Sukthankar, and Sae Schatz. Tractable pomdp representations for intelligent tutoring systems. ACM Transactions on Intelligent Systems and Technology (TIST), 4(2):1-22, 2013.
|
| 196 |
+
|
| 197 |
+
[4] Andrew S Lan and Richard G Baraniuk. A contextual bandits framework for personalized learning action selection. In EDM, pages 424-429, 2016.
|
| 198 |
+
|
| 199 |
+
[5] Indu Manickam, Andrew S Lan, and Richard G Baraniuk. Contextual multi-armed bandit algorithms for personalized learning action selection. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6344-6348. IEEE, 2017.
|
| 200 |
+
|
| 201 |
+
[6] Tong Mu, Karan Goel, and Emma Brunskill. Program2tutor: Combining automatic curriculum generation with multi-armed bandits for intelligent tutoring systems. In Conference on Neural Information Processing Systems, 2017.
|
RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,183 @@