diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..712313035b592b8c9ca5ec24b28c961635958688
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,591 @@

# Gamli - Icelandic Oral History Corpus: Design, Collection and Evaluation

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract

This paper presents Gamli, an ASR corpus for Icelandic oral histories, the first of its kind for this language, derived from the Ísmús ethnographic collection. Corpora for oral histories differ in various ways from corpora for general ASR: they contain spontaneous speech, multiple speakers per channel, noisy environments, the effects of historic recording equipment, and typically a large proportion of elderly speakers. Gamli contains 188 hours of aligned speech and transcripts, split into a training set and a test set. We describe our approach for creating the transcripts, through both optical character recognition (OCR) of previous transcripts and post-editing of ASR output. We also describe our approach for aligning, segmenting and filtering the corpus, and finally for training a Kaldi ASR system, which achieves a 22.1% word error rate (WER) on the Gamli test set, a substantial improvement over the 53.4% word error rate of a baseline general ASR system for Icelandic.

## 1 Introduction

Icelandic open-licensed speech corpora have grown in volume and number in recent years; there are now Talrómur (Sigurgeirsson et al., 2021), Málrómur (Steingrímsson et al., 2017), Samrómur (Mollberg et al., 2020) and the Althingi Parliamentary Speeches corpus (Helgadóttir et al., 2017b; Nikulásdóttir et al., 2018), to name a few. However, both historical speech and older speakers are underrepresented in these corpora. For instance, in Samrómur, the largest open-licensed ASR corpus for Icelandic (2,233 hours in the latest release (Hedström et al., 2022)), only 4.8% of speakers are over 60 years old.

Gamli, the oral history speech corpus presented in this paper, differs from these corpora in several ways. Firstly, it predominantly contains spontaneous speech in the form of interviews; secondly, it has a very high ratio of older speakers (94.8% of speakers are over 60 years old); thirdly, background noise is common, as are noise artefacts from historical recording equipment; and lastly, historic dialects (word choice and accent) are much more prevalent than in existing corpora.

The corpus contains 188 hours of aligned speech and transcripts split into a training set and a test set.
This data, based on valuable historical 20th-century recordings stored at the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies, is therefore an important addition to the existing Icelandic speech corpora.${}^{1}$

The custom ASR system presented in this paper, along with the corpus, will in due course be used to automatically transcribe all of the ethnographic audio recordings stored at the institute. The transcripts will then be made available on the online portal Ísmús${}^{2}$ and paired with the respective recordings.

---

${}^{1}$ The corpus is available under an open license at https://anonymo.us/gamli

${}^{2}$ www.ismus.is

---

## 2 Related Work

For many years, ASR systems have been trained on unaligned transcriptions (Panayotov et al., 2015) and even on approximate transcriptions of spontaneous speech (Jang and Hauptmann, 1999). In the case of Icelandic ASR for spontaneous speech, there has been an ongoing project (Helgadóttir et al., 2017b,a) to align and filter Icelandic parliamentary transcripts for ASR in order to reduce the manual work involved in transcribing parliamentary proceedings. Creating those corpora involves text normalization, time-alignment, and filtering of utterances.

While ASR for oral histories is new for Icelandic, it is already being used for other languages. The first large project was the MALACH project (Psutka et al., 2002) in 2002, where ASR transcriptions were used for indexing oral history archives and making them more searchable. However, some authors still consider oral history speech recognition an open problem (Picheny et al., 2019; Gref et al., 2020), and a recent study (Gref et al., 2022) found a human word error rate of 8.7% on a German oral history corpus (taking case-sensitivity and the annotation of hesitations into account). In contrast, Lippmann (1997) found a human word error rate of less than 4% on the Switchboard corpus of spontaneous telephony speech and less than 0.4% on the Wall Street Journal corpus of clear read speech. This suggests that the minimum possible word error rate for ASR may be much higher for oral histories than for cleaner speech corpora.

One other factor that makes oral history ASR an interesting challenge is the particularly high ratio of older speakers. Vipperla et al. (2008) note that for general ASR models, WER correlates strongly with age, even throughout a single speaker's lifetime. This could be caused by multiple changes in aging voices, such as a slower speaking rate, changes in F0 (a decrease for males and an increase for females), and an increase in jitter and shimmer (Vipperla et al., 2008), some of which could be mitigated by increasing the number of older speakers in the training set. However, other changes might not be so easily addressed, such as a reduction in tongue and jaw strength and an increase in breathiness (Vipperla et al., 2008), which can reduce articulatory precision.

## 3 Origin of the corpus

The ethnography collection of the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies contains more than 2,300 hours of audio recordings of oral heritage and traditions, with a little less than 2,500 interviewees.
The oldest material consists of recordings made on wax cylinders in the early 20th century, and the collection is continually expanding, with new material added every year.

The bulk of the collection, however, consists of recordings from the 1960s and 1970s, mainly the work of three collectors. Their focus was to gather ethnographic material from the whole country, first and foremost from the older generations: the majority of the informants were born before or around the turn of the 20th century.

This resulted in an extensive collection of legends and fairy tales, accounts of beliefs and customs, poems, hymns, nursery rhymes, Icelandic ballads (rímur), occasional verses and more, with the material being variously spoken, sung or chanted. Apart from recited verse and that which is sung or chanted, the speech is spontaneous. Accompanying the recordings is detailed metadata on the speaker and the time and location of recording, as well as various other parameters such as genre (for different kinds of verse or prose material, e.g. poems or nursery rhymes, fairy tales or legends, etc.), mode of performance (sung, chanted, spoken), keywords, content (a short summary or description), and tale-types and motifs (in folktales and legends).

### 3.1 Speaker distribution in the collection

In their work the collectors mainly relied on a snowball method of sorts, asking speakers to point them to other possible informants, as well as contacting teachers or clergy to enquire about interesting subjects in their region. Speaker profession is often listed in the metadata, but there is no information about education; most of the speakers were common people, i.e. workers, farmers, fishermen, housewives, etc., with little formal education.

Gender was probably not a decisive factor at the outset, and the overall ratio is 57.6% male speakers to 42.4% female, based on the number of speakers. However, if the audio length for each gender is considered, the difference increases considerably: 1,504 hours (65%) for men vs. 821 hours (35%) for women.

As mentioned, the data in the collection also stands out in that the age of the speakers is higher than in other existing Icelandic corpora. The oldest speaker in the collection was 105 years old at the time of recording in 1954, and the oldest speaker with regard to date of birth was born in 1827 and recorded in 1904 (not included in the Gamli corpus). In fact, 72.4% of the speakers are older than 63, and 31.4% are 71-80 years old. In Gamli this ratio is substantially higher, as detailed in Section 4.

### 3.2 Regional features in pronunciation

The speakers in the collection are from all over the country and therefore reflect the various regional differences in pronunciation much better than recently recorded speech corpora such as Samrómur, since these regional features have either already more or less disappeared or are gradually disappearing. Among these features are, for example, the "hard" pronunciation of /p, t, k/ (still a distinct feature) and the voiced pronunciation of /l, m, n/ before /p, t, k/ in North Iceland, the *rn*-, *rl*-pronunciation in South-East Iceland, monophthongs before /ng, nk/ in the North-West, etc.
While these features are not tagged in any way in the Gamli corpus, the ASR system trained on the corpus seems to perform well on them, with the possible exception of labial or velar stops before [ð], such as [hapðɪ] instead of [havðɪ] for *hafði*, or [lakðɪ] instead of [laɣðɪ] for *lagði*. We have, however, not inspected this systematically, so further study is needed before the precision can be stated with any certainty.

### 3.3 Recording procedure

Most of the recordings were made at the speakers' homes, in many cases retirement homes, and carried out by the interviewer. It was not uncommon that other people, e.g. children or spouses, were present during the recording sessions, but in most cases they were not meant to play a part in the recording. Because of this, and for various other reasons, some background noise and disturbances occur in the recordings, e.g. children playing, traffic sounds or phones ringing, but these are generally not prominent.

Most of the recordings were made using high-quality reel-to-reel tape recorders, although some were made by amateurs who were not as well equipped, while a part of the recordings come from the recording studios of The Icelandic National Broadcasting Service (Þorsteinsdóttir, 2013).

The digitization of these recordings began in the late 1990s and continued into the early 2000s, with the recordings converted into WAV format as well as compressed MP3s for online use.

## 4 Corpus content

Gamli contains 188 hours of transcribed audio, broken down into:

1. ~145 hours from optical character recognition (OCR) of previous transcriptions in various formats

2. ~43 hours of new transcriptions (post-edited from ASR output)

The 145 hours include ~8 hours defined as a test set, which was manually reviewed, corrected and annotated with speaker IDs and time alignments in the annotation tool *ELAN*. The test set contains recordings of 10 speakers, 5 women (239 minutes) and 5 men (219 minutes), plus the interviewers (4 men), and serves for evaluating the system's performance.

A validation set has not been defined for the corpus, as the acoustic model training in Kaldi (Povey et al., 2011) used a random sample of the training corpus for validation.
Data splitHoursMale speakersFemale speakersTotal speakers
Training18011585200
Test85510
+ +Table 1: Data splits in Gamli + +291 + +293 + +### 4.1 Speaker distribution in the corpus + +296 + +The corpus contains 210 unique speakers, 90 + +women and 120 men (plus the interviewers: 13 298 men and 1 woman). At the outset we aimed to have the gender ratio as equal as possible in the + +acoustic training data, but with three men surpass- 301 ing 20 hours of speech each (with one topping at + +29 hours) and accounting for more than one third 303 of the entire data, that picture became quite distorted. As a result the gender bias in the corpus is + +even greater than in the collection itself, which is 306 unfortunate, but simply reflects the data that was + +at hand, i.e. ${73.5}\%$ vs. ${26.5}\%$ , cf. Section 4.2. 308 309 The age ranges from 38 to 99 , but most of the 310 speakers are ${60} + \left( {{94.8}\% }\right)$ , as shown in Figure 1, and the average age of the speakers is 77 years. + +This ratio is unprecedented in all existing corpora 313 for Icelandic speech (cf. 4.8% in Samrómur as referred to in Section 1) and makes Gamli an important addition to that collection. + +### 4.2 Corpus compilation + +318 + +As mentioned, the largest part of the corpus, about + +145 hours, stems from OCR of transcriptions at 320 + +the Department of Ethnology and Folklore at The 321 + +Årni Magnússon Institute for Icelandic Studies. 322 + +These transcripts that were generated over several 323 + +324 + +![019640ed-2683-7826-930f-244a41c8b393_3_198_168_599_373_0.jpg](images/019640ed-2683-7826-930f-244a41c8b393_3_198_168_599_373_0.jpg) + +Figure 1: Age distribution of unique speakers in the training set + +325 + +329 + +330 + +335 + +339 + +![019640ed-2683-7826-930f-244a41c8b393_3_197_697_600_366_0.jpg](images/019640ed-2683-7826-930f-244a41c8b393_3_197_697_600_366_0.jpg) + +Figure 2: Age distribution of unique speakers in the test set + +340 decades are not all in the same format (e.g. typewritten, dot printed, printed Word documents) and therefore needed first to be processed, i.e. scanned and OCRed (the results of which varied depending on the format). These transcripts were then catalogued and paired with the respective recordings. + +Once this ready data had been processed the first ASR output was produced and manually cor- + +362 rected. During that process it became evident that some of the recordings were ill suited at this stage as they often contained poetry, nursery rhymes and in some cases singing, where the ASR system could not be expected to do well as the focus was + +367 on spontaneous speech, where it performed much better (cf. Section 6). + +As a result, we made use of the detailed meta-data search parameters in the Ísmús portal in order to filter the best in-domain data for further training. We mainly relied on the so-called form parameter (genre) to try to exclude everything but spontaneous speech. This gave much better results and resulted in the 43 hours of post-edited + +377 data mentioned in Section 4. + +### 4.3 Normalizing, aligning, segmenting and filtering the transcripts for ASR training + +378 + +379 + +380 + +A large part of the transcripts did not have time 381 + +alignments and some had OCR spelling errors. 382 + +Therefore, we had to process the utterances before 383 + +using them to train the acoustic model. To do this, 384 + +we first normalized all sentences using the Regina 385 + +normalizer developed in (Sigurðardóttir, 2021) be- 386 fore aligning the transcripts to the audio and segmenting them. 
## 5 Models (and out-of-domain data)

We trained a hybrid ASR system in Kaldi; that is, the language model and acoustic model were trained separately, as opposed to an end-to-end system. For the acoustic and language models in the custom ASR system, we expanded the training sets with various out-of-domain data, described in the following sections.

### 5.1 Acoustic Model

An acoustic model learns to map audio to a sequence of phonemes. Our acoustic model is a TDNN (time-delay neural network) chain model trained in Kaldi. It was trained on the in-domain data described above, but also on various out-of-domain data, which included the following datasets:

1. Althingi's Parliamentary Speeches,${}^{5}$ a corpus of 514.5 hours of recorded speech from the Icelandic parliament (Helgadóttir et al., 2017a).

2. 114.6 hours of speech from the first Samrómur release,${}^{6}$ leaving out children.

3. 173.1 hours of unverified Samrómur data,${}^{7}$ containing only speech from men aged 50+ and women aged 60+.

4. 228.2 hours of the RÚV TV unknown speakers dataset.${}^{8}$

Data augmentation was also used to triple the entire training set: we added artificial noise and reverberation. For noisy data sets, e.g. call-centre data sets, this has been reported to give better results than speed perturbation (Ko et al., 2017), and as described earlier, background noise and disturbances are not uncommon in the data.

### 5.2 Language Model

A language model is necessary for outputting coherent text; it learns a probability distribution over word sequences from a training corpus. Our language model is an n-gram model: a 3-gram for decoding and a 4-gram for rescoring (a toy illustration of n-gram estimation follows the source list below). It was trained on in-domain data from the Gamli training set described in Section 4.2, both the pre-existing transcripts and those resulting from the proofread ASR output. The out-of-domain data stems from the following sources:

1. The Icelandic Gigaword Corpus (IGC) (Steingrímsson et al., 2018); we use word forms from the 2022 version of the IGC.${}^{9}$

2. Ethnographic data from the National Museum of Iceland in Sarpur.${}^{10}$

3. Audio file descriptions from Ísmús,${}^{11}$ for their content.

4. Place name data from the Icelandic Place Name Collection.${}^{12}$
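As a concrete illustration of what an n-gram model estimates, the toy snippet below counts n-gram continuations and computes additively smoothed conditional probabilities. It is purely didactic; the actual 3-/4-gram models were built with standard tooling, not this code, and the example sentences are invented.

```python
from collections import Counter, defaultdict


def count_ngrams(sentences, n=3):
    """Collect counts of words following each (n-1)-word history."""
    follow = defaultdict(Counter)  # history -> counts of next word
    totals = Counter()             # history -> total continuations seen
    for words in sentences:
        padded = ["<s>"] * (n - 1) + list(words) + ["</s>"]
        for i in range(n - 1, len(padded)):
            history = tuple(padded[i - n + 1:i])
            follow[history][padded[i]] += 1
            totals[history] += 1
    return follow, totals


def ngram_prob(follow, totals, history, word, vocab_size, alpha=0.1):
    """P(word | history) with additive (add-alpha) smoothing."""
    h = tuple(history)
    return (follow[h][word] + alpha) / (totals[h] + alpha * vocab_size)


# Example: a trigram model over a tiny invented "corpus".
follow, totals = count_ngrams([["hann", "var", "bóndi"],
                               ["hann", "var", "sjómaður"]])
print(ngram_prob(follow, totals, ("hann", "var"), "bóndi", vocab_size=6))
```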
### 5.3 Vocabulary and Pronunciation Dictionary

The pronunciation dictionary maps words to sequences of phonemes. For the vocabulary we used:

1. All the word forms from The Database of Icelandic Morphology (Bjarnadóttir et al., 2019).

2. OOV words from audio file descriptions in Ísmús.

3. Vocabulary from the training set (only the data that was manually transcribed, not the OCR data), manually checked and added where appropriate.

4. OOV words from Sarpur, manually checked and added where appropriate.

To get the phonemic transcription of each word, a G2P model based on the Icelandic Pronunciation Dictionary for Language Technology${}^{13}$ was used.

---

${}^{5}$ Available at: http://hdl.handle.net/20.500.12537/277

${}^{6}$ Available at: http://hdl.handle.net/20.500.12537/189

${}^{7}$ Available at: http://hdl.handle.net/20.500.12537/265

${}^{8}$ Available at: http://hdl.handle.net/20.500.12537/191

${}^{9}$ http://hdl.handle.net/20.500.12537/254

${}^{10}$ https://sarpur.is/

${}^{11}$ https://ismus.is/

${}^{12}$ nafnid.is

${}^{13}$ Available at: http://hdl.handle.net/20.500.12537/99

---

## 6 Evaluation

To assess the final ASR system's performance on the test set, we use the Samrómur TDNN model as a baseline, a model trained on a well-known dataset of read Icelandic speech. While the baseline Samrómur system achieved a 53.4% WER on the Gamli test set, the final ASR system performed much better, achieving a 22.1% WER on the same set, as shown in Table 2. This compares the two systems as wholes, each with its own acoustic model, language model and vocabulary.

To investigate the differences between the two systems, we also compare their performance when taking demographic information into account, in Figure 3. As stated earlier, the test set contains 10 speakers and a total of 8 hours of audio.

There appears to be a possible slight correlation between age and WER for the baseline system but not for the final system, though it should be noted that the test set has too few data points to draw any significant conclusions. There is one outlier in the test set for both systems, an 85-year-old man recorded in 1966; upon manual inspection of the audio, it seems the speaker has particularly slurred speech and there is some noise from the recording equipment.
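For reference, WER is the word-level Levenshtein (edit) distance between hypothesis and reference, divided by the reference length. A self-contained Python implementation is sketched below; it is illustrative only, as the reported numbers come from the standard Kaldi scoring.

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(h) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            substitution = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / max(len(r), 1)


print(wer("hann var bóndi", "hann er bóndi"))  # one substitution -> 0.33...
```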
WEROOV-rate total wordsOOV-rate unique words
Baseline (Samrómur)53.4%1.1%6.8%
Final22.1%0.5%3.1%
+ +Table 2: ASR performance on the Gamli oral history test set + +## 7 Conclusion and Future Work + +In this paper we have presented Gamli, a corpus + +583 suitable for training speech recognition systems, we have aligned and segmented Icelandic oral histories from manual transcriptions (both OCR from typewritten transcripts and post-edited from ASR output), and filtered out unintelligible segments. + +588 We have described the compilation of the corpus, which has been published under an open license, the origins of the data and evaluation of an ASR system trained on the corpus. We have shown that using the corpus along with other rele- + +593 vant datasets can substantially lower WER for his- + +torical speech data, from 53.4% from a baseline 594 + +model to 22.1%. We also draw the conclusion that 595 + +it could be combined with other ASR training sets 596 + +which lack in data from older speakers in order to 597 + +reduce the word error rate for such speakers. 598 + +Our final ASR system will be used to automati- 599 600 cally transcribe the entire ethnographic audio data stored in Ismús, i.e. 2,300 hours of audio. We expect the outcome of that process to be in line with the results presented in this paper, with verse, + +nursery rhymes, singing etc. still remaining a chal- 605 lenge for the customised model, but accuracy for + +spontaneous speech to be more reliant on audio 607 quality and clarity of speech. Where the quality of these two factors is high, we expect the system to + +perform well. 610 + +Even though the WER may differ substantially for some files, the general outcome will nonetheless be a somewhat readable version of the Is-mús ethnographic collection. That output can sub- + +sequently be used in a number of ways: mak- 615 ing the data in Ismús more accessible for the + +user, both laymen and researchers, indecing the 617 archives for search queries (useful for longer audio files where the description can not do the en- + +tire content justice), and as a hypothesis transcript 620 for post-editing of more transcripts. + +The Gamli corpus itself should provide an inter- 622 esting challenge to ASR researchers interested in + +spontaneous speech, older speakers, noisy audio, 625 historical recordings and historical dialects. + +627 + +## References + +628 + +629 + +Kristín Bjarnadóttir, Kristín Ingibjörg Hlynsdóttir, and 630 + +Steinbór Steingrímsson. 2019. DIM: The Database 631 + +of Icelandic Morphology. In Proceedings of the 632 + +22nd Nordic Conference on Computational Linguis- 633 tics, Turku, Finland. + +634 + +Michael Gref, Nike Matthiesen, Sreeni- 635 + +vasa Hikkal Venugopala, Shalaka Satheesh, 636 + +Aswinkumar Vijayananth, Duc Bach Ha, 637 + +Sven Behnke, and Joachim Köhler. 2022. 638 https://doi.org/10.48550/ARXIV.2201.06868 A + +study on the ambiguity in human annotation of ger- 639 + +man oral history interviews for perceived emotion 640 + +recognition and sentiment analysis. 641 + +642 + +Michael Gref, Oliver Walter, Christoph Schmidt, Sven 643 + +Behnke, and Joachim Köhler. 2020. Multi-staged 644 + +cross-lingual acoustic model adaption for robust 645 speech recognition in real-world applications-a case + +study on german oral history interviews. arXiv 646 + +preprint arXiv:2005.12562. 647 + +Staffan Hedström, Judy Y. Fong, Ragn- + +649 heiður pórhallsdóttir, David Erik Mollberg, Smári Freyr Guǒmundsson, Ölafur Helgi Jónsson, Sunneva Porsteinsdóttir, Eydís Huld Magnúsdóttir, and Jon Gudnason. 2022. 
Staffan Hedström, Judy Y. Fong, Ragnheiður Þórhallsdóttir, David Erik Mollberg, Smári Freyr Guðmundsson, Ólafur Helgi Jónsson, Sunneva Þorsteinsdóttir, Eydís Huld Magnúsdóttir, and Jón Guðnason. 2022. Samrómur unverified 22.07. CLARIN-IS. http://hdl.handle.net/20.500.12537/265

Inga Rún Helgadóttir, Róbert Kjaran, Anna Björk Nikulásdóttir, and Jón Guðnason. 2017a. Althingi's parliamentary speeches. CLARIN-IS. http://hdl.handle.net/20.500.12537/277

Inga Rún Helgadóttir, Róbert Kjaran, Anna Björk Nikulásdóttir, and Jón Guðnason. 2017b. Building an ASR corpus using Althingi's parliamentary speeches. In Interspeech.

Photina Jaeyun Jang and Alexander G. Hauptmann. 1999. Improving acoustic models with captioned multimedia speech. In Proceedings IEEE International Conference on Multimedia Computing and Systems, volume 2, pages 767-771. IEEE.

Tom Ko, Vijayaditya Peddinti, Daniel Povey, Michael L. Seltzer, and Sanjeev Khudanpur. 2017. A study on data augmentation of reverberant speech for robust speech recognition. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5220-5224. IEEE.

Richard P. Lippmann. 1997. Speech recognition by machines and humans. Speech Communication, 22(1):1-15. https://doi.org/10.1016/S0167-6393(97)00021-6

David Erik Mollberg, Ólafur Helgi Jónsson, Sunneva Þorsteinsdóttir, Steinþór Steingrímsson, Eydís Huld Magnúsdóttir, and Jón Guðnason. 2020. Samrómur: Crowd-sourcing data collection for Icelandic speech recognition. In International Conference on Language Resources and Evaluation.

Anna B. Nikulásdóttir, Inga R. Helgadóttir, Matthías Pétursson, and Jón Guðnason. 2018. Open ASR for Icelandic: Resources and a baseline system. In Proc. LREC, volume 2018.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. https://doi.org/10.1109/ICASSP.2015.7178964

Michael Picheny, Zoltán Tüske, Brian Kingsbury, Kartik Audhkhasi, Xiaodong Cui, and George Saon. 2019. Challenging the boundaries of speech recognition: The MALACH corpus. arXiv preprint arXiv:1908.03455.

Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society. IEEE Catalog No.: CFP11SRW-USB.

Josef Psutka, Pavel Ircing, Josef V. Psutka, Vlasta Radová, William J. Byrne, Jan Hajič, Samuel Gustman, and Bhuvana Ramabhadran. 2002. Automatic transcription of Czech language oral history in the MALACH project: Resources and initial experiments. In Text, Speech and Dialogue: 5th International Conference, TSD 2002, Brno, Czech Republic, September 9-12, 2002, Proceedings, pages 253-260. Springer.

Helga Svala Sigurðardóttir. 2021. Text normalization corpus 21.10 (2021-10-25). CLARIN-IS. http://hdl.handle.net/20.500.12537/158

Atli Sigurgeirsson, Þorsteinn Gunnarsson, Gunnar Örnólfsson, Eydís Magnúsdóttir, Ragnheiður Þórhallsdóttir, Stefán Jónsson, and Jón Guðnason. 2021. Talrómur: A large Icelandic TTS corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 440-444, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden. https://aclanthology.org/2021.nodalida-main.50
Steinþór Steingrímsson, Jón Guðnason, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2017. Málrómur: A manually verified corpus of recorded Icelandic speech. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 237-240, Gothenburg, Sweden. Association for Computational Linguistics. https://aclanthology.org/W17-0229

Steinþór Steingrímsson, Sigrún Helgadóttir, Eiríkur Rögnvaldsson, Starkaður Barkarson, and Jón Guðnason. 2018. Risamálheild: A very large Icelandic text corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). https://aclanthology.org/L18-1690

Ravichander Vipperla, Steve Renals, and Joe Frankel. 2008. Longitudinal study of ASR performance on ageing voices.

Rósa Þorsteinsdóttir. 2013. Ísmús (íslenskur músík- og menningararfur): An open-access database. The Retrospective Methods Network Newsletter, 7:97-101.
\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..45773934ac69be78b006e67159b121edf16c32d2
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/QhOp8oE2Pm/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,483 @@

§ GAMLI - ICELANDIC ORAL HISTORY CORPUS: DESIGN, COLLECTION AND EVALUATION

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

§ ABSTRACT

This paper presents Gamli, an ASR corpus for Icelandic oral histories, the first of its kind for this language, derived from the Ísmús ethnographic collection. Corpora for oral histories differ in various ways from corpora for general ASR: they contain spontaneous speech, multiple speakers per channel, noisy environments, the effects of historic recording equipment, and typically a large proportion of elderly speakers. Gamli contains 188 hours of aligned speech and transcripts, split into a training set and a test set. We describe our approach for creating the transcripts, through both optical character recognition (OCR) of previous transcripts and post-editing of ASR output. We also describe our approach for aligning, segmenting and filtering the corpus, and finally for training a Kaldi ASR system, which achieves a 22.1% word error rate (WER) on the Gamli test set, a substantial improvement over the 53.4% word error rate of a baseline general ASR system for Icelandic.

§ 1 INTRODUCTION

Icelandic open-licensed speech corpora have grown in volume and number in recent years; there are now Talrómur (Sigurgeirsson et al., 2021), Málrómur (Steingrímsson et al., 2017), Samrómur (Mollberg et al., 2020) and the Althingi Parliamentary Speeches corpus (Helgadóttir et al., 2017b; Nikulásdóttir et al., 2018), to name a few.
However, both historical speech and older speakers are underrepresented in these corpora. For instance, in Samrómur, the largest open-licensed ASR corpus for Icelandic (2,233 hours in the latest release (Hedström et al., 2022)), only 4.8% of speakers are over 60 years old.

Gamli, the oral history speech corpus presented in this paper, differs from these corpora in several ways. Firstly, it predominantly contains spontaneous speech in the form of interviews; secondly, it has a very high ratio of older speakers (94.8% of speakers are over 60 years old); thirdly, background noise is common, as are noise artefacts from historical recording equipment; and lastly, historic dialects (word choice and accent) are much more prevalent than in existing corpora.

The corpus contains 188 hours of aligned speech and transcripts split into a training set and a test set. This data, based on valuable historical 20th-century recordings stored at the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies, is therefore an important addition to the existing Icelandic speech corpora.${}^{1}$

The custom ASR system presented in this paper, along with the corpus, will in due course be used to automatically transcribe all of the ethnographic audio recordings stored at the institute. The transcripts will then be made available on the online portal Ísmús${}^{2}$ and paired with the respective recordings.

${}^{1}$ The corpus is available under an open license at https://anonymo.us/gamli

${}^{2}$ www.ismus.is

§ 2 RELATED WORK

For many years, ASR systems have been trained on unaligned transcriptions (Panayotov et al., 2015) and even on approximate transcriptions of spontaneous speech (Jang and Hauptmann, 1999). In the case of Icelandic ASR for spontaneous speech, there has been an ongoing project (Helgadóttir et al., 2017b,a) to align and filter Icelandic parliamentary transcripts for ASR in order to reduce the manual work involved in transcribing parliamentary proceedings. Creating those corpora involves text normalization, time-alignment, and filtering of utterances.

While ASR for oral histories is new for Icelandic, it is already being used for other languages. The first large project was the MALACH project (Psutka et al., 2002) in 2002, where ASR transcriptions were used for indexing oral history archives and making them more searchable. However, some authors still consider oral history speech recognition an open problem (Picheny et al., 2019; Gref et al., 2020), and a recent study (Gref et al., 2022) found a human word error rate of 8.7% on a German oral history corpus (taking case-sensitivity and the annotation of hesitations into account). In contrast, Lippmann (1997) found a human word error rate of less than 4% on the Switchboard corpus of spontaneous telephony speech and less than 0.4% on the Wall Street Journal corpus of clear read speech. This suggests that the minimum possible word error rate for ASR may be much higher for oral histories than for cleaner speech corpora.

One other factor that makes oral history ASR an interesting challenge is the particularly high ratio of older speakers. Vipperla et al. (2008) note that for general ASR models, WER correlates strongly with age, even throughout a single speaker's lifetime.
This could be caused by multiple changes in aging voices, such as a slower speaking rate, changes in F0 (a decrease for males and an increase for females), and an increase in jitter and shimmer (Vipperla et al., 2008), some of which could be mitigated by increasing the number of older speakers in the training set. However, other changes might not be so easily addressed, such as a reduction in tongue and jaw strength and an increase in breathiness (Vipperla et al., 2008), which can reduce articulatory precision.

§ 3 ORIGIN OF THE CORPUS

The ethnography collection of the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies contains more than 2,300 hours of audio recordings of oral heritage and traditions, with a little less than 2,500 interviewees. The oldest material consists of recordings made on wax cylinders in the early 20th century, and the collection is continually expanding, with new material added every year.

The bulk of the collection, however, consists of recordings from the 1960s and 1970s, mainly the work of three collectors. Their focus was to gather ethnographic material from the whole country, first and foremost from the older generations: the majority of the informants were born before or around the turn of the 20th century.

This resulted in an extensive collection of legends and fairy tales, accounts of beliefs and customs, poems, hymns, nursery rhymes, Icelandic ballads (rímur), occasional verses and more, with the material being variously spoken, sung or chanted. Apart from recited verse and that which is sung or chanted, the speech is spontaneous. Accompanying the recordings is detailed metadata on the speaker and the time and location of recording, as well as various other parameters such as genre (for different kinds of verse or prose material, e.g. poems or nursery rhymes, fairy tales or legends, etc.), mode of performance (sung, chanted, spoken), keywords, content (a short summary or description), and tale-types and motifs (in folktales and legends).

§ 3.1 SPEAKER DISTRIBUTION IN THE COLLECTION

In their work the collectors mainly relied on a snowball method of sorts, asking speakers to point them to other possible informants, as well as contacting teachers or clergy to enquire about interesting subjects in their region. Speaker profession is often listed in the metadata, but there is no information about education; most of the speakers were common people, i.e. workers, farmers, fishermen, housewives, etc., with little formal education.

Gender was probably not a decisive factor at the outset, and the overall ratio is 57.6% male speakers to 42.4% female, based on the number of speakers. However, if the audio length for each gender is considered, the difference increases considerably: 1,504 hours (65%) for men vs. 821 hours (35%) for women.

As mentioned, the data in the collection also stands out in that the age of the speakers is higher than in other existing Icelandic corpora. The oldest speaker in the collection was 105 years old at the time of recording in 1954, and the oldest speaker with regard to date of birth was born in 1827 and recorded in 1904 (not included in the Gamli corpus). In fact, 72.4% of the speakers are older than 63, and 31.4% are 71-80 years old. In Gamli this ratio is substantially higher, as detailed in Section 4.
§ 3.2 REGIONAL FEATURES IN PRONUNCIATION

The speakers in the collection are from all over the country and therefore reflect the various regional differences in pronunciation much better than recently recorded speech corpora such as Samrómur, since these regional features have either already more or less disappeared or are gradually disappearing. Among these features are, for example, the "hard" pronunciation of /p, t, k/ (still a distinct feature) and the voiced pronunciation of /l, m, n/ before /p, t, k/ in North Iceland, the rn-, rl-pronunciation in South-East Iceland, monophthongs before /ng, nk/ in the North-West, etc.

While these features are not tagged in any way in the Gamli corpus, the ASR system trained on the corpus seems to perform well on them, with the possible exception of labial or velar stops before [ð], such as [hapðɪ] instead of [havðɪ] for hafði, or [lakðɪ] instead of [laɣðɪ] for lagði. We have, however, not inspected this systematically, so further study is needed before the precision can be stated with any certainty.

§ 3.3 RECORDING PROCEDURE

Most of the recordings were made at the speakers' homes, in many cases retirement homes, and carried out by the interviewer. It was not uncommon that other people, e.g. children or spouses, were present during the recording sessions, but in most cases they were not meant to play a part in the recording. Because of this, and for various other reasons, some background noise and disturbances occur in the recordings, e.g. children playing, traffic sounds or phones ringing, but these are generally not prominent.

Most of the recordings were made using high-quality reel-to-reel tape recorders, although some were made by amateurs who were not as well equipped, while a part of the recordings come from the recording studios of The Icelandic National Broadcasting Service (Þorsteinsdóttir, 2013).

The digitization of these recordings began in the late 1990s and continued into the early 2000s, with the recordings converted into WAV format as well as compressed MP3s for online use.

§ 4 CORPUS CONTENT

Gamli contains 188 hours of transcribed audio, broken down into:

1. ~145 hours from optical character recognition (OCR) of previous transcriptions in various formats

2. ~43 hours of new transcriptions (post-edited from ASR output)

The 145 hours include ~8 hours defined as a test set, which was manually reviewed, corrected and annotated with speaker IDs and time alignments in the annotation tool ELAN. The test set contains recordings of 10 speakers, 5 women (239 minutes) and 5 men (219 minutes), plus the interviewers (4 men), and serves for evaluating the system's performance.

A validation set has not been defined for the corpus, as the acoustic model training in Kaldi (Povey et al., 2011) used a random sample of the training corpus for validation.
Data split    Hours    Male speakers    Female speakers    Total speakers
Training      180      115              85                 200
Test          8        5                5                  10

Table 1: Data splits in Gamli

§ 4.1 SPEAKER DISTRIBUTION IN THE CORPUS

The corpus contains 210 unique speakers, 90 women and 120 men (plus the interviewers: 13 men and 1 woman). At the outset we aimed to have the gender ratio as equal as possible in the acoustic training data, but with three men surpassing 20 hours of speech each (one topping 29 hours) and accounting for more than one third of the entire data, that picture became quite distorted. As a result, the gender bias in the corpus is even greater than in the collection itself, which is unfortunate but simply reflects the data that was at hand, i.e. 73.5% vs. 26.5%, cf. Section 4.2. The ages range from 38 to 99, but most of the speakers are 60+ (94.8%), as shown in Figure 1, and the average age of the speakers is 77 years. This ratio is unprecedented among existing corpora for Icelandic speech (cf. the 4.8% in Samrómur noted in Section 1) and makes Gamli an important addition to that collection.

Figure 1: Age distribution of unique speakers in the training set

Figure 2: Age distribution of unique speakers in the test set

§ 4.2 CORPUS COMPILATION

As mentioned, the largest part of the corpus, about 145 hours, stems from OCR of transcriptions held at the Department of Ethnology and Folklore at The Árni Magnússon Institute for Icelandic Studies. These transcripts, generated over several decades, are not all in the same format (e.g. typewritten, dot-matrix printed, printed Word documents) and therefore first needed to be processed, i.e. scanned and OCRed (with results that varied depending on the format). The transcripts were then catalogued and paired with the respective recordings.

Once this ready data had been processed, the first ASR output was produced and manually corrected. During that process it became evident that some of the recordings were ill suited at this stage, as they often contained poetry, nursery rhymes and in some cases singing, where the ASR system could not be expected to do well, since the focus was on spontaneous speech, where it performed much better (cf. Section 6).

As a result, we made use of the detailed metadata search parameters in the Ísmús portal in order to select the best in-domain data for further training. We mainly relied on the so-called form parameter (genre) to exclude everything but spontaneous speech. This gave much better results and yielded the 43 hours of post-edited data mentioned in Section 4.

§ 4.3 NORMALIZING, ALIGNING, SEGMENTING AND FILTERING THE TRANSCRIPTS FOR ASR TRAINING

A large part of the transcripts did not have time alignments, and some had OCR spelling errors. We therefore had to process the utterances before using them to train the acoustic model. To do this, we first normalized all sentences using the Regina normalizer developed in (Sigurðardóttir, 2021) before aligning the transcripts to the audio and segmenting them. This step also removes sections with out-of-vocabulary words, which should account for errors stemming from the OCR.

We then filtered those segments, removing any that were deemed unintelligible to an intermediate ASR system. For this, a language model biased towards the words that appear in the utterance's transcription is applied to each segment, and segments where the system could not decode the words of the transcript are removed. This is an iterative process: an acoustic model is used to filter the training data, that data is used to train a new acoustic model, and the new model can in turn be used to re-align and re-filter the training data. These segmenting and filtering steps were all done with the Kaldi scripts Segment long utterances nnet3${}^{3}$ and Clean and segment data nnet3${}^{4}$; a schematic sketch of the loop is given below.

${}^{3}$ https://github.com/kaldi-asr/kaldi/blob/master/egs/wsj/s5/steps/cleanup/segment_long_utterances_nnet3.sh

${}^{4}$ https://github.com/kaldi-asr/kaldi/blob/master/egs/wsj/s5/steps/cleanup/clean_and_segment_data_nnet3.sh
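The sketch below makes the iterative filtering loop explicit. It is only illustrative: decode_with_biased_lm and train_acoustic_model are hypothetical stand-ins for the Kaldi recipes cited above, not functions from the actual pipeline.

    def filter_segments(segments, decode_with_biased_lm, max_miss_ratio=0.3):
        # Keep segments whose transcript words can be recovered by a decode
        # with a language model biased towards that very transcript.
        kept = {}
        for seg_id, ref in segments.items():
            hyp = set(decode_with_biased_lm(seg_id, ref))
            missed = sum(1 for word in ref if word not in hyp)
            if ref and missed / len(ref) <= max_miss_ratio:
                kept[seg_id] = ref
        return kept

    def iterative_cleanup(segments, train_acoustic_model, make_decoder, rounds=2):
        # Alternate between training on the current data and re-filtering
        # with the resulting model, as described in Section 4.3.
        model = train_acoustic_model(segments)
        for _ in range(rounds):
            segments = filter_segments(segments, make_decoder(model))
            model = train_acoustic_model(segments)  # retrain on cleaned data
        return model, segments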
§ 5 MODELS (AND OUT-OF-DOMAIN DATA)

We trained a hybrid ASR system in Kaldi; that is, the language model and acoustic model were trained separately, as opposed to an end-to-end system. For the acoustic and language models in the custom ASR system, we expanded the training sets with various out-of-domain data, described in the following sections.

§ 5.1 ACOUSTIC MODEL

An acoustic model learns to map audio to a sequence of phonemes. Our acoustic model is a TDNN (time-delay neural network) chain model trained in Kaldi. It was trained on the in-domain data described above, but also on various out-of-domain data, which included the following datasets:

1. Althingi's Parliamentary Speeches,${}^{5}$ a corpus of 514.5 hours of recorded speech from the Icelandic parliament (Helgadóttir et al., 2017a).

2. 114.6 hours of speech from the first Samrómur release,${}^{6}$ leaving out children.

3. 173.1 hours of unverified Samrómur data,${}^{7}$ containing only speech from men aged 50+ and women aged 60+.

4. 228.2 hours of the RÚV TV unknown speakers dataset.${}^{8}$

Data augmentation was also used to triple the entire training set: we added artificial noise and reverberation. For noisy data sets, e.g. call-centre data sets, this has been reported to give better results than speed perturbation (Ko et al., 2017), and as described earlier, background noise and disturbances are not uncommon in the data.

§ 5.2 LANGUAGE MODEL

A language model is necessary for outputting coherent text; it learns a probability distribution over word sequences from a training corpus. Our language model is an n-gram model: a 3-gram for decoding and a 4-gram for rescoring (a toy illustration of n-gram estimation follows the source list below). It was trained on in-domain data from the Gamli training set described in Section 4.2, both the pre-existing transcripts and those resulting from the proofread ASR output. The out-of-domain data stems from the following sources:

1. The Icelandic Gigaword Corpus (IGC) (Steingrímsson et al., 2018); we use word forms from the 2022 version of the IGC.${}^{9}$

2. Ethnographic data from the National Museum of Iceland in Sarpur.${}^{10}$

3. Audio file descriptions from Ísmús,${}^{11}$ for their content.

4. Place name data from the Icelandic Place Name Collection.${}^{12}$
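As a concrete illustration of what an n-gram model estimates, the toy snippet below counts n-gram continuations and computes additively smoothed conditional probabilities. It is purely didactic; the actual 3-/4-gram models were built with standard tooling, not this code.

    from collections import Counter, defaultdict

    def count_ngrams(sentences, n=3):
        # Collect counts of words following each (n-1)-word history.
        follow, totals = defaultdict(Counter), Counter()
        for words in sentences:
            padded = ["<s>"] * (n - 1) + list(words) + ["</s>"]
            for i in range(n - 1, len(padded)):
                history = tuple(padded[i - n + 1:i])
                follow[history][padded[i]] += 1
                totals[history] += 1
        return follow, totals

    def ngram_prob(follow, totals, history, word, vocab_size, alpha=0.1):
        # P(word | history) with additive (add-alpha) smoothing.
        h = tuple(history)
        return (follow[h][word] + alpha) / (totals[h] + alpha * vocab_size)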
§ 5.3 VOCABULARY AND PRONUNCIATION DICTIONARY

The pronunciation dictionary maps words to sequences of phonemes. For the vocabulary we used:

1. All the word forms from The Database of Icelandic Morphology (Bjarnadóttir et al., 2019).

2. OOV words from audio file descriptions in Ísmús.

3. Vocabulary from the training set (only the data that was manually transcribed, not the OCR data), manually checked and added where appropriate.

4. OOV words from Sarpur, manually checked and added where appropriate.

To get the phonemic transcription of each word, a G2P model based on the Icelandic Pronunciation Dictionary for Language Technology${}^{13}$ was used.

${}^{5}$ Available at: http://hdl.handle.net/20.500.12537/277

${}^{6}$ Available at: http://hdl.handle.net/20.500.12537/189

${}^{7}$ Available at: http://hdl.handle.net/20.500.12537/265

${}^{8}$ Available at: http://hdl.handle.net/20.500.12537/191

${}^{9}$ http://hdl.handle.net/20.500.12537/254

${}^{10}$ https://sarpur.is/

${}^{11}$ https://ismus.is/

${}^{12}$ nafnid.is

${}^{13}$ Available at: http://hdl.handle.net/20.500.12537/99

§ 6 EVALUATION

To assess the final ASR system's performance on the test set, we use the Samrómur TDNN model as a baseline, a model trained on a well-known dataset of read Icelandic speech. While the baseline Samrómur system achieved a 53.4% WER on the Gamli test set, the final ASR system performed much better, achieving a 22.1% WER on the same set, as shown in Table 2. This compares the two systems as wholes, each with its own acoustic model, language model and vocabulary.

To investigate the differences between the two systems, we also compare their performance when taking demographic information into account, in Figure 3. As stated earlier, the test set contains 10 speakers and a total of 8 hours of audio.

There appears to be a possible slight correlation between age and WER for the baseline system but not for the final system, though it should be noted that the test set has too few data points to draw any significant conclusions. There is one outlier in the test set for both systems, an 85-year-old man recorded in 1966; upon manual inspection of the audio, it seems the speaker has particularly slurred speech and there is some noise from the recording equipment.

Figure 3: WER on the Gamli test set for the 10 unique speakers in the test set, based on demographic information

System                 WER      OOV-rate, total words    OOV-rate, unique words
Baseline (Samrómur)    53.4%    1.1%                     6.8%
Final                  22.1%    0.5%                     3.1%

Table 2: ASR performance on the Gamli oral history test set
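For reference, WER is the word-level Levenshtein (edit) distance between hypothesis and reference, divided by the reference length. A self-contained implementation is sketched below; it is illustrative only, as the reported numbers come from the standard Kaldi scoring.

    def wer(ref, hyp):
        # Word error rate: word-level edit distance over reference length.
        r, h = ref.split(), hyp.split()
        # dp[i][j] = edits to turn the first i reference words into the
        # first j hypothesis words.
        dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            dp[i][0] = i  # i deletions
        for j in range(len(h) + 1):
            dp[0][j] = j  # j insertions
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
                dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
        return dp[len(r)][len(h)] / max(len(r), 1)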
§ 7 CONCLUSION AND FUTURE WORK

In this paper we have presented Gamli, a corpus suitable for training speech recognition systems. We have aligned and segmented Icelandic oral histories from manual transcriptions (both OCRed typewritten transcripts and post-edited ASR output) and filtered out unintelligible segments. We have described the compilation of the corpus, which has been published under an open license, the origins of the data, and the evaluation of an ASR system trained on the corpus. We have shown that using the corpus along with other relevant datasets can substantially lower the WER for historical speech data, from 53.4% with a baseline model to 22.1%. We also conclude that the corpus could be combined with other ASR training sets that lack data from older speakers in order to reduce the word error rate for such speakers.

Our final ASR system will be used to automatically transcribe the entire ethnographic audio collection stored in Ísmús, i.e. 2,300 hours of audio. We expect the outcome of that process to be in line with the results presented in this paper, with verse, nursery rhymes, singing, etc. still remaining a challenge for the customised model, while accuracy for spontaneous speech will depend more on audio quality and clarity of speech. Where both of these are good, we expect the system to perform well.

Even though the WER may differ substantially for some files, the general outcome will nonetheless be a somewhat readable version of the Ísmús ethnographic collection. That output can subsequently be used in a number of ways: making the data in Ísmús more accessible to users, both laymen and researchers; indexing the archives for search queries (useful for longer audio files whose descriptions cannot do the entire content justice); and serving as hypothesis transcripts for post-editing of more transcripts.

The Gamli corpus itself should provide an interesting challenge to ASR researchers interested in spontaneous speech, older speakers, noisy audio, historical recordings and historical dialects.
\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TFZGxtsyk3/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TFZGxtsyk3/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..0c005e93e981a8c0a9a8536ff47ca6830d3650b1
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TFZGxtsyk3/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,333 @@

# Adapting an Icelandic morphological database to Faroese

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract

This paper describes the adaptation of the database system developed for the Database of Icelandic Morphology (DIM) to the Faroese language, and the creation of the Faroese Morphological Database using that system from lexicographical data collected for a Faroese spellchecker project.

## 1 Introduction

The Faroese Morphological Database (FMD)${}^{1}$ is the result of a joint project of the Árni Magnússon Institute for Icelandic Studies and the University of the Faroe Islands. It consists of entries for Faroese words (lexemes) with complete paradigms, including variants. Various kinds of metadata are included. It is based on a previously existing project in Iceland, the Database of Icelandic Morphology (Bjarnadóttir et al., 2019)${}^{2}$, and makes use of language data collected for a previous Faroese-language project, the spellchecker Rættstavarin.${}^{3}$ Data from DIM is used in countless language technology projects in Iceland, including smart search engines, spell-checking and hyphenation tools, taggers and parsers, speech recognition tools and online word games, and DIM is also a popular online resource for the general public.
It is hoped that the new Faroese sister project will grow to be as successful in spurring the development of language technology in the Faroe Islands and aiding the general public, researchers and language students in the use and study of the Faroese language.

### 1.1 Goals

The aim was to publish the FMD with the available lexical data from Rættstavarin as well as the list of given names published by the Faroese Language Council. ${}^{4}$ The basic features of the DIM system were used to generate all inflected forms, displaying searchable inflectional paradigms on the web and providing data for download, including all the inflected forms with POS tags, lemmas and basic metadata.

Secondary goals included adding more metadata, such as tags for specific morphological, syntactic and pronunciation features, dialects, etc. Recent additions to the DIM system were also tested, in anticipation of their future use for Faroese. ${}^{5}$

Ultimately, the FMD should include all extant forms of all words in the Faroese language, along with as much useful metadata as possible. Of course "all words" is a utopian ideal, as languages are constantly evolving and more vocabulary is both created and discovered, but it is feasible in the relatively near future to have added essentially all vocabulary from available digital texts and to have a pipeline for semi-automatically adding newly discovered vocabulary on a regular basis. In this initial project period we focused on readily available data from lexicographical sources.

## 2 Linguistic similarity

Faroese and Icelandic share many features, such as three grammatical genders (masculine, feminine and neuter) and the four-case system of nominative, accusative, dative and genitive. Although the genitive is used much less in Faroese than in Icelandic, it certainly exists and is morphologically similar. Nouns have inherent gender, while adjectives and determiners inflect for gender. Verbs inflect for mood, tense, person and number (Thráinsson et al., 2012). A full list of inflectional categories will be provided on the FMD website, in the same manner as on the DIM website.

---

${}^{1}$ https://bendingar.fo

${}^{2}$ https://bin.arnastofnun.is/DMII/

${}^{3}$ Rættstavarin is available as part of the Divvun language tool package at https://divvun.org/, and the source code is available on GitHub: https://github.com/giellalt/lang-fao; a description of the project (in Faroese) may be found at https://www.setur.fo/fo/setrid/almennar-taenastur-og-grunnar/raettstavarin/

${}^{4}$ http://malrad.fo/page.php?Id=38&l=fo

${}^{5}$ See the description of the classification system in Bjarnadóttir et al. (2019).

---

Due to these similarities it was evident from the start that all the tools and methods that had been developed for DIM could be applied to Faroese with only minimal changes; even the web interface can be presented in much the same way, with Faroese linguistic terms replacing the Icelandic terms (e.g. singular, nominative, comparative, etc.). At this initial stage of the project, the focus was on the main features of the system, though detailed tagging was employed for some particularly important or interesting morphological and pronunciation features.

The database system for the FMD is run on a copy of the DIM system. More or less the complete software system from DIM has been set up for the FMD.
The system includes the database backend, import tools, and the website, with both online lookup and export functions for language technology projects.

## 3 Building the database

The premise of the project was to make use of existing data, and by far the largest set of lexicographical data available was the data from Rættstavarin. It, in turn, is largely derived from data from the electronic version of the Faroese dictionary (Poulsen, 1998; web version 2007, currently available at sprotin.fo). Another piece of low-hanging fruit was the official Faroese Language Council list of given names.

### 3.1 System comparison

The spellchecker data has words categorised by inflectional category according to a classification scheme which was created for the electronic version of the Faroese dictionary and slightly modified and expanded for the spellchecker. The spellchecker software has a template-based system that generates inflected forms from source files in which each entry contains a lemma, a single template parameter and the name of the appropriate inflection pattern; there is a template for each pattern.

The FMD (and DIM), somewhat similarly, uses a template-based system to generate inflected forms, though the conventions for parameters are different (more than one parameter may be used to represent stem variations) and a relational database system is used rather than text files. The inflected forms are then stored in a table linked to the main table containing word entries. Additionally, a set of switches enables or disables the generation of specific sections of the inflectional paradigm, such as singular or plural, definite and indefinite forms for nouns, the different moods, voices and participles of a verb, etc. The first step for each inflection pattern, then, was to create a template for it. Then the list of words with that pattern from the spellchecker data could, in theory, be transformed with a simple script into the correct import format, as long as the inflectional patterns were compatible.

### 3.2 Adapted classification and error correction

Indeed, the FMD has largely followed the spellchecker's inflection classification scheme, but it has been necessary to add new patterns to account for the subtler variations in word inflections in Faroese. For example, a number of words had been assigned a pattern which correctly accounts for their most usual or regular inflected forms, but fails to account for certain variant forms: perhaps remnants of an older inflection, perhaps novel variants, sometimes dialectal forms, archaic forms or forms used in fixed expressions. Unless assigned a different inflection template, these words would therefore be missing some of their inflected forms. In other cases the templates would have produced erroneous inflected forms.

Some accidental errors were inherited from the Faroese dictionary, while some had been introduced by the spellchecker project, and many of them were clearly the result of lack of care either in choosing the correct pattern, e.g. forgetting that a neuter noun whose stem ends in $-s$ needs a pattern that does not add an extra $-s$ in the genitive singular form, or in typing the pattern name, e.g. writing kv6 (feminine pattern 6) instead of k6 (masculine pattern 6). These could often be corrected by assigning the words another existing pattern, but for many words new templates were needed.
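As an illustration of the template mechanism described in Section 3.1, the following toy sketch generates a paradigm from a stem, a set of endings and a set of switches. All names, slots and endings are invented for illustration and do not reflect the actual FMD templates or real Faroese inflection.

```python
# Toy illustration of a template-based paradigm generator of the kind
# described in Section 3.1; pattern names and endings are invented.
from dataclasses import dataclass, field

@dataclass
class Template:
    name: str
    endings: dict          # paradigm slot -> inflectional ending

@dataclass
class Entry:
    lemma: str
    stem: str
    template: Template
    switches: set = field(default_factory=set)  # disabled paradigm sections

def paradigm(entry: Entry) -> dict:
    """Generate every inflected form whose paradigm section is not switched off."""
    return {slot: entry.stem + ending
            for slot, ending in entry.template.endings.items()
            if not any(slot.startswith(s) for s in entry.switches)}

# An invented pattern for demonstration only (NOT real Faroese endings):
m_toy = Template("m-toy", {"sg-nom": "i", "sg-acc": "a",
                           "pl-nom": "ar", "pl-acc": "ar"})
print(paradigm(Entry("beiggi", "beigg", m_toy, switches={"pl"})))
```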
In some cases a word needs a pattern of its own due to the irregularity of its inflection. There were also other errors in the spellchecker data, such as typos and spelling errors and incorrectly entered template parameters.

It quickly became apparent that the number of errors in the source material was too great to leave unchecked. It would also be easier to identify and correct them early on, while still working with the data in text files, rather than risking overwriting subsequent edits to database entries, particularly comment fields and other metadata, by updating them en masse later on.

The database system also requires that words be designated as base words or compounds, and a binary split point is required for compounds; e.g. the compound noun havnarkona is written havnar_kona in the lemma field to indicate that it is composed of havnar- and kona. Compounding had been indicated to some extent in the spellchecker data, but haphazardly and with some errors.

These factors led to the conclusion that all words needed to be reviewed manually, though often somewhat cursorily due to time limitations, chiefly focusing on splitting compounds and checking for obvious errors. Along the way, tagging of morphological, usage and pronunciation characteristics was begun, and it was considered desirable that certain of them should always be tagged if possible, in particular: restriction of a word to a region or dialect; archaic, obsolete or rare usage; irregular correspondence of spelling and pronunciation; and unusual word formation patterns. This became a secondary goal of word review and, while it made the review somewhat more time-consuming, it reduces the need to run through the data a second time later on, which would be even more time-consuming, and therefore serves our long-term goals well. The delay caused by manual review meant that there was no time to gather vocabulary from more sources in this round of the project, but the data has been greatly enriched and its quality improved, so it has been well worth it.

### 3.3 Importation

Data is imported into the FMD via text files in which each line contains a single word entry and may include many required and optional database fields, including the headword, the name of the inflection template, switches to limit the paradigm, and various metadata fields. These were generated semi-automatically from the spellchecker word lists and other sources using regular-expression scripting and then manually reviewed. Templates have been created manually or sometimes semi-automatically from other templates.

#### 3.3.1 Nouns

The inflection of nouns was generally fairly easy to handle, as they do not have as many inflected forms as adjectives or verbs and most of their patterns were already well defined. Even so, many new patterns for nouns needed to be created. For example, weak masculine nouns had only 5 basic patterns in the spellchecker data, with 3 more mixed patterns (combinations of two basic patterns) and one pattern with an irregular variant, a total of 9. In comparison, the FMD currently has 17 different templates for weak masculine nouns. This disparity is largely due to compounds with internal inflection; e.g. lítlibeiggi 'little brother' (accusative lítlabeiggja) has a more complex inflection than pápabeiggi 'father's brother' (accusative pápabeiggja).
As the FMD template system has each inflected form generated from one stem and an inflectional ending, these words usually require more "stems" than other words, to account for the changes in the first half of the compound due to its separate inflection. The Faroese dictionary had not classed these words separately from compounds with an immutable first half, and the spellchecker made no provision for them, although the spellchecker project had already identified them as problematic. However, such compounds are known in Icelandic and had been dealt with successfully in DIM. The FMD has followed the DIM practice of creating a separate version of each template for internally inflected compounds where required.

#### 3.3.2 Verbs and adjectives

Verbs and adjectives have many more inflected forms than nouns, both in Faroese and Icelandic, and partial information on the inflection of these word classes in the available sources was a problem in both projects.

Verb paradigms in the Faroese dictionary are limited, omitting first and second person singular conjugations, as well as the imperative and conjunctive (optative) moods, the present participle and the mediopassive voice. Adjective paradigms also lacked comparative and superlative forms. These were added in the spellchecker project along with an expansion of verb conjugation, but the spellchecker data still contains only active voice conjugations for most verbs, and the comparative and superlative forms of irregular adjectives were not obvious.

In the FMD, the verb templates now support full personal conjugation in active and mediopassive voice and a full declension of the past participle, and full paradigms are also displayed for all adjectives. Variant forms, contained in the Faroese dictionary but not found in the inflection tables or the spellchecker paradigms, have been added to the FMD. Additional variant forms from textual sources, such as online media and the card index of word citations (Seðlasavnið) ${}^{6}$ at the University of the Faroe Islands, have also been added.

Some software modifications were required to support Faroese verbs and adjectives, both of which can be useful for Icelandic as well. The mediopassive imperative singular (without pronominal clitic) had not previously been supported, but proved to be necessary for both languages. The indefinite inflection of the comparative occurs in most Faroese adjectives and was consequently added to the system. This category also exists in Icelandic but is extremely rare.

The greater number of inflected forms of verbs, the need for expanding their paradigms and the greater number of irregular verbs than irregular nouns made the creation of verb templates more time-consuming; but, on the other hand, there are over nine times as many nouns as verbs, which reduced the time needed for review of individual words, so that, overall, the nouns took more time.

#### 3.3.3 Other parts of speech

Inflection patterns for pronouns, determiners, articles and numerals have been created based on data gathered from the relevant dictionary entries, the spellchecker data, and the Faroese grammar by Thráinsson et al. (2012). These never had inflection tables in the dictionary, only inline mentions of inflected forms and usage examples. The inflection of these word classes is relatively simple and does not present problems on a different scale from the work on Icelandic.
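To illustrate the import format of Section 3.3 together with the compound convention of Section 3.2, here is a small hypothetical reader for import lines. Only the underscore convention for compound split points (havnar_kona) comes from the text; the tab-separated field layout, the pattern name and the switch name are invented for this sketch.

```python
# Hypothetical reader for FMD-style import lines (Section 3.3). Only the
# underscore convention for compound split points is taken from the paper;
# the tab-separated layout, pattern and switch names are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImportEntry:
    headword: str
    split_point: Optional[int]   # index of the compound split, if any
    template: str
    switches: set

def parse_line(line: str) -> ImportEntry:
    headword, template, *switches = line.rstrip("\n").split("\t")
    split = headword.find("_")
    return ImportEntry(headword=headword.replace("_", ""),
                       split_point=split if split != -1 else None,
                       template=template,
                       switches=set(switches))

print(parse_line("havnar_kona\tkv1\tsg-only"))
```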
Uninflected word classes are also included in the data; these present no problems and most of them have been added to the FMD.

## 4 Present state

Currently, the FMD contains over 72,000 entries. These include close to 67,000 words added from the spellchecker word lists, about 3,000 more taken directly from the dictionary, either via dictionary data collected for the spellchecker project or manual lookup on the web, and 1,688 given names from the Faroese Language Council's name list. Several hundred words have been added from other sources such as web texts and other published texts, Wiktionary, ${}^{7}$ and Thráinsson et al. (2012).

### 4.1 Future additions

The FMD currently does not cover proper nouns well. More are needed, e.g. place names, company names and surnames. Many of these may be sourced from government lists, phone directories, etc. The Faroese Text Collection ${}^{8}$ has been used as a rough gauge of the completeness of the FMD and can serve as a source for further general vocabulary. Although it only has 1.1 million tokens, at this early stage in the development of the Faroese morphological database it yields some interesting material. It can continue to provide a means of evaluating the progress of the database, i.e. what proportion of unique tokens in the corpus are already in the database and whether the most frequent word forms in the corpus are included. After most or all of the vocabulary in the Faroese Text Collection has been added, we will hopefully have access to a much larger Faroese corpus. We expect that there will be a number of erroneous and nonstandard forms in the corpus data; these will be added to a special part of the database dedicated to that purpose.

---

${}^{6}$ https://sedlasavn.setur.fo/

${}^{7}$ https://en.wiktionary.org/wiki/Category:Faroese_language

${}^{8}$ https://spraakbanken.gu.se/en/resources/fts

---

## References

Kristín Bjarnadóttir, Kristín Ingibjörg Hlynsdóttir, and Steinþór Steingrímsson. 2019. DIM: The Database of Icelandic Morphology. In Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa 2019), pages 146-154. https://www.aclweb.org/anthology/W19-6116.pdf

Jóhan Hendrik W. Poulsen. 1998. Føroysk orðabók. Føroya Fróðskaparfelag, Tórshavn, Faroe Islands.

Höskuldur Thráinsson, Hjalmar P. Petersen, Jógvan í Lon Jacobsen, and Zakaris Svabo Hansen. 2012. Faroese - an overview and reference grammar, second edition. Faroe University Press and Linguistic Institute, University of Iceland, Tórshavn, Faroe Islands and Reykjavík, Iceland.
\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..6e8a48781061679923e1dc1c03b150717b0a698a
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,767 @@

# Integrating rules and neural nets for morphological tagging of Norwegian: Results and challenges

Anonymous Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymouser Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymousest Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

## Abstract

In this paper, we report on efforts to improve the Oslo-Bergen Tagger for Norwegian morphological tagging by using a hybrid system that combines the output of the rule-based Constraint Grammar tagger with a neural sequence-to-sequence model trained for tagging. The results are very promising for cases where the two systems intersect in tokenisation and morphological analysis, but problems remain in integrating the two systems in many cases.

## 1 Introduction

The Oslo-Bergen Tagger (OBT; Hagen and Johannessen 2003; Johannessen et al. 2012) is a widely used tool for morphological tagging of Norwegian text. It has existed in various incarnations for around 25 years, first as a purely rule-based system and later coupled with a statistical module for disambiguation. In this paper, we report on our recent efforts to bring the system into the age of neural networks and show that, even today, the rules boost accuracy considerably over a purely neural system, although there are challenges in combining rules and neural nets due to divergent tokenisations.

The structure of the paper is as follows: In Section 2 we give some historical background on OBT, and Section 3 describes the current status of its rule-based component. Section 4 describes the training and evaluation data that we have used in developing the new system. Section 5 then provides the details of how our neural system was trained, while Section 6 describes how it was combined with the rule system.
Section 7 evaluates the performance of the neural system alone as well as the combined system. Section 8 concludes.

## 2 History of the Oslo-Bergen Tagger

The Oslo-Bergen Tagger was originally developed between 1996 and 1998 by the Tagger Project at the University of Oslo. Rules for morphological and syntactic disambiguation were written in the first version of the Constraint Grammar framework (Karlsson et al., 1995), retrospectively called CG1. The rules were parsed by the only existing CG rule interpreter at the time, developed by Lingsoft AB. The input to CG disambiguation rules is multitagged text, i.e., text where each token has been annotated with all possible lexical analyses. Hence, the project also developed a lexicon with lemmas and inflected forms (later known as Norsk ordbank) and a combined tokenizer/multitagger.

The tagger was developed for both Bokmål and Nynorsk, the two written varieties of Norwegian. In this article, we will only focus on the Bokmål version of the tagger, and only on the tokenizer and the morphological disambiguation.

The first version of the tagger was tested on an unseen evaluation corpus with a wide variety of text genres and achieved an F1-score of 97.2 (Hagen and Johannessen, 2003, 90). The numbers behind the F1-score - a precision of 95.4 and a recall of 99.0 - reveal that the tagger leaves some ambiguity but makes relatively few errors. At the time, this was considered acceptable, as the tagger was mostly used to annotate written corpora for linguistic research, where a high recall was considered more important than a high precision.

In 2000 the rule interpreter was replaced by a reimplementation in Allegro Common Lisp made by Paul Meurer in cooperation with the Text Laboratory at the University of Oslo. At the time, Meurer was employed at Aksis in Bergen, and hence the tagger was named the Oslo-Bergen Tagger (OBT).

Some years later the need for a new upgrade became urgent. Firstly, OBT was quite slow. This was not a big problem in 2000, but soon our corpora were getting bigger, and speed became important. The project Norwegian Newspaper Corpus (2007-2009) gave the Text Laboratory the opportunity to translate the CG1 rules to the new, more efficient and expressive CG3 format and to use a faster rule interpreter made by the VISL project at the University of Southern Denmark. Secondly, the ambiguities that were left in the output from OBT made the tagger unsuitable for many language technology purposes and applications that require the text to be completely disambiguated. We therefore extended OBT with a statistical module, implemented as a Hidden Markov Model, that disambiguated the remaining morphological ambiguities and also provided the system with a new feature: disambiguation of lemmas. The new OBT+Stat system achieved an accuracy of around 96 percent (Johannessen et al., 2012).

In the version of the tagger presented here, we have replaced the original HMM module with one that is based on neural networks. We do this for two reasons: firstly, the new module employs technology that has proven to yield superior results in a variety of NLP tasks. Secondly, the original module did not take into consideration the ambiguity left by the CG rules, meaning that the HMM might select a tag that was previously removed by the disambiguation rules or not even present in the tagger lexicon.
The new machine learning module ranks possible readings by probability, allowing us to find the most probable reading (if any) in the intersection between its output and the remaining CG readings. This means that the work already done by the CG disambiguation rules is not discarded when the intersection is non-empty, but it leaves a question as to what to do if the intersection is empty.

## 3 The rule-based tokenizer and tagger

In this section, we first present some of the main tasks of the tokenizer and multitagger before we give a short description of the constraint grammar module. The tokenizer uses a lexicon with all possible lexical readings, where a reading is a combination of a lemma and a morphosyntactic tag chosen from a set of 149 possible analyses. ${}^{1}$ The lexicon was originally based on Norsk ordbank 2005, ${}^{2}$ but has since been updated with words more recently introduced into the language (such as tvitre 'tweet'). The newest version of the tokenizer is written in Python and mirrors in most cases the original tokenizer written in Perl. There is one major exception: the original system from the late '90s worked according to the strategy "Disambiguate as soon as possible" (Karlsson et al., 1995). This resulted in fixed expressions like blant annet ('among other things' - adverb) and etter hvert ('little by little' - preposition) being allowed - and disambiguated - in the lexicon. In the recent version of the tokenizer, such expressions are removed from the lexicon and the possible ambiguity is dealt with in the CG module. The main principle for the tokenizer is therefore to split tokens on blank space or a sentence delimiter like a full stop or a question mark. For each token identified, the original word form is rendered inside a tag and looked up in the lexicon. Non-sentence-initial capitalized words are identified as proper nouns. Words that exist in the lexicon are assigned all readings found there. If the word is not found in the lexicon and not identified as a proper noun, the word is sent to a compound analyzer. Most unknown words will get an analysis here, as many of them are productively created compounds. Some words will still get the tag ukjent ('unknown') from the tokenizer. These words are often dialect words not standardized in the lexicon, or foreign words. Figure A in the Appendix shows how the tokenizer and multitagger deal with the sentence TV-programmet "Ut i naturen" begynner kl. 21.15. ('The TV program "Ut i naturen" starts at 21.15.'), which has quotation marks, abbreviations, and a time expression.

The tokenizer also identifies sentences using sentence delimiters. A list of known abbreviations and linguistic rules, like the rule "the word including the full stop character is an abbreviation if the word is in the abbreviation list or if the following word is not capitalized", identifies abbreviations like kl. (the abbreviation for "o'clock" used to specify time in Norwegian) in Figure A. Headlines are also identified by rules and get their own tag.

The constraint grammar module takes tokenized and multitagged text as input, and its main task is to reduce the number of readings to ideally one per word. The number of readings left by the multitagger varies a lot. In the test corpus used in this article (which will be further described in Section 4) there are on average 2.04 readings per word. After the CG rules are applied, there are on average 1.09 readings left per word.
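The lookup cascade just described can be summarised in Python roughly as follows. The helper names lexicon_lookup and compound_analyzer are hypothetical stand-ins for the real components, the tag strings are illustrative, and the exact ordering of the checks is our reading of the description above.

```python
# Simplified sketch of the OBT tokenizer's lookup cascade described above;
# lexicon_lookup and compound_analyzer are hypothetical stand-ins.
def multitag(token, sentence_initial, lexicon_lookup, compound_analyzer):
    readings = lexicon_lookup(token)        # all lemma + tag readings
    if readings:
        return readings
    if token[0].isupper() and not sentence_initial:
        return [(token, "subst prop")]      # capitalized -> proper noun
    readings = compound_analyzer(token)     # productively formed compounds
    return readings or [(token, "ukjent")]  # unknown word
```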
---

${}^{1}$ The complete list is available at http://tekstlab.uio.no/obt-ny/morfosyn.html

${}^{2}$ https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-5/

---

Figure B in the Appendix shows the output from the CG module in debug mode for the sentence Rosa cupcakes hører kanskje med når man skal ha bloggtreff? ('Pink cupcakes might be part of a blog meeting?'). Removed readings start with ";", and the ID numbers of the rules applied are appended to each reading. Note that the English loan word cupcakes is not identified in the lexicon or in the compound analyzer and has got the tag ukjent 'unknown'. The compound bloggtreff 'blog meeting' was not in the lexicon but has got two readings from the compound analyzer. As the examples show, there are both REMOVE rules (remove a reading) and SELECT rules (select a reading). A rule can be very simple, like rule 2430 in Figure 1, which says "select the verb infinitive reading if the verb to the left is a modal auxiliary and the word itself is not in the set of dangerous infinitives (= not likely infinitives)".

---

#:2430
SELECT:2430 (verb inf) IF
    (NOT 0 farlige-inf)
    (-1 m-hj-verb) ;

---

Figure 1: Simple SELECT rule

Figure 2 shows an example of a more complex rule with linked context conditions somewhere to the right in the sentence. The rule says: "choose the subjunction reading if somewhere to the right there is a safe noun or pronoun (stop looking if a word on the way has a reading that is not an adverb, adjective or determinative), and if there is a word in the present or past tense after the noun/pronoun (adverbs in between are fine)."

---

#:2579
SELECT:2579 (sbu) IF
    (...)
    (**1C subst/pron BARRIER ikke-adv-adj-det)
    (**1C subst/pron LINK *1 ikke-adv LINK 0 pres/pret) ;

---

Figure 2: More complex SELECT rule

The CG grammar for Bokmål has more than 2300 rules; 1995 of them are SELECT rules. Some rules apply to all possible words, while some are rules for specific word forms. When the original CG grammar was developed, a training corpus of 100,000 words from novels, newspapers and magazines was used. For each new rule added to the grammar, we checked how the rule worked by looking at recall and precision. Most rules remove or choose readings without making too many errors. But in the last period of the project, we made around 250 heuristic rules to speed up the disambiguation. These rules were riskier, but in our small training corpus they worked well. Later in this article, we will see whether the combination of the CG rules and the neural net is affected if the heuristic rules are removed from the grammar.

## 4 Training and evaluation data

The training and evaluation corpus that was used in earlier stages of development of the OBT system is no longer suitable, because the tagset and the tokenisation principles have evolved. Instead of bringing this corpus up to date, we chose to use the Norwegian Dependency Treebank (NDT; Solberg et al. 2014) in the development of the new version of OBT. The Bokmål part of NDT is around 300,000 tokens and consists of blog text, news text, parliament proceedings and government white papers.

The NDT CoNLL data were converted to the format of the OBT. We also extracted the pure text and ran OBT on it without statistical disambiguation, to compare the outputs.
If the NDT analysis was not among the analyses produced by OBT, we either corrected the NDT annotation, if that was the source of the error, or changed the rules of the OBT system, if that could easily be done. This process was iterated a few times. Notice that during this period the whole data set was used for development, as is common with rule-based systems. The goal was to improve both the accuracy of the rule-based disambiguation and the quality of the training data for the neural component.

The performance of the rule-based system by the end of this phase is shown in Table 1. When heuristic rules are used, we see that in 7.5% of cases OBT produces an ambiguous analysis containing the correct tag as one possibility, whereas 1.8% of tokens are only given (one or more) wrong analyses. Disabling the heuristic rules reduces the number of wrong tags by 0.2%, but at the cost of an increase of 3.3% in tokens that get an ambiguous analysis containing the correct tag.

The role of the statistical system is to pick the correct analysis in the ambiguous cases. On its own, the neural net might be able to predict the right analysis even in cases where the rules are wrong. However, this analysis will be discarded when we intersect its output with the rules.
| | tokens | % |
|---|---|---|
| *with heuristic rules* | | |
| unambiguous correct | 280650 | 90.7% |
| ambiguous incl. correct | 23219 | 7.5% |
| wrong | 5413 | 1.8% |
| *without heuristic rules* | | |
| unambiguous correct | 270830 | 87.6% |
| ambiguous incl. correct | 33597 | 10.8% |
| wrong | 4855 | 1.6% |

Table 1: Performance of the rule-based system

For the training of the neural system, we then split the corpus into train-dev-test sets. While doing this, we made sure the output tags in the training set covered all output tags in the dev and test sets, to ensure that the model was trained with samples from all tags. We do this by first initializing the Python random seed as 0, then splitting the data and checking whether the training set covers all tags. If it does not, we increase the random seed by one and repeat until we find a training set that covers all the tags in the other sets. In this way, we randomly split the dataset into 80-10-10 percent partitions to obtain train, dev and test sets respectively.

Finally, the data was reformatted for the neural network. Figure 3 shows an example of input and output for a sentence. The input is the tokenized form of the sentence. The output is the sequence of serialized tags for each token in the input. A separator token indicates that all tags of the corresponding input token have finished and that the tags of the next input token start afterward.

---

INPUT: Men det er bare noe jeg tror .

OUTPUT:
:konj: clb
:pron: 3 ent nøyt pers
:verb: pres
:adv:
:pron: 3 ent nøyt pers
:pron: 1 ent hum nom pers
:verb: pres
:$punc$: clb

---

Figure 3: An example input and output for a sentence.

## 5 The neural system

Recently, a BERT (Devlin et al., 2018) pre-trained encoder (nb-bert-base) was published by the Norwegian National Digital Library (Kummervold et al., 2021). This pre-trained encoder for Norwegian provides a rich feature set that was previously lacking for the language. Furthermore, since the tagged corpus is very small in comparison to the corpus the pre-trained model was trained on, it is important to use the pre-trained model in order to be able to generalize to unseen data. Therefore, we follow an approach similar to that of Omelianchuk et al. (2020) and use a sequence-to-sequence (seq2seq) setting to tag the sentences using the pre-trained model.

Sequence-to-sequence models have two main components: an encoder and a decoder. The encoder side is set to the encoder of nb-bert-base (NbAiLab, 2021). For the decoder, we randomly initialize 6 layers of size 768 with 12 attention heads. The decoder also has cross-attention layers, as this was shown to be effective in seq2seq training (Gheini et al., 2021). We freeze the encoder weights throughout the training, since using the encoder as a feature extraction mechanism in this way was shown to be beneficial (Zoph et al., 2016) and is a common practice (Gheini et al., 2021). We use the EncoderDecoderModel provided by the HuggingFace transformers library (Wolf et al., 2020) to configure and train a model.

The encoder-decoder model gets its input as the identifiers of the tokens (token numbers) in the input vocabulary and outputs the token numbers in the output vocabulary. Thus, the input and output are tokenized using these vocabularies. Since the encoder model had already been trained (nb-bert-base) using the widely utilized sub-word tokenizer WordPiece (Wu et al., 2016), we use that tokenizer as provided by the HuggingFace Tokenizers library. For the decoder side, since our vocabulary is very small and known in advance (82 tags and 5 extra special tokens such as [CLS] and [SEP]), we do not need to train a special tokenizer.
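For concreteness, the model construction just described can be sketched with the transformers API roughly as follows. This is a schematic reconstruction from the description above, not the authors' training script; any detail not stated in the text (such as the exact head class) is an assumption.

```python
# Schematic reconstruction of the model setup described in Section 5
# (not the authors' code); vocab_size assumes 82 tags + 5 special tokens.
from transformers import (BertConfig, BertLMHeadModel, BertModel,
                          EncoderDecoderModel)

encoder = BertModel.from_pretrained("NbAiLab/nb-bert-base")

decoder_config = BertConfig(
    vocab_size=87,             # 82 tags + 5 special tokens
    hidden_size=768,           # 6 randomly initialized layers of size 768
    num_hidden_layers=6,
    num_attention_heads=12,
    is_decoder=True,
    add_cross_attention=True,  # cross-attention into the encoder states
)
decoder = BertLMHeadModel(decoder_config)

model = EncoderDecoderModel(encoder=encoder, decoder=decoder)

# Freeze the pre-trained encoder: it is used purely as a feature extractor.
for param in model.encoder.parameters():
    param.requires_grad = False
```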
We define the vocabulary manually with these output tokens for use by the WordPiece tokenizer.

The training configuration is as follows: we use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.0001. We set the batch size to 16 sentences, as this is the amount the graphics cards could handle. We use the negative log-likelihood loss (Yao et al., 2020) to compute the loss in each batch between the model output and the expected output. For any other parameter not mentioned in this section, we use the default value defined by version 4.17.0 of the Transformers library in the objects of the following types: BertConfig, EncoderDecoderModel, EncoderDecoderConfig, and BertModel.

We evaluate the model using the dev set during the training. We do this by using the BLEU score (Papineni et al., 2002), which is widely utilized to evaluate seq2seq models. We compute the BLEU score between the expected output and the model output for each sentence and take the average of these scores over the whole dev set. We run the training for 300 epochs and keep the model that results in the maximum average BLEU score on the dev set.

## 6 Combining neural nets and rules

As mentioned in Section 2, the current system prefers tags that are found in the intersection between the output of the CG rules and that of the neural network. Ideally, we would be able to find such intersections for each individual token separately. However, since the probability of a reading for a particular token depends on the selected readings for all other tokens in the sentence, the only viable option is to consider readings for entire sentences. Thus, for each input sentence, we find the list of possible readings produced by the network and calculate their probabilities. Then, for each reading in this list, ordered by decreasing probability, we go through each token and check whether the tag assigned by the network is also found among those left by the CG disambiguation rules. If it is not found, we skip to the next reading in the list. If it is found, we go on to check the next token, and so on until we reach the end of the sentence, at which point the reading is picked as the selected one for the sentence. For the present test set, we find intersecting tags for all tokens for 1412 of the 2003 sentences (70.5%). The cases with missing intersections may be due to differences in either tokenisation (205 cases) or tag assignments (386 cases) between the two systems. When the tokenisations are different, it is not clear what to do. But if the tokens are the same and only the tag assignments differ, we can default to the most probable reading in the neural net output. We explore this option in Section 7.2.

Figure 4 shows a case where the tokenisation of the neural system does not match the gold data in the test set. The neural system has split the initial, unknown proper name at a hyphen, whereas the CG tagger keeps it as one token. Since tokenisation is part of a preprocessing step and misalignments in tokenisation are a problem to be solved separately from tag assignment, in this paper we focus primarily on cases where the two systems do produce matching tokenisations; improving the tokenisation match will be part of future work.

Neural net: Garosu - gil , som betyr [...]
CG: Garosu-gil , som betyr [...]

Figure 4: Mismatching tokenisation

Figure 5: Non-intersecting tags

Figure 5 shows the problem of mismatching tags.
For the first word, the CG tagger has left five possible analyses, and the neural net has correctly disambiguated to the plural adjective reading. However, OBT did not recognize the second word, cupcakes, and has therefore left an ukjent ('unknown') tag, while the neural system has no analysis with that tag. Instead, the most probable analysis of the sentence according to the neural net has cupcakes correctly as an indefinite plural noun. However, since tag probabilities are conditional on all other tags in the sentence, these two analyses are incomparable: it is not safe to disambiguate the CG analysis of rosa based on this analysis from the neural net, especially not when the mismatching tag is on the neighbouring word cupcakes.
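In code, the sentence-level search described in Section 6 amounts to the following sketch, where nn_readings is the network's list of candidate tag sequences ordered by decreasing probability and cg_tags[i] is the set of tags the CG rules left for token i. The data structures are assumed for illustration; the real system operates on full readings rather than bare tag strings.

```python
# Sketch of the sentence-level intersection search from Section 6;
# assumes both systems produced the same tokenisation of the sentence.
from typing import List, Optional, Set

def pick_reading(nn_readings: List[List[str]],
                 cg_tags: List[Set[str]]) -> Optional[List[str]]:
    for reading in nn_readings:          # ordered by decreasing probability
        if all(tag in cg_tags[i] for i, tag in enumerate(reading)):
            return reading               # every token intersects with CG
    return None                          # empty intersection: fall back
```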
| system | accuracy |
|---|---|
| pure ML | 96.9% |
| OBT + ML | 99.0% |
| OBT w/o heur. + ML | 99.0% |
Table 2: Accuracy of different systems, sentences with intersecting tags

In this particular case, the neural net is correct in its analysis of cupcakes. In general, it might be safe to assume that the neural system is correct in cases where the CG tagger assigns ukjent, and this is an option we will pursue in future research. However, as we will see in Section 7, the neural system is often incorrect in cases where the tags do not intersect. Solving this problem may require more training data or fine-tuning the parameters of the tag generation process of the decoder of the seq2seq model.

## 7 Evaluation and error analysis

### 7.1 Sentences with intersecting tags

We first focus on the restricted cases where the ML system and the CG grammars not only have matching tokenisations but also intersecting tags. We evaluate three different setups: (1) the trained neural net used as a stand-alone morphological tagger; (2) the rule-based system intersected with the neural net as described in Section 6; (3) the same as the previous, but without the heuristic rules.

The performance of the three systems is shown in Table 2. Because we evaluate on intersecting tags only, the numbers do not show the actual performance of the system on running text. They do, however, clearly show that in the 70.5% of cases where the tags intersect, the rules strongly improve the performance of the systems: two-thirds of the tokens that are mistagged by the neural net now get a correct analysis. We also see that it makes no difference whether we run the system with or without the heuristic rules: the reduction of wrong tags that we saw in Table 1 is balanced out by the increase in ambiguity. On the sentences where this setup works, the performance is extremely good, at an accuracy of 99.0%. By contrast, the widely used Spacy tagger reports an accuracy of 95.0% for morphological tagging of Norwegian UD.${}^{3}$

Since removing the heuristic rules gave no increase in performance, we focus on the setup with the full rule set in the following. This system mistags 184 tokens (out of 18612 in total in the matching sentences of the test set), whereas the pure ML system mistags 565 tokens. However, the error profiles of the two systems are quite different, suggesting possibilities for further improvement.

Tables 3 and 4 show the twelve most common error types of the systems. We see that a relatively common error in the OBT + ML system involves perfect participles, which often coexist with homonymous adjectives in Norwegian (as in other Germanic languages, cf. English 'bored') with often very slight or no semantic difference. OBT + ML overapplies the adjective analysis (in three different varieties) compared to the gold data, for a total of $14 + 10 + 6 = 30$ errors. By contrast, the ML system on its own makes only $8 + 8 = 16$ errors of this kind, suggesting that the rules disambiguate wrongly. Performance might therefore increase if we leave this decision to the neural net, though it is worth mentioning that this system makes 6 errors in the opposite direction (which only happens twice when the rules are used and therefore does not show up in the table). Apart from errors with participles, all other frequent errors involve gender assignment or number assignment on indefinite neuter nouns.
The latter distinction is hard to make because these indefinite neuters make no morphological distinction between singular and plural, and the context is not always clear. As for the gender errors, at least some of these are errors in the gold tags that were not caught in our manual correction. The feminine/masculine distinction has disappeared in the Oslo dialect of Norwegian (Lødrup, 2013), and it may have been hard for the annotators to choose the correct tag. Another debatable case is gender assignment on proper nouns, which is often missing from the ML system output, but is also not systematic in the gold data. Here it may be better to just standardise on not assigning gender to proper nouns.

---

${}^{3}$ See https://spacy.io/models/nb. As the Norwegian UD corpus (Øvrelid and Hohle, 2016) is an automatic conversion of the NDT corpus, the complexity of the tasks should be comparable, although the test split is not identical.

---
| Gold tag | Predicted tag | Freq |
|---|---|---|
| [':verb:', 'perf-part'] | [':adj:', '', 'ent', 'm/f', 'ub'] | 14 |
| [':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'fem', 'ub'] | 13 |
| [':verb:', 'perf-part'] | [':adj:', '', 'ent', 'nøyt', 'ub'] | 10 |
| [':adj:', 'ent', 'nøyt', 'pos', 'ub'] | [':adj:', 'ent', 'm/f', 'pos', 'ub'] | 10 |
| [':subst:', 'appell', 'fl', 'mask', 'ub'] | [':subst:', 'appell', 'fem', 'fl', 'ub'] | 9 |
| [':subst:', 'appell', 'ent', 'nøyt', 'ub'] | [':subst:', 'appell', 'fl', 'nøyt', 'ub'] | 8 |
| [':verb:', 'perf-part'] | [':adj:', 'ent', 'm/f', 'pos', 'ub'] | 6 |
| [':subst:', 'appell', 'be', 'fl', 'mask'] | [':subst:', 'appell', 'be', 'fem', 'fl'] | 5 |
| [':subst:', 'appell', 'be', 'ent', 'mask'] | [':subst:', 'prop'] | 5 |
| [':pron:', '3', 'fl', 'pers'] | [':det:', 'fl', 'kvant'] | 4 |
| [':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'nøyt', 'ub'] | 4 |
| [':subst:', 'appell', 'fl', 'nøyt', 'ub'] | [':subst:', 'appell', 'ent', 'nøyt', 'ub'] | 4 |

Table 3: Most frequent errors, OBT + ML
| Gold tag | Predicted tag | Freq |
|---|---|---|
| [':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'fem', 'ub'] | 13 |
| [':adj:', 'ent', 'nøyt', 'pos', 'ub'] | [':adj:', 'ent', 'm/f', 'pos', 'ub'] | 12 |
| [':subst:', 'appell', 'fl', 'mask', 'ub'] | [':subst:', 'appell', 'fem', 'fl', 'ub'] | 10 |
| [':verb:', 'perf-part'] | [':adj:', '', 'ent', 'm/f', 'ub'] | 8 |
| [':verb:', 'perf-part'] | [':adj:', '', 'ent', 'nøyt', 'ub'] | 8 |
| [':subst:', 'mask', 'prop'] | [':subst:', 'prop'] | 8 |
| [':subst:', 'appell', 'ent', 'nøyt', 'ub'] | [':subst:', 'appell', 'fl', 'nøyt', 'ub'] | 8 |
| [':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'nøyt', 'ub'] | 8 |
| [':subst:', 'appell', 'ent', 'fem', 'ub'] | [':subst:', 'appell', 'ent', 'mask', 'ub'] | 7 |
| [':subst:', 'appell', 'be', 'fl', 'mask'] | [':subst:', 'appell', 'be', 'fem', 'fl'] | 6 |
| [':subst:', 'appell', 'ent', 'mask', 'ub'] | [':prep:'] | 6 |
| [':adj:', '', 'ent', 'm/f', 'ub'] | [':verb:', 'perf-part'] | 6 |

Table 4: Most frequent errors, ML system (intersecting tags only)
| Gold tag | Predicted tag | Freq |
|---|---|---|
| [':adj:', 'ent', 'nøyt', 'pos', 'ub'] | [':adj:', 'ent', 'm/f', 'pos', 'ub'] | 24 |
| [':subst:', 'appell', 'ent', 'mask', 'ub'] | [':prep:'] | 24 |
| [':prep:'] | [':subst:', 'appell', 'ent', 'mask', 'ub'] | 19 |
| [':verb:', 'pres'] | [':prep:'] | 18 |
| [':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'fem', 'ub'] | 18 |
| [':prep:'] | [':subst:', 'prop'] | 17 |
| [':subst:', 'appell', 'ent', 'fem', 'ub'] | [':subst:', 'appell', 'ent', 'mask', 'ub'] | 16 |
| [':prep:'] | ['\$punc\$', '::'] | 15 |
| [':prep:'] | [':verb:', 'pres'] | 14 |
| [':subst:', 'appell', 'fl', 'mask', 'ub'] | [':subst:', 'appell', 'fem', 'fl', 'ub'] | 14 |
| [':subst:', 'mask', 'prop'] | [':subst:', 'prop'] | 14 |
| [':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'nøyt', 'ub'] | 14 |

Table 5: Most frequent errors, ML system (all matching tokenisations)
| system | accuracy |
|---|---|
| pure ML | 92.8% |
| OBT w/o heur. + ML | 94.1% |
Table 6: Accuracy of different systems, all sentences with matching tokenisation

### 7.2 All sentences with matching tokenisations

To test whether the neural system can be trusted in cases where there is no overlap in tag assignment, we also evaluate the system on all sentences where the tokenisation matches. We test two setups: one where we use the (non-heuristic) rules plus the neural system as described above, but default to the output of the neural tagger in cases where there is no overlap, and one where we only use the best ML tag. The performance of the two setups is given in Table 6.

As we can see, the results drop considerably. Overall performance is now below that of the Spacy tagger. Put another way: when we evaluate all sentences with matching tokenisations, the size of the test set increases by 8036 tokens, from 18612 to 26648, but the number of errors increases by 1375, from 565 to 1940, indicating an error rate of 17.1% on the tokens where the intersection with the output of the CG tagger is empty. Table 5 shows the frequency of errors, which looks very different from Table 4. Most strikingly, there are now many errors involving the part-of-speech tag :prep: (preposition), which is both over- and underpredicted by the system. Prepositions are a closed class in Norwegian, as in many other languages, and so it is surprising that the system goes wrong in so many cases here.

We used an encoder-decoder model to generate the tags given a sentence. This is a different approach from the majority of the work on tagging using deep learning, where the task is formalized as a sequence classification task. We have chosen this architecture because we have 82 tags in the gold data, which would require training many sequence classifiers or a single classifier with many classes (tag combinations)${}^{4}$ to be trained on. Since there are many layers between the input and output of our model (12 BERT and 6 decoder layers), the model sometimes misses the syntactic alignment between the input and the output. This is, we believe, the main reason for the mismatches.

---

${}^{4}$ See the tag combinations: http://tekstlab.uio.no/obt-ny/english/morphosyn.html

---

For future work, we will focus on solving the issues with mismatching and incorrect tagging. We plan to use accuracy as the evaluation metric to select the best-performing model on the dev set. In addition, we plan to use various constraining configurations of beam search when generating tags. In our experiments, we observed that beam search considerably slowed down the evaluation on the dev set, resulting in an overall performance drop in the training process. Thus, we plan to experiment with beam search-based evaluation by applying it at selected epoch intervals rather than at every interval. And finally, we plan to pick the best tag set from the output of beam search by introducing manual rules to avoid mismatches.

## 8 Conclusion

We have presented a hybrid system for tagging Norwegian texts, based on intersecting the output of a rule-based Constraint Grammar system and a neural sequence-to-sequence model based on a large, pre-trained language model. Our results so far indicate that there are both great opportunities and considerable challenges in making such a system work.

On the plus side, we observe that when the tokenisations of the two systems match and the intersection of the possible analyses is non-empty, performance is extremely good, at 99.0%.
On the downside, it is challenging to make the two systems work together; in about 10% of cases, the tokenisation does not match, and in around 20% of cases, the intersection of analyses is empty. We have seen that in some cases it is tempting to let the neural system overrule the rules, but overall its performance in these cases is not good. Hence our overall priority in future work will be to improve the neural system.

## References

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. http://arxiv.org/abs/1810.04805.

Mozhdeh Gheini, Xiang Ren, and Jonathan May. 2021. Cross-attention is all you need: Adapting pretrained Transformers for machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1754-1765, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Kristin Hagen and Janne Bondi Johannessen. 2003. Parsing Nordic languages (PaNoLa) - norsk versjon. In Henrik Holmboe, editor, Nordisk Sprogteknologi 2002, pages 89-96. Museum Tusculanum, Copenhagen.

Janne Bondi Johannessen, Kristin Hagen, André Lynum, and Anders Nøklestad. 2012. OBT+Stat. In Gisle Andersen, editor, Exploring Newspaper Language: Using the web to create and investigate a large corpus of modern Norwegian, pages 51-66. John Benjamins, Amsterdam.

Fred Karlsson, Atro Voutilainen, Juha Heikkilä, and Arto Anttila, editors. 1995. Constraint Grammar: A Language-Independent Framework for Parsing Unrestricted Text. Mouton de Gruyter, Berlin.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.

Per E. Kummervold, Javier De la Rosa, Freddy Wetjen, and Svein Arne Brygfjeld. 2021. Operationalizing a national digital library: The case for a Norwegian transformer model. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 20-29, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.

Helge Lødrup. 2013. Hvor mange genus er det i Oslo-dialekten? Maal og Minne, 103(2).

NbAiLab. 2021. Norwegian Transformer Model. https://github.com/NbAiLab/notram/tree/0c90d6b28008df514c4ac847e4c9d68f4709a181, Accessed: 12.12.2022.

Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - grammatical error correction: Tag, not rewrite. In Proceedings of the 15th Workshop on Innovative Use of NLP for Building Educational Applications, pages 163-170. Association for Computational Linguistics.

Lilja Øvrelid and Petter Hohle. 2016. Universal Dependencies for Norwegian. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1579-1585.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Per Erik Solberg, Arne Skjærholt, Lilja Øvrelid, Kristin Hagen, and Janne Bondi Johannessen. 2014. The Norwegian Dependency Treebank.
In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 789-795, Reykjavik, Iceland. European Language Resources Association (ELRA).

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.

Hengshuai Yao, Dong-lai Zhu, Bei Jiang, and Peng Yu. 2020. Negative log likelihood ratio loss for deep neural network classification. In Proceedings of the Future Technologies Conference (FTC) 2019, pages 276-282, Cham. Springer International Publishing.

Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.

## Appendix: sample multitagger and CG output

```
Tv-programmet
"<Tv-programmet>"
    "tv-program" subst appell nøyt be ent samset-leks <*program> <+programmet>
«
"<«>"
    "$«"
Ut
"<Ut>"
    "ut" prep
    "ut" adv
i
"<i>"
    "i" prep
    "i" subst appell mask ub ent
naturen
"<naturen>"
    "natur" subst appell mask be ent
»
"<»>"
    "$»"
begynner
"<begynner>"
    "begynne" verb pres
    "begynner" subst appell mask ub ent
kl.
"<kl.>"
    "kl." subst appell fork
21.15
"<21.15>"
    "21.15" subst
    "21.15" det kvant
.
"<.>"
    "$." clb <<< <<<
```
Figure A: Tokenized and multitagged sentence

```
Rosa
"<Rosa>"
    "rosa" adj fl pos
    "rosa" adj nøyt ub ent pos
    "rosa" adj ub m/f ent pos
    "rosa" subst appell ubøy
    "rose" subst appell fem be ent
    ; "rosa" adj be ent pos REMOVE:2311
cupcakes
"<cupcakes>"
    "cupcakes" ukjent
hører
"<hører>"
    "høre" verb pres
kanskje
"<kanskje>"
    "kanskje" adv
med
"<med>"
    "med" prep
når
"<når>"
    "når" sbu SELECT:2579
    ; "nå" verb pres SELECT:2579
    ; "når" adv REMOVE:3383
man
"<man>"
    "man" pron ent pers hum nom SELECT:3451
    ; "man" subst appell fem ub ent SELECT:3451
    ; "man" subst appell mask ub ent SELECT:3451
    ; "mane" verb imp SELECT:3451
skal
"<skal>"
    "skulle" verb pres
ha
"<ha>"
    "ha" verb inf SELECT:2430
    ; "ha" interj SELECT:2430
    ; "ha" subst symb REMOVE:3574
    ; "ha" verb imp SELECT:2430
bloggtreff
"<bloggtreff>"
    "bloggtreff" subst appell nøyt ub ent samset-analyse <+treff>
    "bloggtreff" subst appell nøyt ub fl samset-analyse <+treff>
?
"<?>"
    "$?" clb <<< <<<
```

Figure B: Tokenized, multitagged and disambiguated sentence

\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3231e96eff05fbbcd98f315736b34582a46ada38
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/TqEvrDbInx/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,656 @@

§ INTEGRATING RULES AND NEURAL NETS FOR MORPHOLOGICAL TAGGING OF NORWEGIAN: RESULTS AND CHALLENGES

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

§ ABSTRACT

In this paper, we report on efforts to improve the Oslo-Bergen Tagger for Norwegian morphological tagging by using a hybrid system that combines the output of the rule-based Constraint Grammar tagger with a neural sequence-to-sequence model trained for tagging. The results are very promising for cases where the two systems intersect in tokenisation and morphological analysis, but problems remain in integrating the two systems in many cases.

§ 1 INTRODUCTION

The Oslo-Bergen Tagger (OBT, Hagen and Johannessen 2003; Johannessen et al. 2012) is a widely used tool for morphological tagging of Norwegian text. It has existed in various incarnations for around 25 years, first as a purely rule-based system and later coupled with a statistical module for disambiguation.
In this paper, we report on our recent efforts to bring the system into the age of neural networks and show that, even today, the rules boost accuracy considerably over a purely neural system, although there are challenges in combining rules and neural nets due to divergent tokenisations.

The structure of the paper is as follows: In Section 2 we give some historical background on OBT, and Section 3 describes the current status of its rule-based component. Section 4 describes the training and evaluation data that we have used in developing the new system. Section 5 then provides the details of how our neural system was trained, while Section 6 describes how it was combined with the rule system. Section 7 evaluates the performance of the neural system alone as well as the combined system. Section 8 concludes.

§ 2 HISTORY OF THE OSLO-BERGEN TAGGER

The Oslo-Bergen Tagger was originally developed between 1996 and 1998 by the Tagger Project at the University of Oslo. Rules for morphological and syntactic disambiguation were written in the first version of the Constraint Grammar framework (Karlsson et al., 1995), retrospectively called CG1. The rules were parsed by the only existing CG rule interpreter at the time, developed by Lingsoft AB. The input to CG disambiguation rules is multitagged text, i.e., text where each token has been annotated with all possible lexical analyses. Hence, the project also developed a lexicon with lemmas and inflected forms (later known as Norsk ordbank) and a combined tokenizer/multitagger. The tagger was developed for both Bokmål and Nynorsk, the two written varieties of Norwegian. In this article, we will only focus on the Bokmål version of the tagger, and only on the tokenizer and the morphological disambiguation.

The first version of the tagger was tested on an unseen evaluation corpus with a wide variety of text genres and achieved an F1-score of 97.2 (Hagen and Johannessen, 2003, 90). The numbers behind the F1-score - a precision of 95.4 and a recall of 99.0 - reveal that the tagger leaves some ambiguity but makes relatively few errors. At the time, this was considered acceptable, as the tagger was mostly used to annotate written corpora for linguistic research, where a high recall was considered more important than a high precision.

In 2000 the rule interpreter was replaced by a reimplementation in Allegro Common Lisp made by Paul Meurer in cooperation with the Text Laboratory at the University of Oslo. At the time, Meurer was employed at Aksis in Bergen, and hence the tagger was named the Oslo-Bergen Tagger (OBT).

Some years later the need for a new upgrade became urgent. Firstly, OBT was quite slow. This was not a big problem in 2000, but soon our corpora were getting bigger, and speed became important. The project Norwegian Newspaper Corpus (2007-2009) gave the Text Laboratory the opportunity to translate the CG1 rules to the new, more efficient and expressive CG3 format and to use a faster rule interpreter made by the VISL project at the University of Southern Denmark. Secondly, the ambiguities that were left in the output from OBT made the tagger unsuitable for many language technology purposes and applications that require the text to be completely disambiguated.
We therefore extended OBT with a statistical module, implemented as a Hidden Markov Model, that disambiguated the remaining morphological ambiguities and also provided the system with a new feature: disambiguation of lemmas. The new OBT+Stat system achieved an accuracy of around 96 percent (Johannessen et al., 2012).

In the version of the tagger presented here, we have replaced the original HMM module with one that is based on neural networks. We do this for two reasons: First, the new module employs technology that has proven to yield superior results in a variety of NLP tasks. Secondly, the original module did not take into consideration the ambiguity left by the CG rules, meaning that the HMM might select a tag that was previously removed by the disambiguation rules or not even present in the tagger lexicon. The new machine learning module ranks possible readings by probability, allowing us to find the most probable reading (if any) in the intersection between its output and the remaining CG readings. This way we do not discard the work that has already been done by the CG disambiguation rules if the intersection is non-empty, but it leaves a question as to what to do if the intersection is empty.

§ 3 THE RULE-BASED TOKENIZER AND TAGGER

In this section, we first present some of the main tasks for the tokenizer and multitagger before we give a short description of the constraint grammar module. The tokenizer uses a lexicon with all possible lexical readings, where a reading is a combination of a lemma and a morphosyntactic tag chosen from a set of 149 possible analyses.${}^{1}$ The lexicon was originally based on Norsk ordbank 2005,${}^{2}$ but has since been updated with words more recently introduced into the language (such as tvitre 'tweet'). The newest version of the tokenizer is written in Python and mirrors in most cases the original tokenizer written in Perl. There is one major exception: The original system from the late '90s worked according to the strategy "Disambiguate as soon as possible" (Karlsson et al., 1995). This resulted in fixed expressions like blant annet ('among other things' - adverb) and etter hvert ('little by little' - preposition) being allowed - and disambiguated - in the lexicon. In the recent version of the tokenizer, such expressions are removed from the lexicon and the possible ambiguity is dealt with in the CG module. The main principle for the tokenizer is therefore to split tokens on blank space or a sentence delimiter like a full stop or a question mark. For each token identified, the original word form is rendered inside a "<...>" tag and looked up in the lexicon. Non-sentence-initial capitalized words are identified as proper nouns. Words that exist in the lexicon are assigned all readings found there. If the word is not found in the lexicon and not identified as a proper noun, the word is sent to a compound analyzer. Most unknown words will get an analysis here, as many of them are productively created compounds. Some words will still get the tag ukjent ('unknown') from the tokenizer. These words are often dialect words not standardized in the lexicon, or foreign words. Figure A in the Appendix shows how the tokenizer and multitagger deal with the sentence TV-programmet "Ut i naturen" begynner kl. 21.15. ('The TV program "Ut i naturen" starts at 21.15.'), which has quotation marks, abbreviations, and a time expression. The lookup cascade is sketched below.
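The following is an illustrative sketch of the lookup cascade just described; the lexicon format and the compound heuristic are hypothetical stand-ins for the real OBT lexicon and compound analyzer, not the actual implementation.

```python
# Illustrative sketch of the multitagger's lookup cascade described above.
# `lexicon` maps lower-cased word forms to lists of (lemma, tag) readings;
# this format and the compound split are simplifications for illustration.
def multitag(token, sentence_initial, lexicon):
    readings = lexicon.get(token.lower())
    if readings:
        return readings                              # all lexicon readings
    if token[0].isupper() and not sentence_initial:
        return [(token, "subst prop")]               # capitalized -> proper noun
    # Naive compound analysis: split into two known parts and inherit the
    # readings of the head (rightmost part), as for "bloggtreff" below.
    for i in range(2, len(token) - 1):
        head = lexicon.get(token[i:].lower())
        if lexicon.get(token[:i].lower()) and head:
            return [(token, tag) for _, tag in head]
    return [(token, "ukjent")]                       # unknown word

lexicon = {"blogg": [("blogg", "subst appell mask ub ent")],
           "treff": [("treff", "subst appell nøyt ub ent")]}
print(multitag("bloggtreff", False, lexicon))
```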
The tokenizer also identifies sentences using sentence delimiters. A list of known abbreviations and linguistic rules, like the rule "the word including the full stop character is an abbreviation if the word is in the abbreviation list or if the following word is not capitalized", identifies abbreviations like kl. (the abbreviation for "o'clock" used to specify time in Norwegian) in Figure A. Headlines are also identified by rules and get their own tag.

The constraint grammar module takes tokenized and multitagged text as input, and its main task is to reduce the number of readings to ideally one per word. The number of readings left by the multitagger varies a lot. In the test corpus used in this article (which will be further described in Section 4) there are on average 2.04 readings per word. After the CG rules are applied, there are on average 1.09 readings left per word.

${}^{1}$ The complete list is available at http://tekstlab.uio.no/obt-ny/morfosyn.html

${}^{2}$ https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-5/

Figure B in the Appendix shows the output from the CG module in debug mode for the sentence Rosa cupcakes hører kanskje med når man skal ha bloggtreff? ('Pink cupcakes might be part of a blog meeting?'). Readings that have been removed start with ";", and the ID numbers of the rules applied are appended to each reading. Note that the English loan word cupcakes is not identified in the lexicon or in the compound analyzer and has got the tag ukjent ('unknown'). The compound bloggtreff 'blog meeting' was not in the lexicon but has got two readings from the compound analyzer. As the examples show, there are both REMOVE rules (remove a reading) and SELECT rules (select a reading). A rule can be very simple, like rule 2430 in Figure 1, which says "select the verb infinitive reading if the verb to the left is a modal auxiliary and not in the set of dangerous infinitives (= not likely infinitives)".

#:2430
SELECT:2430 (verb inf) IF
    (NOT 0 farlige-inf)
    (-1 mod-hj-verb) ;

§ FIGURE 1: SIMPLE SELECT RULE

Figure 2 shows an example of a more complex rule with linked context conditions somewhere to the right in the sentence. The rule says: "choose the subjunction reading - if somewhere to the right there is a safe noun or pronoun (stop looking if a word on the way has a reading that is not an adverb, adjective or determinative) - and - if there is a word in the present or past tense after the noun/pronoun (adverbs in between are fine)."

#:2579
SELECT:2579 (sbu) IF
    (...)
    (**1C subst/pron BARRIER ikke-adv-adj-det)
    (**1C subst/pron LINK *1 ikke-adv LINK 0 pres/pret) ;

§ FIGURE 2: MORE COMPLEX SELECT RULE
+ +286 + +§ 4 TRAINING AND EVALUATION DATA + +The training and evaluation corpus that was used 288 in earlier stages of development of the OBT system is no longer suitable because the tagset and the tokenisation principles have evolved. Instead of bringing this corpus up to date, we chose to use the Norwegian Dependency Treebank (NDT, Solberg et al. 2014) in the development of the new version of OBT. The Bokmål part of NDT is around 300 000 tokens and consists of blog text, news text, parliament proceedings and government white papers. + +The NDT CoNLL data were converted to the + +format of the OBT. We also extracted the pure text 301 and ran OBT on it without statistical disambigua- + +tion, to compare the outputs. If the NDT analy- 303 sis was not among the analyses produced by OBT, we either corrected the NDT annotation if that was the source of the error, or changed the rules of the OBT system if that could easily be done. This pro- + +cess was iterated a few times. Notice that during 308 this period, the whole data set was used for development, as is common with rule-based systems. The goal was to improve both the accuracy of the rule-based disambiguation and the quality of the training data for the neural component. + +The performance of the rule-based system by the end of this phase is shown in Table 1. When heuristic rules are used, we see that in 7.5% of cases, OBT produces an ambiguous analysis containing the correct tag as one possibility, whereas ${1.8}\%$ of tokens are only given (one or more) wrong analyses. Disabling the heuristic rules reduces the number of wrong tags by ${0.2}\%$ but at the cost of an + +increase of ${3.3}\%$ of tokens that get an ambiguous 323 analysis containing the correct tag. + +325 The role of the statistical system is to pick the correct analysis in the ambiguous cases. On its own the neural net might be able to predict the right analysis even in cases where the rules are wrong. However, this analysis will be discarded when we intersect its output with the rules. + +with heuristic rules + +max width= + +unambiguous correct 280650 (90.7%) + +1-3 +ambiguous incl. correct 23219 (7.5%) + +1-3 +wrong 5413 (1.8%) + +1-3 +3|c|without heuristic rules + +1-3 +unambiguous correct 270830 (87.6%) + +1-3 +ambiguous incl. correct 33597 (10.8%) + +1-3 +wrong 4855 (1.6%) + +1-3 + +Table 1: Performance of the rule-based system + +For the training of the neural system, we then split the corpus into train-dev-test sets. While doing this, we made sure the output tags in the training set covered all output tags in the dev and test sets to ensure that the model was trained with samples from all tags. We do this by, first, initializing the Python random seed as 0, then, splitting the data and checking if the training set covers all tags. If it does not, we increase the random seed by one and do the same until we find a training set that covers all the tags in the other sets. This way, we randomly split the dataset into 80-10-10 percent partitions to obtain train-dev-test datasets respectively. + +Finally, the data was reformatted for the neural network. Figure 3 shows an example of input and output for a sentence. The input is the tokenized form of the sentence. The output is the sequence of serialized tags for each token in the input. The token is an indicator that all tags of the corresponding input token have finished and tags of the next input token start afterward. + +INPUT: Men det er bare noe jeg tror . 
Finally, the data was reformatted for the neural network. Figure 3 shows an example of input and output for a sentence. The input is the tokenized form of the sentence. The output is the sequence of serialized tags for each token in the input. A special separator token indicates that all tags of the corresponding input token have finished and that the tags of the next input token start afterward.

INPUT: Men det er bare noe jeg tror .

OUTPUT:
:konj: clb
:pron: 3 ent nøyt pers
:verb: pres
:adv:
:pron: 3 ent nøyt pers
:pron: 1 ent hum nom pers
:verb: pres
$punc$ clb

Figure 3: An example input and output for a sentence.

§ 5 THE NEURAL SYSTEM

Recently, a BERT (Devlin et al., 2018) pre-trained encoder (nb-bert-base) was published by the Norwegian National Digital Library (Kummervold et al., 2021). This pre-trained encoder for Norwegian provides a rich feature set that was previously lacking for the language. Furthermore, since the tagged corpus is very small in comparison to the corpus the pre-trained model was trained on, it is important to use the pre-trained model in order to be able to generalize to unseen data. Therefore, we follow an approach similar to that of Omelianchuk et al. (2020) and use a sequence-to-sequence (seq2seq) setting to tag the sentences using the pre-trained model.

Sequence-to-sequence models have two main components: an encoder and a decoder. The encoder side is set to the nb-bert-base encoder (NbAiLab, 2021). For the decoder, we randomly initialize 6 layers of size 768 with 12 attention heads. The decoder also has cross-attention layers, as this was shown to be effective in seq2seq training (Gheini et al., 2021). We freeze the encoder weights throughout the training, since using the encoder as a feature extraction mechanism in this way was shown to be beneficial (Zoph et al., 2016) and is common practice (Gheini et al., 2021). We use the EncoderDecoderModel provided by the HuggingFace Transformers library (Wolf et al., 2020) to configure and train the model.

The encoder-decoder model gets its input as the identifiers of the tokens (token numbers) in the input vocabulary and outputs the token numbers in the output vocabulary. Thus, the input and output are tokenized using these vocabularies. Since the encoder model (nb-bert-base) had already been trained using the widely utilized sub-word tokenizer Wordpiece (Wu et al., 2016), we use that tokenizer as provided by the HuggingFace Tokenizers library. For the decoder side, since our vocabulary is very small and fully known (82 tags and 5 extra special tokens such as [CLS] and [SEP]), we do not need to train a special tokenizer. We define the vocabulary manually with these output tokens for use by the Wordpiece tokenizer.

The training configuration is as follows: We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.0001. We set the batch size to 16 sentences, as this is the amount our graphics cards could handle. We use the negative log-likelihood loss (Yao et al., 2020) to compute the loss in each batch between the model output and the expected output. For any other parameter not mentioned in this section, we use the default value defined by version 4.17.0 of the Transformers library in the objects of the following types: BertConfig, EncoderDecoderModel, EncoderDecoderConfig, and BertModel. A configuration sketch is given at the end of this section.

We evaluate the model on the dev set during training. We do this using the BLEU score (Papineni et al., 2002), which is widely utilized to evaluate seq2seq models. We compute the BLEU score between the expected output and the model output for each sentence, and average these scores over the whole dev set. We run the training for 300 epochs and keep the model that achieves the maximum average BLEU score on the dev set.
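To make this configuration concrete, the following is a minimal sketch of the setup in the Transformers API. The hyperparameters follow the description above, while the surrounding boilerplate is illustrative rather than our exact training code.

```python
# Sketch of the frozen-encoder seq2seq setup described above; illustrative,
# not the exact training script. Assumes transformers v4.x and the public
# NbAiLab/nb-bert-base checkpoint.
import torch
from transformers import (BertConfig, BertModel, EncoderDecoderConfig,
                          EncoderDecoderModel)

encoder = BertModel.from_pretrained("NbAiLab/nb-bert-base")

decoder_config = BertConfig(
    vocab_size=87,             # 82 tags + 5 special tokens
    hidden_size=768,           # decoder layers of size 768
    num_hidden_layers=6,       # 6 randomly initialized decoder layers
    num_attention_heads=12,    # 12 attention heads
    is_decoder=True,
    add_cross_attention=True,  # cross-attention into the encoder
)
config = EncoderDecoderConfig.from_encoder_decoder_configs(
    encoder.config, decoder_config)
model = EncoderDecoderModel(config=config, encoder=encoder)

# Freeze the encoder: only the decoder is trained.
for p in model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```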
§ 6 COMBINING NEURAL NETS AND RULES

As mentioned in Section 2, the current system prefers tags that are found in the intersection between the output of the CG rules and that of the neural network. Ideally, we would be able to find such intersections for each individual token separately. However, since the probability of a reading for a particular token depends on the selected readings for all other tokens in the sentence, the only viable option is to consider readings for entire sentences. Thus, for each input sentence, we find the list of possible readings produced by the network and calculate its probability. Then, for each reading in this list, ordered by decreasing probability, we go through each token and check whether the tag assigned by the network is also found among those left by the CG disambiguation rules. If it is not found, we skip to the next reading in the list. If it is found, we go on to check the next token, and so on until we reach the end of the sentence, at which point the reading is picked as the selected one for the sentence. For the present test set, we find intersecting tags for all tokens in 1412 of the 2003 sentences (70.5%). The cases with missing intersections may be due to differences in either tokenisation (205 cases) or tag assignments (386 cases) between the two systems. When the tokenisations are different, it is not clear what to do. But if the tokens are the same and only the tag assignments differ, we can default to the most probable reading in the neural net output. We explore this option in Section 7.2.

Figure 4 shows a case where the tokenisation of the neural system does not match the gold data in the test set. The neural system has split the initial, unknown proper name at a hyphen, whereas the CG tagger keeps it as one token. Since tokenisation is part of a preprocessing step and misalignments in tokenisation are a problem to be solved separately from tag assignment, in this paper we focus primarily on cases where the two systems do produce matching tokenisations; improving the tokenisation match will be part of future work.

Neural net: Garosu - gil, som betyr [...]
CG: Garosu-gil, som betyr [...]

Figure 4: Mismatching tokenisation

Figure 5: Non-intersecting tags

Figure 5 shows the problem of mismatching tags. For the first word, the CG tagger has left five possible analyses, and the neural net has correctly disambiguated to the plural adjective reading. However, OBT did not recognize the second word, cupcakes, and has therefore left an ukjent ('unknown') tag, while the neural system has no analysis with that tag. Instead, the most probable analysis of the sentence according to the neural net has cupcakes correctly as an indefinite plural noun. However, since tag probabilities are conditional on all other tags in the sentence, these two analyses are incomparable: it is not safe to disambiguate the CG analysis of rosa based on this analysis from the neural net, especially not when the mismatching tag is on the neighbouring word cupcakes.

system | accuracy
pure ML | 96.9%
OBT + ML | 99.0%
OBT w/o heur. + ML | 99.0%

Table 2: Accuracy of different systems, sentences with intersecting tags

In this particular case, the neural net is correct in its analysis of cupcakes.
In general, it might be safe to assume that the neural system is correct in cases where the CG tagger assigns ukjent, and this is an option we will pursue in future research. However, as we will see in Section 7, the neural system is often incorrect in cases where the tags do not intersect. Solving this problem may require more training data or fine-tuning the parameters of the tag generation process of the decoder of the seq2seq model.

§ 7 EVALUATION AND ERROR ANALYSIS

§ 7.1 SENTENCES WITH INTERSECTING TAGS

We first focus on the restricted cases where the ML system and the CG grammars not only have matching tokenisations but also intersecting tags. We evaluate three different setups: (1) the trained neural net used as a stand-alone morphological tagger; (2) the rule-based system intersected with the neural net as described in Section 6; (3) the same as the previous, but without the heuristic rules.

The performance of the three systems is shown in Table 2. Because we evaluate on intersecting tags only, the numbers do not show the actual performance of the system on running text. They do, however, clearly show that in the 70.5% of cases where the tags intersect, the rules strongly improve the performance of the systems: two-thirds of the tokens that are mistagged by the neural net now get a correct analysis. We also see that it makes no difference whether we run the system with or without the heuristic rules: the reduction of wrong tags that we saw in Table 1 is balanced out by the increase in ambiguity. On the sentences where this setup works, the performance is extremely good, at an accuracy of 99.0%. By contrast, the widely used Spacy tagger reports an accuracy of 95.0% for morphological tagging of Norwegian UD.${}^{3}$

Since removing the heuristic rules gave no increase in performance, we focus on the setup with the full rule set in the following. This system mistags 184 tokens (out of 18612 in total in the matching sentences of the test set), whereas the pure ML system mistags 565 tokens. However, the error profiles of the two systems are quite different, suggesting possibilities for further improvement.

Tables 3 and 4 show the twelve most common error types of the systems. We see that a relatively common error in the OBT + ML system involves perfect participles, which often coexist with homonymous adjectives in Norwegian (as in other Germanic languages, cf. English 'bored') with often very slight or no semantic difference. OBT + ML overapplies the adjective analysis (in three different varieties) compared to the gold data, for a total of 14 + 10 + 6 = 30 errors. By contrast, the ML system on its own makes only 8 + 8 = 16 errors of this kind, suggesting that the rules disambiguate wrongly. Performance might therefore increase if we leave this decision to the neural net, though it is worth mentioning that this system makes 6 errors in the opposite direction (which only happens twice when the rules are used and therefore does not show up in the table). Apart from errors with participles, all other frequent errors involve gender assignment or number assignment on indefinite neuter nouns. The latter distinction is hard to make because these indefinite neuters make no morphological distinction between singular and plural, and the context is not always clear. As for the gender errors, at least some of these are errors in the gold tags that were not caught in our manual correction.
The feminine/masculine distinction has disappeared in the Oslo dialect of Norwegian (Lødrup, 2013), and it may have been hard for the annotators to choose the correct tag. Another debatable case is gender assignment on proper nouns, which is often missing from the ML system output, but is also not systematic in the gold data. Here it may be better to just standardise on not assigning gender to proper nouns.

${}^{3}$ See https://spacy.io/models/nb. As the Norwegian UD corpus (Øvrelid and Hohle, 2016) is an automatic conversion of the NDT corpus, the complexity of the tasks should be comparable, although the test split is not identical.

Gold tag | Predicted tag | Freq
[':verb:', 'perf-part'] | [':adj:', '', 'ent', 'm/f', 'ub'] | 14
[':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'fem', 'ub'] | 13
[':verb:', 'perf-part'] | [':adj:', '', 'ent', 'nøyt', 'ub'] | 10
[':adj:', 'ent', 'nøyt', 'pos', 'ub'] | [':adj:', 'ent', 'm/f', 'pos', 'ub'] | 10
[':subst:', 'appell', 'fl', 'mask', 'ub'] | [':subst:', 'appell', 'fem', 'fl', 'ub'] | 9
[':subst:', 'appell', 'ent', 'nøyt', 'ub'] | [':subst:', 'appell', 'fl', 'nøyt', 'ub'] | 8
[':verb:', 'perf-part'] | [':adj:', 'ent', 'm/f', 'pos', 'ub'] | 6
[':subst:', 'appell', 'be', 'fl', 'mask'] | [':subst:', 'appell', 'be', 'fem', 'fl'] | 5
[':subst:', 'appell', 'be', 'ent', 'mask'] | [':subst:', 'prop'] | 5
[':pron:', '3', 'fl', 'pers'] | [':det:', 'fl', 'kvant'] | 4
[':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'nøyt', 'ub'] | 4
[':subst:', 'appell', 'fl', 'nøyt', 'ub'] | [':subst:', 'appell', 'ent', 'nøyt', 'ub'] | 4

Table 3: Most frequent errors, OBT + ML

Gold tag | Predicted tag | Freq
[':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'fem', 'ub'] | 13
[':adj:', 'ent', 'nøyt', 'pos', 'ub'] | [':adj:', 'ent', 'm/f', 'pos', 'ub'] | 12
[':subst:', 'appell', 'fl', 'mask', 'ub'] | [':subst:', 'appell', 'fem', 'fl', 'ub'] | 10
[':verb:', 'perf-part'] | [':adj:', '', 'ent', 'm/f', 'ub'] | 8
[':verb:', 'perf-part'] | [':adj:', '', 'ent', 'nøyt', 'ub'] | 8
[':subst:', 'mask', 'prop'] | [':subst:', 'prop'] | 8
[':subst:', 'appell', 'ent', 'nøyt', 'ub'] | [':subst:', 'appell', 'fl', 'nøyt', 'ub'] | 8
[':subst:', 'appell', 'ent', 'mask', 'ub'] | [':subst:', 'appell', 'ent', 'nøyt', 'ub'] | 8
[':subst:', 'appell', 'ent', 'fem', 'ub'] | [':subst:', 'appell', 'ent', 'mask', 'ub'] | 7
[':subst:', 'appell', 'be', 'fl', 'mask'] | [':subst:', 'appell', 'be', 'fem', 'fl'] | 6
[':subst:', 'appell', 'ent', 'mask', 'ub'] | [':prep:'] | 6
[':adj:', '', 'ent', 'm/f', 'ub'] | [':verb:', 'perf-part'] | 6

Table 4: Most frequent errors, ML system (intersecting tags only)
+ +1-3 +[':verb:', 'pres'] [':prep:'] 18 + +1-3 +[':subst:', 'appell', 'ent', 'mask', 'ub'] [':subst:', 'appell', 'ent', 'fem', 'ub'] 18 + +1-3 +[':prep:'] [':subst:', 'prop'] 17 + +1-3 +[':subst:', 'appell', 'ent', 'fem', 'ub'] [':subst:', 'appell', 'ent', 'mask', 'ub'] 16 + +1-3 +[':prep:'] ['$punc$', '::'] 15 + +1-3 +[':prep:'] [':verb:', 'pres'] 14 + +1-3 +[':subst:', 'appell', 'fl', 'mask', 'ub'] [':subst:', 'appell', 'fem', 'fl', 'ub'] 14 + +1-3 +[':subst:', 'mask', 'prop'] [':subst:', 'prop'] 14 + +1-3 +[':subst:', 'appell', 'ent', 'mask', 'ub'] [':subst:', 'appell', 'ent', 'nøyt', 'ub'] 14 + +1-3 + +Table 5: Most frequent errors, ML system (all matching tokenisations) + +688 742 + +689 743 + +690 744 + +691 745 + +692 746 + +693 747 + +694 748 + +695 749 + +696 750 + +697 751 + +698 752 + +699 753 + +700 754 + +701 755 + +757 + +max width= + +system accuracy + +1-2 +pure ML 92.8% + +1-2 +OBT w/o heur. + ML 94.1% + +1-2 + +Table 6: Accuracy of different systems, all sentences with matching tokenisation + +§ 7.2 ALL SENTENCES WITH MATCHING TOKENISATIONS + +To test whether the neural system can be trusted in cases where there is no overlap in tag assignment, we also evaluate the system on all sentences where the tokenisation is matching. We test two setups: one where we use the (non-heuristic) rules plus the neural system as described above, but default to the output of the neural tagger in cases where there is no overlap, and one where we only use the best ML tag. The performance of the two setups is given in Table 6 + +As we can see, the results drop considerably. Overall performance is now below that of the Spacy tagger. Put another way: when we evaluate all sentences with matching tokenisations, the size increases by 8036 tokens from 18612 to 26648, but the number of errors increases from 565 to 1940, i.e. 1375, indicating an error rate of 17.1% on the tokens where the intersection with the output of the CG tagger is empty. Table 5 shows the frequency of errors, which looks very different from Table 4. Most strikingly, there are now many errors involving the part-of-speech tag :prep: (preposition), which is both over- and underpre-dicted by the system. Prepositions are a closed class in Norwegian, as in many other languages, and so it is surprising that the system goes wrong in so many cases here. + +We used an encoder-decoder model to generate the tags given a sentence. This is a different approach from the majority of the work on tagging using deep learning, where the task is formalized as a sequence classification task. We have chosen to use this architecture as we have 82 tags in the gold data that would require training many sequence classifiers or a single classifier that would require many classes (tag combinations) ${}^{4}$ to be trained on. Since there are many layers between the input and output of our model ( 12 Bert, and 6 decoder layers), the model sometimes misses the syntactic alignment between the input and the output. This is, we believe, the main reason for the + +809 + +mismatches. 810 + +For future work, we focus on solving the issues 811 + +with mismatching and incorrect tagging. We plan 812 + +to use accuracy as the evaluation metric to select 813 + +the best-performing model using the dev set. In 814 + +addition, we plan to use various constraining con- 815 + +figurations of beam search on generating tags. 
As we can see, the results drop considerably. Overall performance is now below that of the Spacy tagger. Put another way: when we evaluate all sentences with matching tokenisations, the size of the test set increases by 8036 tokens, from 18612 to 26648, but the number of errors increases by 1375, from 565 to 1940, indicating an error rate of 17.1% on the tokens where the intersection with the output of the CG tagger is empty. Table 5 shows the frequency of errors, which looks very different from Table 4. Most strikingly, there are now many errors involving the part-of-speech tag :prep: (preposition), which is both over- and underpredicted by the system. Prepositions are a closed class in Norwegian, as in many other languages, and so it is surprising that the system goes wrong in so many cases here.

We used an encoder-decoder model to generate the tags given a sentence. This is a different approach from the majority of the work on tagging using deep learning, where the task is formalized as a sequence classification task. We have chosen this architecture because we have 82 tags in the gold data, which would require training many sequence classifiers or a single classifier with many classes (tag combinations)${}^{4}$ to be trained on. Since there are many layers between the input and output of our model (12 BERT and 6 decoder layers), the model sometimes misses the syntactic alignment between the input and the output. This is, we believe, the main reason for the mismatches.

${}^{4}$ See the tag combinations: http://tekstlab.uio.no/obt-ny/english/morphosyn.html

For future work, we will focus on solving the issues with mismatching and incorrect tagging. We plan to use accuracy as the evaluation metric to select the best-performing model on the dev set. In addition, we plan to use various constraining configurations of beam search when generating tags. In our experiments, we observed that beam search considerably slowed down the evaluation on the dev set, resulting in an overall performance drop in the training process. Thus, we plan to experiment with beam search-based evaluation by applying it at selected epoch intervals rather than at every interval. And finally, we plan to pick the best tag set from the output of beam search by introducing manual rules to avoid mismatches.

§ 8 CONCLUSION

We have presented a hybrid system for tagging Norwegian texts, based on intersecting the output of a rule-based Constraint Grammar system and a neural sequence-to-sequence model based on a large, pre-trained language model. Our results so far indicate that there are both great opportunities and considerable challenges in making such a system work.

On the plus side, we observe that when the tokenisations of the two systems match and the intersection of the possible analyses is non-empty, performance is extremely good, at 99.0%. On the downside, it is challenging to make the two systems work together; in about 10% of cases, the tokenisation does not match, and in around 20% of cases, the intersection of analyses is empty. We have seen that in some cases it is tempting to let the neural system overrule the rules, but overall its performance in these cases is not good. Hence our overall priority in future work will be to improve the neural system.

\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/UcWZrerHDCe/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/UcWZrerHDCe/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..a967ef3b5b2f796c1b56393d71fc365789212d3f
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/UcWZrerHDCe/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,527 @@

# NoCoLA: The Norwegian Corpus of Linguistic Acceptability

First Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Second Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract

While there has been a surge of large language models for Norwegian in recent years, we lack any tool to evaluate their understanding of grammaticality. We present two new Norwegian datasets for this task. ${\mathbf{NoCoLA}}_{\text{class}}$ is a supervised binary classification task where the goal is to discriminate between acceptable and non-acceptable sentences. On the other hand, ${\mathbf{NoCoLA}}_{\text{zero}}$ is a purely diagnostic task for evaluating the grammatical judgement of a language model in a completely zero-shot manner, i.e. without any further training. In this paper, we describe both datasets in detail, show how to use them for different flavors of language models, and conduct a comparative study of the existing Norwegian language models.

## 1 Introduction

Large pre-trained language models have recently led to a revolution in natural language processing (NLP), as they substantially increased the performance of most NLP tools (Peters et al., 2018; Devlin et al., 2019).
Large language models were originally developed for English, but a surge of Norwegian-based models has recently followed (Kutuzov et al., 2021; Kummervold et al., 2021; Hofmann et al., 2022). The remaining issue is that the Norwegian linguistic resources do not contain a large range of tasks to evaluate and compare these models on, as opposed to English benchmark suites like GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019) or GLGE (Liu et al., 2021), to name a few.

We present two new datasets for evaluating the understanding language models have of Norwegian grammar, jointly called the Norwegian corpus of linguistic acceptability (NoCoLA). Our work is limited to the most widely used of the written standards for Norwegian, namely Bokmål.

#Incorrect (inflection): Samfunnet ville bli mer fornøyet.
#Correct: Samfunnet ville bli mer fornøyd.
#Incorrect (word choice): Jeg er ikke nordmann, med jeg trives i Norge.
#Correct: Jeg er ikke nordmann, men jeg trives i Norge.

Listing 1: Two illustrative examples of incorrect/correct sentence pairs from ${\mathbf{NoCoLA}}_{\text{zero}}$. The English translations: "Society would be happier" and "I'm not Norwegian, but I enjoy living in Norway."

This paper proposes two different views on the same set of sentences, each with a slightly different purpose:

- ${\mathbf{NoCoLA}}_{\text{class}}$ is a collection of sentences split into two classes: grammatically acceptable and non-acceptable. Thus, it is a binary classification task, where a language model is expected to be first fine-tuned on the training data split. This task is more practically oriented and evaluates the fine-tuning abilities of a language model. The downside is that we cannot tell if the performance comes from its innate abilities or if it was obtained from the supervised fine-tuning.

- ${\mathbf{NoCoLA}}_{\text{zero}}$ is a collection of pairs of sentences, where only one of them is grammatically acceptable. Here, we do not fine-tune on this task at all; the language model gives a probability to each of the two sentences, and we measure how often the correct one gets a higher probability. While not as practical as the first task, the zero-shot evaluation provides a better estimate of the innate grammatical understanding. A minimal scoring sketch for this setup is given at the end of this section.

We provide a comprehensive evaluation of the existing Norwegian language models and release the data and code for an easy evaluation of new Norwegian models.${}^{1}$

---

${}^{1}$ anonymized.for/review

---
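As a concrete illustration of the zero-shot protocol for ${\mathbf{NoCoLA}}_{\text{zero}}$, the following sketch scores a sentence pair with a causal language model by summed token log-probability. The model name is a placeholder and the scoring is illustrative, not the paper's exact protocol.

```python
# Minimal sketch of the zero-shot comparison: the model "prefers" whichever
# sentence of a pair it assigns the higher log-probability.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")     # placeholder model name
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def log_prob(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)               # loss = mean token NLL
    return -out.loss.item() * (ids.shape[1] - 1)  # sum of token log-probs

def prefers_correct(correct, incorrect):
    return log_prob(correct) > log_prob(incorrect)
```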
CoLA has become a standard task for evaluating English language models after it was included in the GLUE benchmark for natural language understanding (Wang et al., 2018).

BLiMP. The BLiMP dataset consists of 67000 minimal pairs, all of them generated artificially. Some examples of phenomena covered in the dataset are determiner-noun agreement, verb argument structure and irregular verb forms. Each pair differs in only one single parameter, namely the element that leads to the non-acceptability.

Comparison with NoCoLA. Our datasets fill the same purpose for the evaluation of language models in Norwegian as CoLA and BLiMP do for English. However, the source of the sentences is different: our data consists of naturally produced sentences instead of controlled and artificially generated ones. Where CoLA collects sentences handpicked by linguists to represent specific linguistic phenomena, our sentences contain errors that mirror the natural distribution of errors in texts by second language learners. Thus, NoCoLA gives an indication of how well a given language model distinguishes between acceptable and non-acceptable Norwegian text, but not of how well it understands the full range of possible grammatical phenomena of the language. NoCoLA is also substantially larger than CoLA, with almost 15 times more examples. The NoCoLA error types are not comparable to BLiMP, where the error types describe the underlying grammatical problem; instead, the NoCoLA error types describe the changes that need to be made to correct the errors.

## 3 Datasets description

### 3.1 ASK corpus

Both ${\mathbf{NoCoLA}}_{\text{class}}$ and ${\mathbf{NoCoLA}}_{\text{zero}}$ require a source of both acceptable and non-acceptable sentences. The latter are hard to come by in most naturalistic text by adult native speakers. Our source for both NoCoLA datasets is the ASK Corpus - A Language Learner Corpus of Norwegian as a Second Language (Tenfjord et al., 2006). It consists of submissions by second language learners of Norwegian Bokmål around the year 2000, each consisting of one or more essays. The essays are written as solutions to two separate Norwegian language exams, which are estimated in Berggren (2019) to be at approximately CEFR levels B1 and B2. The texts are limited to one of the written standards for Norwegian, namely Bokmål.

There are 1935 submissions, with 46000 original sentences in total. Each essay has been manually corrected by native speakers, henceforth called correctors. The errors in the corpus are annotated with a set of error codes, which indicate the change that needs to be made to correct the original passage. For instance, "F" indicates a wrong morpho-syntactic category, while "PUNCM" means that punctuation is missing and needs to be added. We have merged some of the error codes so that we have a medium-grained way of understanding the performance of the models on the different types of errors found in ${\mathbf{NoCoLA}}_{\text{zero}}$. A short explanation of these error codes can be found in the appendix.

### 3.2 Conversion from ASK to NoCoLA

Sentence merging. For the NoCoLA datasets we want sentences as the unit for evaluation. Therefore we need to split the continuous text of ASK into sentences.
However, since some of the corrections suggested by the correctors affect the way the text is split into sentences, and we need alignment between the acceptable and non-acceptable sentences in the pairs for ${\mathbf{NoCoLA}}_{\text{zero}}$, we decided to always keep the longest available version in cases where the two versions disagree. This principle applies to both datasets. Thus, the unit referred to as a "sentence" in this paper can consist of multiple sentences.

Error extraction. For each of these sentences, we first extract a corrected (acceptable) version. In order to test only minimal errors and to label each non-acceptable sentence with an error type, we generate one non-acceptable sentence for each error found in the originals. We thereby extract almost 100000 non-acceptable sentences, as many of the original sentences contain multiple errors.
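To make this construction concrete, the following minimal sketch (our own illustration; the `Error` record and character offsets are assumptions, not the actual ASK annotation schema) re-inserts one learner error at a time into the corrected sentence, yielding one minimal pair per error:

```python
# Hypothetical sketch of the error-extraction step: each annotated error
# produces one non-acceptable variant that differs from the corrected
# sentence in exactly one place.
from dataclasses import dataclass

@dataclass
class Error:
    start: int      # character offset of the corrected span
    end: int
    original: str   # what the learner actually wrote
    code: str       # merged NoCoLA error type, e.g. "WORD_CHOICE"

def extract_pairs(corrected: str, errors: list[Error]) -> list[tuple[str, str, str]]:
    """Return (non_acceptable, acceptable, error_type) triples, one per error."""
    pairs = []
    for err in errors:
        # Re-insert a single learner error into the otherwise corrected sentence.
        non_acceptable = corrected[:err.start] + err.original + corrected[err.end:]
        pairs.append((non_acceptable, corrected, err.code))
    return pairs

# Example: the sentence from Listing 1 with one annotated error.
corrected = "Jeg er ikke nordmann, men jeg trives i Norge."
errors = [Error(22, 25, "med", "WORD_CHOICE")]
print(extract_pairs(corrected, errors))
```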
| Dataset | Train | Dev | Test |
|---|---|---|---|
| ${\mathbf{NoCoLA}}_{\text{class}}$ | 116 195 | 14 289 | 14 383 |
| ${\mathbf{NoCoLA}}_{\text{zero}}$ | - | - | 99 115 |

Table 1: Number of sentences and sentence pairs, respectively, for both NoCoLA datasets.

Post-processing. We made a few additional adjustments to the dataset. All sentences are heuristically detokenized, and sentences containing an odd number of quotation marks are removed. If no error type is given for a correction, we also remove that sentence. In the original ASK dataset, sensitive words have been replaced by placeholders like "@sted" (place) and "@navn" (name) for anonymization purposes. We replace each placeholder with a substitute representation of its category, e.g. "Oslo" instead of "@sted", to normalize all sentences. On rare occasions, these replacements might make a sentence erroneous, since possible genitive and plural forms in the original texts are not annotated with separate placeholder tokens.

Conversion results. The final dataset contains 144867 sentences, 31.5% of which are acceptable. ${\mathbf{NoCoLA}}_{\text{class}}$ has been shuffled and then randomly split by the authors to ensure unbiased development and test sentences. The split has been done in an approximate 80:10:10 ratio, resulting in the sentence-level statistics in Table 1.

## 4 Baseline models

### 4.1 Evaluation of ${\mathrm{NoCoLA}}_{\text{class}}$

In order to evaluate language models on ${\mathbf{NoCoLA}}_{\text{class}}$, we use the standard fine-tuning approach from Devlin et al. (2019). Accordingly, every sentence is tokenized, prepended with a special [CLS] token, appended with a [SEP] token and input to a pre-trained language model. Subsequently, the contextualized representation of the special [CLS] token is fed into a binary MLP classifier. The pre-trained weights of the language model are trained further together with the classifier weights.

### 4.2 Evaluation of ${\mathrm{NoCoLA}}_{\text{zero}}$

One disadvantage of ${\mathbf{NoCoLA}}_{\text{class}}$ is that the results are skewed by the second-stage supervised training, and it can be problematic to disentangle the properties of the LM from those of the classifier (Belinkov, 2022). In contrast, the pure LM-based evaluation of ${\mathbf{NoCoLA}}_{\text{zero}}$ attempts to measure the linguistic knowledge of a language model in a zero-shot manner, without any additional training. The dataset consists of 99115 sentence pairs; each pair differs minimally on the surface level, but only one of the sentences is acceptable. We can use the intrinsic ability of language models to assign a probability to every sentence and test how often a language model assigns a higher probability to the correct sentence, as in Warstadt et al. (2020).

CLM evaluation. Causal language models are trained to estimate $p(\mathbf{s}_t \mid \mathbf{s}_{<t})$ for a sentence $\mathbf{s}$ and token $\mathbf{s}_t$, where $\mathbf{s}_{<t} = (\mathbf{s}_i \mid i < t)$; the sentence log-probability is then simply given by $\log p(\mathbf{s}) = \sum_{t=1}^{N} \log p(\mathbf{s}_t \mid \mathbf{s}_{<t})$.
MLM evaluation. The issue with masked language models is that they are not designed to calculate the joint probability; they are trained to estimate $p(\mathbf{s}_t \mid \mathbf{s}_{\smallsetminus t})$, the likelihood of a token $\mathbf{s}_t$ given its bidirectional context $\mathbf{s}_{\smallsetminus t} = (\mathbf{s}_i \mid i \neq t)$. We can, however, still use MLMs to infer a score for each sentence where a higher score corresponds to a more likely sentence. Wang and Cho (2019) defined the pseudo-log-likelihood score of a sentence $\mathbf{s}$ with model $\theta$ as

$$
\operatorname{PLL}(\mathbf{s}) = \frac{1}{N}\sum_{t=1}^{N}\log p(\mathbf{s}_t \mid \mathbf{s}_{\smallsetminus t};\theta).
$$

Salazar et al. (2020) tested PLL and found that it produces accurate predictions on BLiMP. We adopt their approach and evaluate our models with PLL.
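A corresponding sketch of PLL scoring (again a hedged assumption on our part, using transformers with the multilingual BERT checkpoint as a placeholder) masks one position at a time and accumulates the log-probability of the original token:

```python
# Pseudo-log-likelihood scoring for a masked LM, following the formula above.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
model.eval()

@torch.no_grad()
def pll(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids[0]
    total, n = 0.0, 0
    for t in range(1, len(ids) - 1):           # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[t] = tok.mask_token_id          # mask exactly one token
        logits = model(masked.unsqueeze(0)).logits[0, t]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[t]].item()      # log p(s_t | s_\t)
        n += 1
    return total / n                           # (1/N) * sum log p(s_t | s_\t)

print(pll("Jeg er ikke nordmann, men jeg trives i Norge."))
```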
| Model | Infl. | W. choice | Spell. | Miss. | Superfl. | Punct. | W. order | Cap. | Comp. | Der. | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| BERT${}_{\text{base}}$ (Devlin et al., 2019) | 50.70 | 53.55 | 63.43 | 60.44 | 51.69 | 79.33 | 51.85 | 82.54 | 54.31 | 54.11 | 59.48 |
| mBERT${}_{\text{base}}$ (Devlin et al., 2019) | 79.92 | 69.05 | 90.74 | 76.91 | 78.84 | 83.97 | 74.88 | 87.88 | 78.72 | 80.44 | 79.53 |
| XLM-R${}_{\text{base}}$ (Conneau et al., 2020) | 91.43 | 85.28 | 92.60 | 87.43 | 87.56 | 83.93 | 84.33 | 90.60 | 89.63 | 91.96 | 88.02 |
| ScandiBERT (Hofmann et al., 2022) | 93.43 | 89.79 | 90.84 | 90.14 | 90.05 | 87.10 | 90.08 | 90.55 | 85.82 | 90.68 | 90.27 |
| NB-BERT${}_{\text{base}}$ (Kummervold et al., 2021) | 93.76 | 89.19 | 97.14 | 86.54 | 92.48 | 73.98 | 90.94 | 92.73 | 91.15 | 94.70 | 89.04 |
| NorBERT 1 (Kutuzov et al., 2021) | 93.46 | 88.46 | 94.54 | 88.66 | 89.41 | 88.46 | 92.01 | 94.26 | 90.83 | 93.05 | 90.83 |
| NorBERT 2 (Kutuzov et al., 2021) | 91.66 | 88.20 | 96.88 | 89.22 | 90.91 | 75.82 | 92.67 | 93.13 | 74.18 | 92.69 | 88.51 |
| XLM-R${}_{\text{large}}$ (Conneau et al., 2020) | 92.54 | 88.17 | 90.06 | 88.57 | 89.28 | 80.84 | 84.52 | 91.35 | 89.70 | 93.24 | 88.27 |
| NB-BERT${}_{\text{large}}$ (Kummervold et al., 2021) | 95.20 | 92.41 | 95.16 | 91.47 | 91.92 | 85.33 | 93.36 | 17.01 | 89.56 | 92.87 | 90.51 |

Table 2: The accuracy values of zero-shot evaluation on ${\mathbf{NoCoLA}}_{\text{zero}}$. Fine-grained results over the different error types (described in Appendix A) are reported, as well as the overall average over all sentence pairs in the dataset.

## 5 Results

### 5.1 Results on ${\mathbf{NoCoLA}}_{\text{class}}$

The results from benchmarking the publicly available Norwegian language models on the classification task can be seen in Table 3. The classification accuracy is around 80% for these models. One exception is the slightly older NorBERT 1, which performs substantially worse, even though it was trained on clean Norwegian data: Wikipedia and newspaper articles (Kutuzov et al., 2021). We use the English BERT${}_{\text{base}}$ as a naive baseline, which gives a lower bound on the performance of any decent Norwegian language model; it has the worst performance of all our models. The two largest models give a small increase in performance compared to the moderately sized versions of the same models.
| Model | Lang. | Size | Accuracy | MCC |
|---|---|---|---|---|
| BERT${}_{\text{base}}$ | en | 110M | 69.56 ± 0.37 | 23.99 ± 0.41 |
| mBERT${}_{\text{base}}$ | multi | 178M | 75.28 ± 0.66 | 46.39 ± 0.67 |
| XLM-R${}_{\text{base}}$ | multi | 278M | 79.29 ± 0.20 | 55.14 ± 0.36 |
| ScandiBERT | multi | 124M | 80.25 ± 0.33 | 57.12 ± 0.37 |
| NB-BERT${}_{\text{base}}$ | no | 178M | 80.69 ± 0.44 | 58.10 ± 0.48 |
| NorBERT 1 | no | 111M | 71.53 ± 0.80 | 35.85 ± 1.70 |
| NorBERT 2 | no | 125M | 79.99 ± 0.27 | 56.09 ± 0.30 |
| XLM-R${}_{\text{large}}$ | multi | 560M | 81.03 ± 0.27 | 58.56 ± 0.30 |
| NB-BERT${}_{\text{large}}$ | no | 355M | **81.43 ± 0.32** | **59.68 ± 0.14** |

Table 3: Accuracy and the Matthews correlation coefficient (Matthews, 1975), the main metric of ${\mathbf{NoCoLA}}_{\text{class}}$. We report the mean and standard deviation across five runs on the test split.

### 5.2 Results on ${\mathbf{NoCoLA}}_{\text{zero}}$

On the raw zero-shot diagnostic task (Table 2), all models trained on Norwegian or Scandinavian languages perform well, with results around 90% accuracy. The best performance comes, perhaps surprisingly, from NorBERT 1, possibly because it was pre-trained on a relatively small clean corpus. Remarkably, an increased number of parameters does not seem to improve performance on this task.

We have also included accuracy scores for the individual error types; these fine-grained scores can be used as a helpful cue for NLP researchers who develop new language models. Comparably low scores can signal a problem with their training corpus or with their tokenizer. For example, the two NB-BERT models are relatively weak on punctuation-related errors. The large version is trained on uncased data, which explains this model's inability to understand the case-related errors. ScandiBERT performs comparably to the Norwegian models on most parameters except for spelling.

## 6 Conclusion

In this paper we have proposed NoCoLA, the first dataset for linguistic acceptability in Norwegian Bokmål. We showed how to use it for measuring the linguistic knowledge of language models on both a classification task and a zero-shot probability comparison task. We have described how the datasets were created and what their motivation is, compared them to related work in English NLP, and showed how to use them for fine-grained error analysis of language models.

Lastly, we evaluated all existing Norwegian language models on both proposed tasks. These results suggest that models trained specifically for Norwegian or Scandinavian languages perform better at discriminating between acceptable and non-acceptable sentences. The classification results also show that linguistic acceptability is a relatively hard task, as none of the models achieved more than 60% on the main MCC metric. The results on our diagnostic dataset highlight some shortcomings of the existing models. We will release all evaluation resources in the camera-ready version.

## References

Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, 48(1):207-219.

Stig Johan Berggren. 2019. Automated assessment of Norwegian L2 essays using multi-task learning. Master's thesis, University of Oslo.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Valentin Hofmann, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. 2022. Geographic adaptation of pretrained language models.

Per E Kummervold, Javier De la Rosa, Freddy Wetjen, and Svein Arne Brygfjeld. 2021. Operationalizing a national digital library: The case for a Norwegian transformer model. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 20-29, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.

Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, and Stephan Oepen. 2021. Large-scale contextualised language modelling for Norwegian. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 30-40, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.

Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, and Nan Duan. 2021. GLGE: A new general language generation evaluation benchmark. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 408-420, Online. Association for Computational Linguistics.

B.W. Matthews. 1975. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochimica et Biophysica Acta (BBA) - Protein Structure, 405(2):442-451.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.

Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699-2712, Online. Association for Computational Linguistics.

Kari Tenfjord, Paul Meurer, and Knut Hofland. 2006. The ASK Corpus - A Language Learner Corpus of Norwegian as a Second Language. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), Genova 2006. [link].

Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30-36, Minneapolis, Minnesota. Association for Computational Linguistics.

Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.

Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377-392.

Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.

## A ${\mathbf{NoCoLA}}_{\text{zero}}$ error types

- Inflection: wrong form of word. Merged from ASK codes "F": wrong morpho-syntactic form, and "INFL": suffix from the correct category, but wrong form for this particular word. "Jeg vet ikke hvorfor jeg har valgt dette oppgaven." "I do not know why I have chosen this task."

- Word choice: wrong choice of word. Merged from ASK codes "W": wrong word, and "FL": word from another language. "Jeg er et eksempel for det." "I am an example of that."

- Spelling: wrong spelling of word, corresponding to ASK code "ORT". "De er en rik fammilie." "They are a rich family."

- Missing: word should be added. Corresponding to ASK code "M". "Norge kan bidra veldig mye på Europeiske planet." "Norway can contribute a lot at the European level."

- Superfluous: word should be removed. Corresponding to ASK code "R". "Da mistet jeg den beste vennen min i hele livet mitt." "Then I lost the best friend in my whole life."

- Punctuation: add or remove punctuation. Corresponding to ASK codes "PUNC", "PUNCM" and "PUNCR". "Hva skal jeg gjøre etterpå." "What should we do afterwards?"

- Word order: wrong order of words or phrases. Corresponding to ASK code "O". "Hvis du har tillatelse, du kan fiske også." "If you have a licence, you can fish as well."

- Capitalization: add or remove capitalization. Corresponding to ASK code "CAP". "nå liker jeg meg godt i Oslo." "Now I enjoy myself in Oslo."

- Compounding: deviation regarding compounding. Corresponding to ASK codes "PART" and "SPL". "Etter på skal jeg studere for å bli sykepleier." "Afterwards I want to study to become a nurse."

- Derivation: deviation regarding derivation. Corresponding to ASK code "DER". "Derfor er jeg helt enig med forbudelse mot krenkende uttalelser." "Therefore I completely agree with the ban on offensive statements."

- Other: any other error.

![019640e8-2e3a-7a14-8eba-7b6892344dd8_6_209_639_1233_932_0.jpg](images/019640e8-2e3a-7a14-8eba-7b6892344dd8_6_209_639_1233_932_0.jpg)

Figure 1: Distribution of error types in the NoCoLA datasets.
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..16e8805acdaf4863087dc88fd13cd50e599565d0
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,417 @@

## Neural Text-to-Speech Synthesis for Võro

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract

This paper presents the first high-quality neural text-to-speech (TTS) system for Võro, a minority language spoken in Southern Estonia. By leveraging existing Estonian TTS models and datasets, we analyze whether common low-resource NLP techniques, such as cross-lingual transfer learning from related languages or multi-task learning, can benefit our low-resource use case. Our results show that we can achieve high-quality Võro TTS without transfer learning and that using more diverse training data can even decrease synthesis quality. While these techniques may still be useful in some cases, our work highlights the need for caution when they are applied in specific low-resource scenarios, and it can provide valuable insights for future low-resource research and efforts in preserving minority languages.

## 1 Introduction

The advancements in neural text-to-speech (TTS) technology have greatly improved the quality of speech synthesis for many languages. However, despite the potential benefits of TTS for facilitating accessibility and language preservation, developing TTS systems for low-resource languages remains challenging due to the limited availability of training data for these languages.

Võro, a Finno-Ugric minority language spoken in Southern Estonia, serves as a great example of a low-resource language that could benefit from TTS technology. While linguistic resources for Võro are limited, the language is closely related to Estonian - a high-resource Finno-Ugric language with significantly more datasets, tools, and pre-trained models.

The goal of this paper is to present the first high-quality neural TTS system for Võro and evaluate various low-resource NLP techniques for improving synthesis quality for the language. By leveraging existing Estonian TTS models and datasets, we investigate the impact of transfer learning from related languages and of multi-speaker and multilingual approaches on the TTS quality of Võro.

The main contributions of this paper are:

1. We develop the first high-quality neural text-to-speech system for Võro and make it publicly available ${}^{1}$.

2. We show that having only 1.5 hours of Võro speech data per speaker is sufficient to develop TTS systems for low-resource languages without using cross-lingual transfer learning or additional monolingual data.
3. We highlight the potential negative effects of diversifying low-resource TTS datasets with data from closely related languages.

## 2 Background

As neural text-to-speech models require vast amounts of data, existing research has proposed several approaches to mitigate the issue of insufficient training data. For example, several works have shown that cross-lingual pretraining improves the quality of low-resource TTS systems (Chen et al., 2019; Xu et al., 2020).

In a survey on multilingual strategies for low-resource TTS, Do et al. (2021) evaluated the usefulness of multilingual datasets for improving low-resource language performance. They observed that for sequence-to-sequence models, including additional data from other languages is almost always beneficial and often outweighs the negative effect of having a lower ratio of target data in the entire training dataset. The authors also noted that there is no clear evidence that using supporting languages from the same language family is more beneficial, but claimed that using a shared input representation space (such as phonemes) may be more important.

---

${}^{1}$ Link will be added after the anonymization period

---

At the same time, using closely related languages to boost low-resource performance has been successful for many text-based NLP tasks, including the development of Finno-Ugric machine translation systems that also cover the Võro language (Tars et al., 2021). Unfortunately, the usage of neural methods for Võro has so far been limited to this example. There is also no existing research on Võro TTS. While the Estonian Language Institute and the Võro Institute have collaborated to create an HMM-based TTS system for Võro ${}^{2}$, this work has not been described in research publications.

---

${}^{2}$ https://www.eki.ee/~indrek/voru/index.php

---

## 3 Methodology

In this section, we present our methodology and experimental setup. Our approach evaluates the benefits of low-resource TTS approaches when training non-autoregressive Transformer-based models (Ren et al., 2019; Łańcucki, 2021). We focus on three common strategies: cross-lingual transfer learning from a pre-trained Estonian TTS model, combining data from multiple Võro speakers, and including Estonian data to create a multilingual system. Additionally, we explore data augmentation to handle the orthographic variation of Võro.

### 3.1 Datasets

Our experiments used speech data from two Võro speakers: an adult male and a child (female). Both datasets were obtained from the Estonian Language Institute and contained an identical set of 1132 sentences, out of which 100 were set aside for evaluation purposes.

The Estonian dataset consisted of 6 male and 4 female speakers from the Speech Corpus of Estonian News Sentences (Fishel et al., 2020) and the Estonian Language Institute's audiobook corpora (Piits, 2022a,b). A subset of 1000 sentences per speaker was selected from the Estonian corpora to balance the training dataset.

The audio files were resampled at 22050 Hz and converted into mel-spectrograms using a Hann window with a frame size of 1024 and a hop length of 256; a sketch of this step is given below. The mel-spectrogram frames were aligned to the graphemes using the Estonian alignment model by Alumäe et al. (2018). Training a separate alignment model for Võro was also considered, but initial testing showed that the Estonian model was able to produce high-quality alignments.
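A minimal sketch of the resampling and mel-spectrogram step described above, assuming librosa; the exact filterbank settings of the original pipeline are not specified here, so library defaults are used, and the input file name is hypothetical:

```python
# Resample to 22050 Hz and compute a log-mel-spectrogram with a Hann window,
# FFT (frame) size 1024 and hop length 256, matching the settings in the text.
import librosa
import numpy as np

def wav_to_mel(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=22050)        # resamples on load
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=1024, hop_length=256, window="hann"
    )
    return np.log(np.clip(mel, 1e-5, None))     # log compression, clipped for stability

# mel = wav_to_mel("voro_sample.wav")           # hypothetical file name
```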
The alignment was also used to trim excessive pauses in the audio.

All datasets were lowercased, and punctuation was normalized to a limited set of characters to reduce the vocabulary size. In total, the training dataset contained 3 hours of Võro and 14 hours of Estonian speech.

### 3.2 Data augmentation

While the Võro dataset follows a standardized version of Võro orthography, many speakers and well-known news outlets do not conform to this standard. For example, the glottal stop (q) may be omitted or used only when it affects the meaning of the word, and some speakers may also use an apostrophe instead of the letter q. Similarly, an apostrophe or an acute accent that marks palatalization is often used only when it affects the meaning.

We considered it an important challenge to create a system that can successfully synthesize speech from all common written forms of Võro. As there are no existing NLP tools for Võro that would allow us to analyze these features automatically, we decided to use data augmentation: we generate orthographic alternatives in which glottal stops or palatalization markers are removed, so that the system can cope with different orthographies (see the sketch after this section).

Additionally, while our dataset contained the letter y, all cases of it were replaced with õ, as the two are no longer differentiated according to the orthographic standardization changes from 2005.
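A simplified sketch of this augmentation, purely as our illustration of the idea rather than the exact rules used; the example phrase is hypothetical:

```python
# Generate orthographic variants: normalize y to õ, and add a variant with
# glottal stops (q) and apostrophe-marked palatalization removed.
# Acute-accent palatalization marks are ignored here for simplicity.
def augment(sentence: str) -> list[str]:
    normalized = sentence.replace("y", "õ").replace("Y", "Õ")
    variants = [normalized]
    stripped = normalized.replace("q", "").replace("'", "")
    if stripped != normalized:
        variants.append(stripped)
    return variants

print(augment("vanaq mõtsaq"))   # hypothetical Võro phrase
# -> ['vanaq mõtsaq', 'vana mõtsa']
```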
### 3.3 Model Configuration

All models were trained using an open-source implementation of a non-autoregressive Transformer-based (Vaswani et al., 2017) model. The architecture is similar to FastPitch (Łańcucki, 2021), with explicit duration and pitch prediction components. An existing multi-speaker model for Estonian (Rätsep et al., 2022) was used for our cross-lingual transfer learning experiments. In multi-speaker systems, the speaker identity was marked with a prepended global style token (Wang et al., 2018).

We trained models with three different data configurations: single-speaker Võro models for each speaker, multi-speaker Võro models with both speakers, and multi-speaker multilingual models with both Estonian and Võro data. For each data configuration, we also trained another model initialized with the weights of the existing Estonian model. All models were trained for at least 400k steps, using identical hyperparameters.

## 4 Results

To assess the quality of the models, we conducted a mean opinion score (MOS) (Chu and Peng, 2001) evaluation ${}^{3}$ among volunteers from the Võro community. The evaluators were required to know the Võro language but did not have to be native speakers. Of the 41 volunteers, 6 considered themselves native speakers, and 9 had a self-reported Võro level of C1 or higher. Many participants with lower levels of Võro knowledge also mentioned that their passive language skills were stronger, as they mostly used Võro when communicating with older family members who were native speakers.

---

${}^{3}$ A link to evaluation samples will be added after the anonymization period

---

The evaluation used a subset of 50 random sentences per speaker (100 in total per method) from the held-out dataset, and the samples were generated using pretrained HiFi-GAN (Kong et al., 2020) models. The appropriate vocoder for each speaker was selected by evaluating samples generated with multiple vocoder models. For the lower-pitched male speaker, we used a model trained on the VCTK dataset (Yamagishi et al., 2019), and for the child speaker, we used a model trained on the LJ Speech corpus (Ito and Johnson, 2017) and finetuned on Tacotron 2 (Shen et al., 2018) output. We also included ground truth samples from the held-out dataset and ground truth samples converted to mel-spectrograms and reconstructed by the same vocoder models.

The evaluation results can be seen in Table 1. Expectedly, ground truth samples in their original and reconstructed forms scored the highest among the participants. Of the TTS models, the highest scores were given to the single-speaker models. These were followed by the multi-speaker Võro models, but the performance drop from the single-speaker models should not be considered significant. The multilingual models showed consistently worse performance compared to the monolingual models. Additionally, we observe minor benefits from using cross-lingual transfer learning.
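The 95% confidence intervals reported in Table 1 can be reproduced with the usual normal approximation; a sketch assuming `scores` collects all 1-5 ratings gathered for one system:

```python
# MOS with a 95% confidence interval (normal approximation).
import math

def mos_with_ci(scores: list[int]) -> tuple[float, float]:
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)   # sample variance
    half_width = 1.96 * math.sqrt(var / n)                 # 95% CI half-width
    return mean, half_width

mean, ci = mos_with_ci([4, 3, 5, 4, 4, 3, 4, 5, 3, 4])     # made-up ratings
print(f"{mean:.2f} ± {ci:.2f}")
```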
| Method | MOS |
|---|---|
| Ground truth | 4.03 ± 0.12 |
| Ground truth + vocoder | 3.83 ± 0.13 |
| Single-speaker | 3.55 ± 0.15 |
| Single-speaker (transfer) | 3.62 ± 0.15 |
| Multi-speaker | 3.43 ± 0.15 |
| Multi-speaker (transfer) | 3.50 ± 0.13 |
| Multilingual | 3.10 ± 0.15 |
| Multilingual (transfer) | 3.29 ± 0.15 |

Table 1: Mean opinion scores with 95% confidence intervals on the held-out dataset.

In addition to scoring samples, participants were encouraged to comment on their overall impressions of speech quality and the evaluation process. Many expressed positive surprise about the synthesis quality and mentioned the presence of TTS artifacts, such as crackling, as their main evaluation criterion. Some participants also noted that while almost all samples were intelligible, they did not always sound like a native Võro speaker, especially when producing the glottal stop sound. Unfortunately, as the participants did not know which models produced which samples, further analysis would be needed to assess whether all models are equally prone to this issue and whether it can also be observed in the ground truth examples.

## 5 Discussion and Future Work

Unexpectedly, our MOS evaluation results are in conflict with the existing low-resource TTS literature, which reports benefits from diversifying training data with samples from other speakers or related languages and from using cross-lingual transfer learning. This brings into question both the usefulness of these techniques and our approach.

Firstly, it could be argued that the observations about the low negative performance impact of data imbalance by Do et al. (2021) may not apply to non-autoregressive Transformer-based systems, as the study focused on other methods, such as recurrent or convolutional neural networks. Therefore, the performance drop in multilingual models could still be caused by an imbalance between the two languages in the dataset. Alternatively, as our model size was dictated by the existing pretrained Estonian models, the model may lack sufficient capacity to work in a multilingual setting.

Additionally, it is possible that we should no longer consider Võro a low-resource language. Based on initial testing, we found that the amount of speech data required for Transformer-based models to produce coherent speech is between 1-2 hours, and improvements from using more data are significantly less noticeable. Similar observations of reduced data requirements for Transformer-based models have also been recently reported by Pine et al. (2022). In our case, we had 1.5 hours of speech per speaker, which may have been sufficient for us not to benefit from additional data from other speakers. However, a more detailed evaluation methodology could be considered to measure the effects on specific features of synthetic speech, such as prosodic variability or pronunciation mistakes.

As our work focused on creating a high-quality system for Võro without applying artificial constraints, these points were not explicitly explored in our work. However, in the future, low-resource TTS strategies should be further reviewed specifically for Transformer-based architectures and for different levels of resource constraint. Until then, these strategies should be used with caution and evaluated for each specific low-resource scenario.

## 6 Conclusion

This article presented the first high-quality neural text-to-speech system for the Võro language. We explored the usage of Estonian TTS models and datasets to boost the performance of our low-resource use case.

Our results suggest that we can achieve high-quality Võro TTS without transfer learning or data from multiple speakers or closely related languages. While these techniques may still be helpful in some cases, we highlight the need for further research and evaluation when they are applied in specific low-resource scenarios.

## References

Tanel Alumäe, Ottokar Tilk, and Asadullah. 2018. Advanced rich transcription system for Estonian speech. In Human Language Technologies - The Baltic Perspective: Proceedings of the Eighth International Conference, pages 1-8. IOS Press.

Yuan-Jui Chen, Tao Tu, Cheng-chieh Yeh, and Hung-Yi Lee. 2019. End-to-end text-to-speech for low-resource languages by cross-lingual transfer learning. In Proc. Interspeech 2019, pages 2075-2079.

Min Chu and Hu Peng. 2001. An objective measure for estimating MOS of synthesized speech. In EUROSPEECH 2001, 7th European Conference on Speech Communication, pages 2087-2090. ISCA.

Phat Do, Matt Coler, Jelske Dijkstra, and Esther Klabbers. 2021. A systematic review and analysis of multilingual data strategies in text-to-speech for low-resource languages. In Proc. Interspeech 2021, pages 16-20.

Mark Fishel, Annika Laumets-Tättar, and Liisa Rätsep. 2020. Speech corpus of Estonian news sentences. https://doi.org/10.15155/9-00-0000-0000-0000-001ABL.

Keith Ito and Linda Johnson. 2017. The LJ Speech dataset. https://keithito.com/LJ-Speech-Dataset/.

Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis. In Advances in Neural Information Processing Systems, pages 17022-17033. Curran Associates, Inc.

Adrian Łańcucki. 2021. FastPitch: Parallel text-to-speech with pitch prediction. In 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6588-6592.

Liisi Piits. 2022a. Estonian female voice audiobook corpus for speech synthesis. https://doi.org/10.15155/3-00-0000-0000-0000-090D4L.

Liisi Piits. 2022b. Estonian male voice audiobook corpus for speech synthesis. https://doi.org/10.15155/3-00-0000-0000-0000-08BF4L.

Aidan Pine, Dan Wells, Nathan Brinklow, Patrick Littell, and Korin Richmond. 2022. Requirements and motivations of low-resource speech synthesis for language revitalization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland. Association for Computational Linguistics.

Liisa Rätsep, Rasmus Lellep, and Mark Fishel. 2022. Estonian text-to-speech synthesis with non-autoregressive transformers. Baltic Journal of Modern Computing, 10.

Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. FastSpeech: Fast, robust and controllable text to speech. In Advances in Neural Information Processing Systems. Curran Associates, Inc.

Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, R. J. Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, and Yonghui Wu. 2018. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779-4783.
2021. Extremely low-resource machine translation for closely related languages. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 41-52, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. Curran Associates, Inc.

Yuxuan Wang, Daisy Stanton, Yu Zhang, R. J. Skerry-Ryan, Eric Battenberg, Joel Shor, Ying Xiao, Fei Ren, Ye Jia, and Rif A. Saurous. 2018. Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis. arXiv preprint arXiv:1803.09017.

Jin Xu, Xu Tan, Yi Ren, Tao Qin, Jian Li, Sheng Zhao, and Tie-Yan Liu. 2020. LRSpeech: Extremely low-resource speech synthesis and recognition. arXiv preprint arXiv:2008.03687.

Junichi Yamagishi, Christophe Veaux, and Kirsten MacDonald. 2019. CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit (version 0.92). https://datashare.ed.ac.uk/handle/10283/3443.
\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e9570bf33c4e82615be1351246522028c55750af
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/V5PGSHHJEw/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,289 @@

§ NEURAL TEXT-TO-SPEECH SYNTHESIS FOR VÖRO

Anonymous Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymouser Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymousest Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

§ ABSTRACT

This paper presents the first high-quality neural text-to-speech (TTS) system for Vöro, a minority language spoken in Southern Estonia. By leveraging existing Estonian TTS models and datasets, we analyze whether common low-resource NLP techniques, such as cross-lingual transfer learning from related languages or multi-task learning, can benefit our low-resource use case. Our results show that we can achieve high-quality Vöro TTS without transfer learning and that using more diverse training data can even decrease synthesis quality. While these techniques may still be useful in some cases, our work highlights the need for caution when applying them in specific low-resource scenarios, and it can provide valuable insights for future low-resource research and efforts in preserving minority languages.
§ 1 INTRODUCTION

The advancements in neural text-to-speech (TTS) technology have greatly improved the quality of speech synthesis for many languages. However, despite the potential benefits of TTS for facilitating accessibility and language preservation, developing TTS systems for low-resource languages remains challenging due to the limited availability of training data for these languages.

Vöro, a Finno-Ugric minority language spoken in Southern Estonia, serves as a prime example of a low-resource language that could benefit from TTS technology. While linguistic resources for Vöro are limited, the language is closely related to Estonian - a high-resource Finno-Ugric language with significantly more datasets, tools, and pre-trained models.

The goal of this paper is to present the first high-quality neural TTS system for Vöro and to evaluate various low-resource NLP techniques for improving synthesis quality for the language. By leveraging existing Estonian TTS models and datasets, we investigate the impact of transfer learning from related languages and of multi-speaker and multilingual approaches on the TTS quality of Vöro.

The main contributions of this paper are:

1. We develop the first high-quality neural text-to-speech system for Vöro and make it publicly available${}^{1}$.

2. We show that having only 1.5 hours of Vöro speech data per speaker is sufficient to develop TTS systems for low-resource languages without using cross-lingual transfer learning or additional monolingual data.

3. We highlight the potential negative effects of diversifying low-resource TTS datasets with data from closely related languages.

§ 2 BACKGROUND

As neural text-to-speech models require vast amounts of data, existing research has proposed several approaches to mitigate the issue of insufficient training data. For example, several works have shown that cross-lingual pretraining improves the quality of low-resource TTS systems (Chen et al., 2019; Xu et al., 2020).

In a survey on multilingual strategies for low-resource TTS, Do et al. (2021) evaluated the usefulness of multilingual datasets for improving low-resource language performance. They observed that for sequence-to-sequence models, including additional data from other languages is almost always beneficial and often outweighs the negative effect of having a lower ratio of target data in the entire training dataset. The authors also noted that there is no clear evidence that using supporting languages from the same language family is more beneficial, but claimed that using a shared input representation space (such as phonemes) may be more important.

${}^{1}$ Link will be added after the anonymization period

At the same time, using closely related languages to boost low-resource performance has been successful for many text-based NLP tasks, including the development of Finno-Ugric machine translation systems that also cover the Vöro language (Tars et al., 2021). Unfortunately, the usage of neural methods for Vöro has so far been limited to this example, and there is no existing research on Vöro TTS. While the Estonian Language Institute and the Vöro Institute have collaborated to create an HMM-based TTS system for Vöro${}^{2}$, this work has not been described in research publications.
§ 3 METHODOLOGY

In this section, we present our methodology and experimental setup. Our approach evaluates the benefits of low-resource TTS approaches when training non-autoregressive Transformer-based models (Ren et al., 2019; Lańcucki, 2021). We focus on three common strategies - cross-lingual transfer learning from a pre-trained Estonian TTS model, combining data from multiple Vöro speakers, and including Estonian data to create a multilingual system. Additionally, we explore data augmentation to handle the orthographic variation of Vöro.

§ 3.1 DATASETS

Our experiments used speech data from two Vöro speakers - an adult male and a child (female). Both datasets were obtained from the Estonian Language Institute and contained an identical set of 1132 sentences, out of which 100 were set aside for evaluation purposes.

The Estonian dataset consisted of 6 male and 4 female speakers from the Speech Corpus of Estonian News Sentences (Fishel et al., 2020) and the Estonian Language Institute's audiobook corpora (Piits, 2022a,b). A subset of 1000 sentences per speaker was selected from the Estonian corpora to balance the training dataset.

The audio files were resampled at 22,050 Hz and converted into mel-spectrograms using a Hann window with a frame size of 1024 and a hop length of 256. The mel-spectrogram frames were aligned to the graphemes using the Estonian alignment model by Alumäe et al. (2018). Training a separate alignment model for Vöro was also considered, but initial testing showed that the Estonian model was able to produce high-quality alignments. The alignments were also used to trim excessive pauses from the audio.

All datasets were lowercased, and punctuation was normalized to a limited set of characters to reduce the vocabulary size. In total, the training dataset contained 3 hours of Vöro and 14 hours of Estonian speech.

§ 3.2 DATA AUGMENTATION

While the Vöro dataset follows a standardized version of Vöro orthography, many speakers and well-known news outlets do not conform to this standard. For example, the glottal stop (q) may be omitted or used only when it affects the meaning of the word, and some speakers may also use an apostrophe instead of the letter q. Similarly, an apostrophe or an acute accent that marks palatalization is often used only when it affects the meaning.

We considered it an important challenge to create a system that can successfully synthesize speech from all common written formats of Vöro. As there are no existing NLP tools for Vöro that would allow us to analyze these features automatically, we used data augmentation to generate orthographic alternatives in which glottal stops or palatalization features were removed, so that the system can cope with different orthographies. Additionally, while our dataset contained the letter y, all occurrences of it were replaced with õ, as the two are no longer differentiated according to the orthographic standardization changes from 2005. A sketch of such an augmentation step is shown below.
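The following minimal Python sketch illustrates this kind of character-level augmentation. It is our own illustration, not the authors' implementation; the function name and the exact set of palatalization marks are assumptions.

```python
import re

def orthographic_variants(sentence: str) -> set[str]:
    """Generate simplified spelling variants of a standardized Vöro sentence."""
    # y and õ merged in the 2005 orthography reform, so normalize first.
    base = sentence.replace("y", "õ")
    variants = {base}
    # Variant with the glottal-stop letter q removed.
    variants.add(base.replace("q", ""))
    # Variant with palatalization marks (apostrophe or acute accent) removed.
    variants.add(re.sub(r"[’'´]", "", base))
    return variants
```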
§ 3.3 MODEL CONFIGURATION

All models were trained using an open-source implementation of a non-autoregressive Transformer-based (Vaswani et al., 2017) model. The architecture is similar to FastPitch (Lańcucki, 2021), with explicit duration and pitch prediction components. An existing multi-speaker model for Estonian (Rätsep et al., 2022) was used for our cross-lingual transfer learning experiments. In multi-speaker systems, the speaker identity was marked with a prepended global style token (Wang et al., 2018).

We trained models with three different data configurations - single-speaker Vöro models for each speaker, multi-speaker Vöro models with both speakers, and multi-speaker multilingual models with both Estonian and Vöro data. For each data configuration, we also trained another model, which was initialized with the weights of the existing Estonian model. All models were trained for at least 400k steps and with identical hyperparameters.

${}^{2}$ https://www.eki.ee/~indrek/voru/index.php

§ 4 RESULTS

To assess the quality of the models, we conducted a mean opinion score (MOS) (Chu and Peng, 2001) evaluation${}^{3}$ among volunteers from the Vöro community. The evaluators were required to know the Vöro language but did not have to be native speakers. Of the 41 volunteers, 6 considered themselves native speakers, and 9 had a self-reported Vöro level of C1 or higher. Many participants with lower levels of Vöro knowledge also mentioned that their passive language skills were higher, as they mostly used Vöro when communicating with older family members who were native speakers.

The evaluation used a subset of 50 random sentences per speaker (100 total per method) from the held-out dataset, and the samples were generated using pretrained HiFi-GAN (Kong et al., 2020) models. The appropriate model for each speaker was selected by evaluating samples generated with multiple vocoder models. For the lower-pitched male speaker, we used a model trained on the VCTK dataset (Yamagishi et al., 2019), and for the child speaker, we used a model trained on the LJ Speech (Ito and Johnson, 2017) corpus and fine-tuned on Tacotron 2 (Shen et al., 2018) output. We also included ground truth samples from the held-out dataset and ground truth samples converted to mel-spectrograms and reconstructed by the same vocoder models.

The evaluation results can be seen in Table 1. Expectedly, ground truth samples in their original and reconstructed forms scored the highest among the participants. Among the TTS models, the highest scores were given to single-speaker models. These were followed by the multi-speaker Vöro models, but the performance drop from the single-speaker models should not be considered significant. The multilingual models showed consistently worse performance compared to the monolingual models. Additionally, we observe minor benefits from using cross-lingual transfer learning.

| Method | MOS |
| --- | --- |
| Ground truth | 4.03 ± 0.12 |
| Ground truth + vocoder | 3.83 ± 0.13 |
| Single-speaker | 3.55 ± 0.15 |
| Single-speaker (transfer) | 3.62 ± 0.15 |
| Multi-speaker | 3.43 ± 0.15 |
| Multi-speaker (transfer) | 3.50 ± 0.13 |
| Multilingual | 3.10 ± 0.15 |
| Multilingual (transfer) | 3.29 ± 0.15 |

Table 1: Mean opinion scores with 95% confidence intervals on the held-out dataset.
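The confidence intervals in Table 1 correspond to the usual normal-approximation formula over raw ratings. The sketch below is a generic illustration of that statistic with invented example ratings, not the authors' evaluation code.

```python
import math

def mos_ci(ratings):
    """Mean opinion score with a 95% confidence interval (normal approximation)."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    return mean, 1.96 * math.sqrt(var / n)

# Invented example ratings on the 1-5 scale used in the evaluation.
mean, half_width = mos_ci([4, 3, 5, 4, 4, 3, 5, 4])
print(f"MOS = {mean:.2f} ± {half_width:.2f}")
```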
In addition to scoring samples, participants were encouraged to comment on their overall impressions of speech quality and of the evaluation process. Many expressed positive surprise about the synthesis quality and mentioned the presence of TTS artifacts, such as crackling, as their main evaluation criterion. Some participants also noted that while almost all samples were intelligible, they did not always sound like a native Vöro speaker, especially when producing the glottal stop sound. Unfortunately, as the participants did not know which models produced which samples, further analysis would be needed to assess whether all models are equally prone to this issue and whether it can also be observed in ground truth examples.

§ 5 DISCUSSION AND FUTURE WORK

Unexpectedly, our MOS evaluation results are in conflict with the existing low-resource TTS literature, which reports benefits from diversifying training data with samples from other speakers or related languages and from using cross-lingual transfer learning. This brings into question both the usefulness of these techniques and our approach.

Firstly, it could be argued that the observations about the low negative performance impact of data imbalance by Do et al. (2021) may not apply to non-autoregressive Transformer-based systems, as the study focused on other methods, such as recurrent or convolutional neural networks. Therefore, the performance drop in multilingual models could still be caused by an imbalance between the two languages in the dataset. Alternatively, as our model size was dictated by the existing pretrained Estonian models, it may lack sufficient capacity to work in a multilingual setting.

${}^{3}$ A link to evaluation samples will be added after the anonymization period

Additionally, it is possible that we should no longer consider Vöro a low-resource language. Based on initial testing, we found that the amount of speech data required for Transformer-based models to produce coherent speech is between 1 and 2 hours, and improvements from using more data are significantly less noticeable. Similar observations of reduced data requirements for Transformer-based models have also been recently reported by Pine et al. (2022). In our case, we had 1.5 hours of speech per speaker, and this may have been sufficient for us not to benefit from additional data from other speakers. However, a more detailed evaluation methodology could be considered to measure the effects on specific features of synthetic speech, such as prosodic variability or pronunciation mistakes.

As our work focused on creating a high-quality system for Vöro without applying artificial constraints, these points were not explicitly explored in our work. However, in the future, low-resource TTS strategies should be further reviewed specifically for Transformer-based architectures and for different levels of resource constraint. Until then, these strategies should be used with caution and evaluated for each specific low-resource scenario.

§ 6 CONCLUSION

This article presented the first high-quality neural text-to-speech system for the Vöro language. We explored the usage of Estonian TTS models and datasets to boost the performance of our low-resource use case.

Our results suggest that we can achieve high-quality Vöro TTS without transfer learning or using data from multiple speakers or closely related languages. While these techniques may still be helpful in some cases, we highlight the need for further research and evaluation when applying them in specific low-resource scenarios.
\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..5514acc9457a81732ae1766a415115760fd18388
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,849 @@

## ASR Language Resources for Faroese

Anonymous Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymouser Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymousest Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

## Abstract

The aim of this work is to present a set of novel language resources for Faroese suitable for the field of Automatic Speech Recognition, including: an ASR corpus comprising 109 hours of transcribed speech data; acoustic models for systems such as WAV2VEC2, NVIDIA-NeMo, Kaldi and PocketSphinx; a set of n-gram language models; and a set of pronunciation dictionaries covering two different variants of Faroese. We also show comparative results between the distinct acoustic models presented here. All the resources presented in this document are publicly available under Creative Commons licences.

## 1 Introduction

As the digital world has become increasingly prominent and omnipresent in most human activities, the use of more and better language technologies has become a pressing need. For this reason, more and more governments are investing in the development of all kinds of linguistic resources that allow their citizens to be part of the new digital era, with all the benefits it entails. Language technology initiatives in the main regions of the world, such as Europe (Rehm et al., 2020; Nikulásdóttir et al., 2020; Meister et al., 2010; D'Halleweyn et al., 2006), India (Vikas, 2001; Choudhary, 2021), Africa (Grover et al., 2011), China (Kania et al., 2018), Saudi Arabia (Maegaard et al., 2008, 2005) and the Spanish-speaking countries (Fernandez et al., 2016), attest to how important language technologies have become in recent times.

In synchrony with all the developments mentioned above, it is time to talk about the efforts made for the development of the Faroese language in the digital sphere. The most recent initiative in this regard is the Ravnur Project, founded in the Faroe Islands. Thanks to the resources generated and shared by Ravnur, it has been possible to develop all the language resources presented in this document.

### 1.1 Faroese

The Faroe Islands are a group of small islands located in the North Atlantic, halfway between Scotland, Iceland and Norway. The islands are an autonomous territory of the Kingdom of Denmark with Faroese as the official language, which is spoken by around 54,000 people. There are four main dialect areas in the Faroe Islands: north, northwest, central and southern (Petersen, 2022). The Faroe Islands are a bilingual country with Danish as the second official language.
While many native speakers of Faroese use Danish for university education or employment in Denmark, Faroese is spoken as a first language by most of the population and is used in all domains in the Faroe Islands, e.g. in education, the public sector and the church. The first and, to this date, only Faroese speech synthesis system was created in 2005 (Helgason and Gullbein, 2005) by combining efforts from researchers at the University of Stockholm and the University of the Faroe Islands, and it is used by the visually impaired community. Currently, there is a huge demand for Faroese ASR solutions, needed by the deaf, visually impaired and dyslexic communities - and also by the general public, who wish to use their mother tongue when interacting with technology.

### 1.2 The Ravnur Project

The Faroese ASR research project, Ravnur, was assembled in 2019 (Foundation, 2019). The aim of the project was to create open-source resources that could be used to create automatic speech recognition (ASR) systems for Faroese. These resources would also be useful for creating other types of language technologies, as well as for linguistic research. The project was founded by public and private initiators and investors, including the Faroese government. The development team consisted of a project leader, a technical leader, three native-speaking junior linguists, an IT assistant and five university student assistants, as well as external advisors. The project concluded in the summer of 2022 with the publication of the Basic Language Resource Kit for Faroese (BLARK) (Simonsen et al., 2022; Debess et al., 2022).

### 1.3 Basic Language Resource Kit (BLARK) for Faroese

A BLARK is defined as the minimal set of language resources needed to create language and speech technology for a language (Krauwer, 2003; Maegaard et al., 2006). A BLARK is ideally language independent, but because languages may have different requirements, the contents of the BLARK may vary in some respects from language to language.

As Ravnur was an ASR project, the focus was on collecting good-quality recordings of Faroese and creating a transcription corpus and a pronunciation dictionary. During the course of the project, Ravnur collected 135 hours of recordings of 433 speakers in total (249 female and 184 male speakers) reading text of various genres, such as news, blogs, Wikipedia, law texts, GPS commands and word lists. The participants self-reported their gender, native language, dialect and age, which varied between 15 and 83 years. The recordings were made on TASCAM DR-40 Linear PCM audio recorders using the built-in stereo microphones, in 16-bit WAVE with a sample rate of 48 kHz. All recordings have been manually orthographically transcribed, while part of the speech corpus has also been phonetically transcribed. The transcriptions were made by the university student assistants and the three Faroese linguists working for the project. All words that occur in the recordings were put in a pronunciation dictionary. The dictionary includes phonetic transcriptions written in SAMPA as well as PAROLE PoS-tags (Bilgram and Keson, 1998; Keson, 1998)${}^{1}$.

As can be seen, the BLARK developed by Ravnur is the starting point of the novel machine learning models presented in this work.
## 2 The Ravnursson Corpus

Ravnursson${}^{2}$ (Hernández Mena and Simonsen, 2022) is an ASR corpus with a length of 109 hours, extracted from the BLARK described in section 1.3. Unlike the original BLARK, the Ravnursson corpus contains only the speech files along with their respective transcriptions. The main characteristics of the corpus are the following:

- The audio files in this corpus are distributed in FLAC format at 16 kHz @ 16 bit, mono.

- The corpus contains 71,949 speech files from 433 speakers.

- The corpus is split into train, dev and test portions. The lengths of the portions are: train = 100h08m, dev = 4h30m, test = 4h30m.

- The development and test portions have exactly 10 male and 10 female speakers each, and both portions have exactly the same size in hours.

- Due to the limited number of prompts to read, only 39,945 of the 71,949 prompts in the whole corpus are unique. In other words, 44.48% of the prompts in the corpus are repeated at least once.

- Despite the repeated prompts in the corpus, the development and test portions do not share speakers with each other or with the training set.

### 2.1 Analysis of the Repeated Prompts

As the number of reading prompts for the corpus was limited during the recording process, the common denominator in the Ravnursson corpus is that one prompt is read by more than one speaker. This is relevant because it is common practice in ASR to create a language model using the prompts found in the train portion of the corpus. That is not recommended for the Ravnursson Corpus, as it contains several prompts shared by all the portions, which would introduce an important bias into the language modeling task.

Table 1 shows some statistics about the repeated prompts across all the portions of the corpus.

---

${}^{2}$ As a matter of fact, the name Ravnursson comes from Ravnur (a tribute to the Ravnur Project) and the suffix "son", which in Icelandic means "son of". Therefore, the name "Ravnursson" means "The (Icelandic) son of Ravnur". The double "ss" is just for aesthetics.

${}^{1}$ Both the Faroese SAMPA alphabet (sometimes called FARSAMPA) and the PAROLE PoS-tags were created by Ravnur for the BLARK.

---

The table is to be read as follows: for example, the first row indicates that there is a total of 71,949 reading prompts in the whole corpus; 39,945 of those are unique and 32,004 are repeated at least once. Therefore, 44.48% of the prompts in the whole corpus are repeated at least once. The same applies to the rest of the rows in Table 1.
| Corpus Portion | Total Prompts | Unique Prompts | Repeat. Prompts | % |
| --- | --- | --- | --- | --- |
| All | 71,949 | 39,945 | 32,004 | 44.48% |
| Train | 65,616 | 38,646 | 26,970 | 41.1% |
| Test | 3,002 | 2,887 | 115 | 3.83% |
| Dev | 3,331 | 3,302 | 29 | 0.87% |
Table 1: Analysis of Repeated Prompts.

### 2.2 Corpus Organization

The "speech" directory contains all the speech files of the corpus. The files in the speech folder are divided into three directories: train, dev and test. The train portion is sub-divided into three types of recordings: RDATA1O, RDATA1OP and RDATA2; this is due to the organization of the recordings in the original BLARK, where the recordings are divided into Rdata1 and Rdata2.

One main difference between Rdata1 and Rdata2 is that the reading environment for Rdata2 was controlled by a software tool called "PushPrompt", which is included in the original BLARK (Simonsen et al., 2022). Another difference is that Rdata1 contains some transcriptions labelled at the phoneme level. The audio files in the speech directory of the Ravnursson corpus are divided into the folders RDATA1O, where "O" stands for "Orthographic", and RDATA1OP, where "O" stands for Orthographic and "P" for Phonetic. These categories are simply carried over from the original BLARK; they do not imply that the Ravnursson corpus comes with transcriptions at the phonetic level. In the case of the dev and test portions, the data come only from Rdata2, which does not have labels at the phonetic level in the original BLARK.

### 2.3 The Metadata File

The metadata file is a tab-separated values (TSV) file containing all the relevant information about the corpus. The file can be read using the Pandas (McKinney et al., 2010) library in Python, and it comprises the following 12 columns (a short loading example is given after the list):

1. id: The filename without the extension ".flac".

2. speaker_id: The filename without the segment number.

3. filename: Full filename including the extension ".flac".

4. sentence_norm: The normalized transcription: no punctuation marks, no digits, lower-case letters, a single space between words.

5. gender: The gender of the speaker: male or female.

6. age: The age range of the speaker: 15-35, 36-60, 61+ years old.

7. native_language: "Faroese" in all cases.

8. dialect: The speaker's dialect.

9. created_at: The date when the audio file was recorded.

10. duration: Duration of the speech file in seconds.

11. sample_rate: 16 kHz in all cases.

12. status: The corpus portion: train, test or dev.
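For illustration, the metadata file can be loaded and summarized with Pandas as follows. The file path is hypothetical, while the column names are the ones listed above.

```python
import pandas as pd

# Load the Ravnursson metadata (path is hypothetical; columns as listed above).
meta = pd.read_csv("ravnursson/metadata.tsv", sep="\t")

# Hours of speech per corpus portion (duration is given in seconds).
print(meta.groupby("status")["duration"].sum() / 3600)

# Number of distinct speakers per gender in the test portion.
test = meta[meta["status"] == "test"]
print(test.groupby("gender")["speaker_id"].nunique())
```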
### 2.4 Codification of the Audio Filenames

In the Ravnursson corpus, the filenames of the audio files encode relevant information about the respective speech files. Table 2 shows a typical audio filename, MEY01_040319_rok0_0009.flac, broken down into the eight fields of information encoded in it.

| Field | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Part | M | E | Y | 01 | 040319 | rok0 | 0009 | .flac |
Table 2: Audio Filename Format.

The information encoded in the filename is as follows (a decoding sketch follows the list):

1. Gender of the Speaker: M for male or K for female.

2. Dialect Group: U for Suðuroy, A for Sandoy, S for Suðurstreymoy, E for Norðurstreymoy/Eysturoy (excluding Eiði, Gjógv and Funningur), V for Vágar and N for Norðuroyggjar (including Eiði, Gjógv and Funningur).

3. Age Group: Y for "Younger" (between 15-35 years old), M for "Middle-aged" (between 36-60 years old) and E for "Elderly" (61 years old or older).

4. Number of Speaker in a Group: a number that always consists of two digits, starting at 01, 02, 03, etc. The first speaker in a group with the same gender, dialect group and age group (e.g. MEY) gets the number 01. The next speaker in the same group gets the number 02 (and their ID is therefore MEY02).

5. Date: The date when the speech was recorded (day/month/year).

6. Type of reading material: This code can only be found in speech files in RDATA1O and RDATA1OP. For more information about the types of reading material, please see the documentation of the original BLARK and its directory "readingtexts_1.0".

7. Segment Number: In the original BLARK, each recording session is distributed as one audio file per speaker, which can be very long from the ASR perspective. The audio files are therefore subdivided into segments of around 10 seconds to fit most modern ASR engines. The numbering is continuous for each speaker; the only exception concerns the files MUY01_180519_set4_0004 and MUY02_190120_eind2_0007, which we detected to be empty and removed.

8. File extension: The corpus is distributed in FLAC format.
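As an illustration, the fields can be decoded with a regular expression following Table 2. This sketch is our own; the field names are not part of the corpus, and the reading-material code is treated as optional since it only occurs in RDATA1O/RDATA1OP files.

```python
import re

# Field order follows Table 2; dialect and age letters follow the list above.
PATTERN = re.compile(
    r"(?P<gender>[MK])(?P<dialect>[UASEVN])(?P<age>[YME])(?P<speaker>\d{2})"
    r"_(?P<date>\d{6})(?:_(?P<material>[a-z0-9]+))?_(?P<segment>\d{4})\.flac"
)

def decode_filename(filename: str) -> dict:
    match = PATTERN.fullmatch(filename)
    if match is None:
        raise ValueError(f"unexpected filename: {filename}")
    return match.groupdict()

print(decode_filename("MEY01_040319_rok0_0009.flac"))
```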
## 3 Acoustic Models

The development of the Ravnursson corpus allowed us to create acoustic models for four different ASR systems: WAV2VEC2, NeMo, Kaldi and PocketSphinx. In this section we discuss the details of how we created each of them.

### 3.1 WAV2VEC2 Model

WAV2VEC, released in 2019, is a convolutional neural network that takes raw audio as input and computes a general representation that can be input to a speech recognition system (Schneider et al., 2019). In 2020, a second version, WAV2VEC2 (Baevski et al., 2020), was released. Based on WAV2VEC2, XLSR-53 (Conneau et al., 2020) was also released in 2020. XLSR-53 is an open-source model trained with more than 50k hours of unlabelled speech in 53 languages. It can be used to create acoustic models for any language through a fine-tuning step.

Using XLSR-53 as a starting point, we created an acoustic model suitable for Faroese (Hernandez Mena, 2022b), which is available under a Creative Commons licence (CC BY 4.0). The fine-tuning process for this model lasted 30 epochs.

### 3.2 NeMo Model

NeMo (Neural Modules) is a Python toolkit developed by NVIDIA for creating AI applications. It comes with extendable collections of pre-built modules for automatic speech recognition and natural language processing (Kuchaiev et al., 2019). One of the NeMo modules suitable for speech recognition is called QuartzNet (Kriman et al., 2020), a convolutional model trained with Connectionist Temporal Classification (Graves, 2012), or CTC for short.

In order to train an ASR model for Faroese in NeMo, we used the public checkpoint "QuartzNet15x5Base-En.nemo"${}^{3}$ as a starting point. This model was trained with more than 3k hours of English data in a QuartzNet architecture during 600 epochs. Following the work of Huang et al. (2020), we fine-tuned the checkpoint with the data of the Ravnursson corpus for 236 epochs, obtaining a first checkpoint able to recognize Faroese. Then, we augmented the initial 100 hours of the training portion of the Ravnursson corpus to 300 hours through speed perturbation using two speed rates: 0.9 and 1.1. Finally, we fine-tuned our initial Faroese checkpoint with the augmented data for 163 epochs to obtain the final model (Hernandez Mena, 2022a), which is available under a Creative Commons licence (CC BY 4.0).

---

${}^{3}$ Available at: https://catalog.ngc.nvidia.com/orgs/nvidia/models/nemospeechmodels/files

---
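The three-way speed perturbation described above can be reproduced, for example, with torchaudio's sox effect bindings. This is our illustration under assumed file paths, not the exact pipeline used for the released model.

```python
import torchaudio

def speed_perturb(path: str, factor: float):
    """Return a speed-perturbed copy of a speech file (e.g. factor 0.9 or 1.1)."""
    wav, sr = torchaudio.load(path)
    # "speed" changes rate and duration; "rate" resamples back to the original sr.
    effects = [["speed", str(factor)], ["rate", str(sr)]]
    out, _ = torchaudio.sox_effects.apply_effects_tensor(wav, sr, effects)
    return out

# Hypothetical corpus file, augmented at the two rates used in the paper.
for factor in (0.9, 1.1):
    augmented = speed_perturb("speech/train/RDATA2/MEY01_040319_rok0_0009.flac", factor)
```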
| Manner of articulation | Consonants (SAMPA) |
| --- | --- |
| Voiceless stops | p, t, k |
| Voiced stops | b, d, g |
| Voiceless affricate | tS |
| Voiced affricate | dZ |
| Voiceless fricatives | f, 5, s, S, Z, h |
| Voiced fricatives | V, 4 |
| Voiceless nasals | M, X |
| Voiced nasals | m, n, N |
| Voiceless lateral | L |
| Voiced lateral | l |
| Approximants | r, j, W |

| Vowels | Front | Central | Back |
| --- | --- | --- | --- |
| Close | i, y, I, Y | 3 | u, U |
| Close-mid | e, 2 | 8 | o |
| Open-mid | E, 9 | | O |
| Open | a | | |
Table 3: Phonetic Repertoire of Faroese.

### 3.3 Kaldi Model

Kaldi (Povey et al., 2011), released in 2011, is a well-established toolkit for speech recognition written in C++, which is based on distinct paradigms such as finite-state transducers (Allauzen et al., 2007), Hidden Markov Models (Juang and Rabiner, 1991) and Gaussian Mixture Models (Naeem et al., 2020), as well as neural networks (Rath et al., 2013).

Our "Kaldi Recipe for Faroese" (Hernández Mena, 2022) was created using the Ravnursson corpus as training data. The recipe produces models based on Hidden Markov Models (HMMs) as well as neural networks; specifically, the neural network is an LSTM, or "Long Short-Term Memory" network (Huang et al., 2017). This recipe requires a 3-gram language model (LM) for decoding, a 4-gram LM for re-scoring and a pronouncing dictionary; these elements are available in our "Faroese Language Models with Pronunciations" (Hernández Mena et al., 2022), discussed in further sections.

The recipe is available on Clarin.is${}^{4}$ under a Creative Commons licence (CC BY 4.0).

### 3.4 PocketSphinx Model

Sphinx is an old speech recognition system based on Hidden Markov Models, developed by Carnegie Mellon University in the late 80's (Lee et al., 1990). Over time, progressive versions of Sphinx have been released, up to version 4. At some point, version 2 turned into PocketSphinx (Huggins-Daines et al., 2006). PocketSphinx was intended to be a lighter and faster version of Sphinx, but nowadays it has become the main version that can be used in real-time mode, even on ARM processors. PocketSphinx has long ceased to be a suitable system for research, but it nevertheless still has an active community of users who choose it as a real-time speech recognition system on devices with limited computing power, such as the Raspberry Pi (Upton and Halfacree, 2014) or other ARM computers.

Our PocketSphinx models${}^{5}$, trained with the Ravnursson corpus, are suitable for the PocketSphinx Python library available at the PyPI repository${}^{6}$. With this library it is possible to perform standard and real-time speech recognition as well as forced alignment, and to produce timestamps.

---

${}^{5}$ Available at: https://github.com/CarlosDanielMena/RAVNURSSON_FAROESE_Models_100h

${}^{6}$ See: https://pypi.org/project/pocketsphinx/

${}^{4}$ See: http://hdl.handle.net/20.500.12537/305

---
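A minimal decoding sketch with the classic PocketSphinx Decoder API is shown below. The model paths and audio file are hypothetical, and version 5 of the library exposes a slightly different constructor, so treat this as an illustration rather than the definitive usage.

```python
from pocketsphinx import Decoder

# Point the decoder at the Faroese models (paths are hypothetical).
config = Decoder.default_config()
config.set_string("-hmm", "faroese/acoustic_model")
config.set_string("-lm", "faroese/language_model.lm.bin")
config.set_string("-dict", "faroese/FAROESE_ASR.dic")
decoder = Decoder(config)

# Decode one utterance of raw 16 kHz, 16-bit mono PCM audio.
decoder.start_utt()
with open("utterance_16k_16bit_mono.raw", "rb") as audio:
    decoder.process_raw(audio.read(), False, True)  # full utterance in one call
decoder.end_utt()
print(decoder.hyp().hypstr)
```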
| SAMPA | IPA | SAMPA | IPA | SAMPA | IPA | SAMPA | IPA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| p | pʰ | m | m | e | e | aJ | ai |
| b | b | M | m̥ | E | ɛ | aW | au |
| t | tʰ | n | n | a | a | OJ | ɔi |
| d | d | X | n̥ | y | y | OW | ou |
| k | kʰ | N | ŋ | Y | ʏ | 3W | ʉu |
| g | g | L | l̥ | 2 | ø | EW | eu |
| f | f | l | l | 9 | œ | 9W | œu |
| V | v | r | ɹ | i | i | 9J | œi |
| S | ʃ | j | j | I | ɪ | EA | ea |
| Z | ʂ | W | w | u | u | OA | ɔa |
| tS | tʃ | s | s | U | ʊ | UJ | ʉi |
| dZ | dʒ | 8 | ɵ | O | ɔ | EJ | ei |
| h | h | H | (pre-aspiration) | o | o | | |
Table 4: SAMPA vs. IPA Equivalences.

The version of PocketSphinx that was available when we produced these models was version 4. A few weeks later, version 5 was released, but our models remain compatible with it.

## 4 Pronunciation Models

The pronunciation models discussed in this section are a set of pronouncing dictionaries included in our "Faroese Language Models with Pronunciations" (Hernández Mena et al., 2022), along with a number of language models that are discussed in section 5. Most of the pronunciations come from the original BLARK, but for convenience, we subdivide them into different dictionaries as follows (a sketch of how the combined dictionary can be assembled is shown after the list):

- Central_Faroese.dic: contains pronunciations of the variant of Faroese spoken in the capital.

- East_Faroese.dic: contains pronunciations of the northwest variant of Faroese${}^{7}$.

- Ravnursson_Composite_Words.dic: contains words with hyphens and/or underscores that are present in the Ravnursson Corpus. We keep them in a separate dictionary because these types of composite words can be problematic for a grapheme-to-phoneme (g2p) tool.

- BLARK.dic: contains pronunciations of words that are present in the BLARK but not in any other dictionary of the set.

- FAROESE_ASR.dic: this dictionary is recommended for ASR experiments in Kaldi or any other phoneme-based ASR system. It is the union of Central_Faroese.dic, East_Faroese.dic and Ravnursson_Composite_Words.dic. It is important to clarify that the dictionary can contain words with multiple pronunciations, which is normal in Kaldi-like systems.

---

${}^{7}$ In the most recent dialect classification (Petersen, 2022), the islands in the northwest area are classified as being the same dialect area. However, there is a difference in the pronunciation of the digraph ei between the westernmost islands and the more central and eastern islands of that dialect area. Therefore, the westernmost part of the dialect area is not included in our EAST dictionary, and for that reason we have given this dictionary the name EAST. The idea is that this makes it possible to create WEST, NORTHERN and SOUTHERN dictionaries in the future.

---
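Since FAROESE_ASR.dic is described above as the union of three of the dictionaries, the merge can be sketched as follows. The one-entry-per-line "word, pronunciation" layout is an assumption about the file format, not a documented specification.

```python
# Merge the three source dictionaries into FAROESE_ASR.dic, keeping duplicate
# words with distinct pronunciations (as Kaldi-style lexica allow).
parts = ["Central_Faroese.dic", "East_Faroese.dic", "Ravnursson_Composite_Words.dic"]

entries = set()
for path in parts:
    with open(path, encoding="utf-8") as source:
        for line in source:
            line = line.strip()
            if line:
                entries.add(line)  # assumed: one "word<TAB>pronunciation" per line

with open("FAROESE_ASR.dic", "w", encoding="utf-8") as target:
    for entry in sorted(entries):
        target.write(entry + "\n")
```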
+ +## 5 Language Models + +As it was mentioned in section 4, our "Faroese Language Models with Pronunciations" is a set of n-gram language models of distinct sizes that were created using the Faroese text provided in the BLARK, as it provides with text from newspaper articles, parliamentary speeches, books and + +661 more. The normalization process of that text included to change everything to lowercase, allow only characters belonging to the Faroese alphabet + +664 and removing punctuation marks. + +The resulting text has a length of more than + +666 half million lines of text $({106.3MB}$ approximately). The text was used to create a 3-gram (recommended for decoding) and a 4-gram (recom- + +669 mended for re-scoring) language models with the SRILM toolkit (Stolcke, 2002). Both the 3-gram and 4-gram models come in pruned and unpruned versions. It is also included a 6-gram language model in binary format suitable for ASR experiments with the NeMo toolkit. In particular, this model was created using KenLM (Heafield, 2011). It is important to mention that all the words present in any of the language models are present in the pronouncing dictionaries for the east and central variants of Faroese (see section 4). + +## 6 Results + +Table 5 shows a comparison of the Word Error Rate (WER) obtained with the acoustic models presented in section 3. Results with Pocket- + +686 Sphinx are not included because PocketSphinx is no longer competitive and the models created with it are destined to perform real time recognition in devices with low computing power as explained in + +691 section 3.4. The NeMo results include the WER obtained using the 6-gram language model (LM) presented in section 5 as well as the WER obtained with no language model at all. The Kaldi results include the WER obtained with Hidden Markov Models (HMM) only and the WER obtained with the LSTM network. As it can be seen, the best results are obtained with the WAV2VEC2 model. + +According to our previous experience (Hernan- + +701 dez Mena et al., 2020; Mena et al., 2022), it is remarkable that the WER obtained with NeMo using a language model and the WER obtained with Kaldi using the LSTM are so close to each other despite of the relatively low amount of training data. This fact reveals that the training method described by Huang et al. is really effective. + +On the other hand, Table 6 shows the results obtained with the newest system Whisper (Radford et al., 2022). Whisper is a transformer-based speech recognition system trained with ${680}\mathrm{k}$ hours of transcribed data in multiple languages. Whisper is also a multitask system able to perform multilingual speech recognition as well as speech translation and language identification. According to the original paper (Radford et al., 2022), the training set that Whisper uses for translation includes 46 hours of Faroese. Based on this, we decided to test Whisper in its distinct sizes with no fine-tuning step and using the development and test portions of the Ravnursson corpus. As it can be seen in Table 6, we obtained terribly bad WER results, revealing that Whisper needs to be fine-tuned prior to recognize Faroese data; unfortunately, this is beyond the scope of this paper but it will tackle as further work. + +## 7 Conclusions + +A major development of Faroese ASR is presented in this work. The Ravnursson project has produced a corpus of 109 hours of transcribed speech and acoustic models for WAV2VEC2, NeMo, Kaldi and PocketSphinx have been developed. 
## 6 Results

Table 5 shows a comparison of the Word Error Rates (WER) obtained with the acoustic models presented in section 3. Results with PocketSphinx are not included because PocketSphinx is no longer competitive, and the models created with it are intended for real-time recognition on devices with low computing power, as explained in section 3.4. The NeMo results include the WER obtained using the 6-gram language model (LM) presented in section 5, as well as the WER obtained with no language model at all. The Kaldi results include the WER obtained with Hidden Markov Models (HMM) only and the WER obtained with the LSTM network. As can be seen, the best results are obtained with the WAV2VEC2 model.

In light of our previous experience (Hernandez Mena et al., 2020; Mena et al., 2022), it is remarkable that the WER obtained with NeMo using a language model and the WER obtained with Kaldi using the LSTM are so close to each other, despite the relatively low amount of training data. This suggests that the training method described by Huang et al. (2020) is really effective.

On the other hand, Table 6 shows the results obtained with the newer system Whisper (Radford et al., 2022). Whisper is a transformer-based speech recognition system trained with 680k hours of transcribed data in multiple languages. Whisper is also a multitask system able to perform multilingual speech recognition as well as speech translation and language identification. According to the original paper (Radford et al., 2022), the training set that Whisper uses for translation includes 46 hours of Faroese. Based on this, we decided to test Whisper in its distinct sizes, with no fine-tuning step, using the development and test portions of the Ravnursson corpus. As can be seen in Table 6, we obtained very poor WER results, revealing that Whisper needs to be fine-tuned before it can recognize Faroese; unfortunately, this is beyond the scope of this paper, but it will be tackled as further work.

## 7 Conclusions

A major development of Faroese ASR is presented in this work. The Ravnursson project has produced a corpus of 109 hours of transcribed speech, and acoustic models for WAV2VEC2, NeMo, Kaldi and PocketSphinx have been developed. Furthermore, the project has also produced a set of n-gram language models of distinct sizes and pronunciation dictionaries for Faroese suitable for ASR experimentation. A quality assessment of the acoustic models is shown in Table 5, where the best result of 7.60% WER was achieved by the WAV2VEC2 model. Another interesting result is shown in Table 6, demonstrating that a fine-tuning step is needed for Faroese in the multilingual ASR system Whisper.

Faroese ASR is no longer under-developed thanks to this work. The project has lowered the technological threshold for implementing ASR solutions for Faroese in industry and for studying the Faroese language using ASR as a tool. With all the results made available under open licences, there is no good reason why Faroese ASR should not be included in standard language technology software in the future.
| Corpus Portion | NeMo SP No LM | NeMo SP With LM | Kaldi HMM | Kaldi LSTM | WAV2VEC2 XLSR-53 |
| --- | --- | --- | --- | --- | --- |
| Dev | 20.51% | 13.66% | 20.60% | 12.22% | 5.56% |
| Test | 22.81% | 15.95% | 23.44% | 14.04% | 7.60% |

Table 5: WER Results.
| Whisper Size | Dev WER | Test WER |
| --- | --- | --- |
| Tiny | 113.4% | 116.7% |
| Base | 112.61% | 113.07% |
| Small | 128.05% | 132.64% |
| Medium | 116.34% | 119.3% |
| Large | 105.93% | 110.25% |
Table 6: Whisper WER Results.

## Acknowledgments

The text has to be anonymous. The real acknowledgments will be revealed in the final version of the manuscript.

## References

Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In International Conference on Implementation and Application of Automata, pages 11-23. Springer.

Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449-12460.

Thomas Bilgram and Britt Keson. 1998. The construction of a tagged Danish corpus. In Proceedings of the 11th Nordic Conference of Computational Linguistics (NODALIDA 1998), pages 129-139.

Narayan Choudhary. 2021. LDC-IL: The Indian repository of resources for language technology. Language Resources and Evaluation, 55(3):855-867.

Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. 2020. Unsupervised cross-lingual representation learning for speech recognition. arXiv preprint arXiv:2006.13979.

Iben Nyholm Debess, Sandra Saxov Lamhauge, Annika Simonsen, Peter Juel Henrichsen, Egil Hofgaard, Uni Johannesen, Petur Markus Josenius Hammer, Gunnvør Hoydal Brimnes, Ebba Malena Debess Thomsen, and Beinta Poulsen. 2022. Basic language resource kit 1.0 for Faroese. OpenSLR.org.

Elisabeth D'Halleweyn, Jan Odijk, Lisanne Teunissen, and Catia Cucchiarini. 2006. The Dutch-Flemish HLT programme STEVIN: Essential speech and language technology resources. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06).

David Pérez Fernandez, Doaa Samy, and Juan de Dios Llorens Gonzalez. 2016. Spanish language technologies plan. In International Workshop on Future and Emerging Trends in Language Technology, pages 50-60. Springer.

Talutøkni Foundation. 2019. The project Ravnur. Talutøkni Foundation.

Alex Graves. 2012. Connectionist temporal classification. In Supervised Sequence Labelling with Recurrent Neural Networks, pages 61-93. Springer.

Aditi Sharma Grover, Gerhard B Van Huyssteen, and Marthinus W Pretorius. 2011. The South African human language technology audit. Language Resources and Evaluation, 45(3):271-288.

Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197.

Pétur Helgason and Sjúrður Gullbein. 2005. Færøsk talesyntese: Rapport marts 2005. Nordisk sprogteknologi 2005 - Nordic Language Technology, page 51.

Carlos Daniel Hernandez Mena. 2022a. Acoustic model in Faroese: stt_fo_quartznet15x5_sp_ep163_100h. huggingface.co.

Carlos Daniel Hernandez Mena. 2022b. Acoustic model in Faroese: wav2vec2-large-xlsr-53-faroese-100h. huggingface.co.
Carlos Daniel Hernández Mena. 2022. Kaldi recipe for Faroese. Clarin.is.

Carlos Daniel Hernandez Mena, Albert Gatt, Andrea DeMarco, Claudia Borg, Lonneke van der Plas, Amanda Muscat, and Ian Padovani. 2020. MASRI-HEADSET: A Maltese corpus for speech recognition. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6381-6388, Marseille, France. European Language Resources Association.

Carlos Daniel Hernández Mena, Sandra Saxov Lamhauge, Iben Nyholm Debess, and Annika Simonsen. 2022. Faroese language models with pronunciations. Clarin.is.

Carlos Daniel Hernández Mena and Annika Simonsen. 2022. Ravnursson Faroese speech and transcripts. Clarin.is.

Jocelyn Huang, Oleksii Kuchaiev, Patrick O'Neill, Vitaly Lavrukhin, Jason Li, Adriana Flores, Georg Kucsko, and Boris Ginsburg. 2020. Cross-language transfer learning, continuous learning, and domain adaptation for end-to-end automatic speech recognition. arXiv preprint arXiv:2005.04290.

Lu Huang, Ji Xu, Jiasong Sun, and Yi Yang. 2017. An improved residual LSTM architecture for acoustic modeling. In 2017 2nd International Conference on Computer and Communication Systems (ICCCS), pages 101-105. IEEE.

David Huggins-Daines, Mohit Kumar, Arthur Chan, Alan W Black, Mosur Ravishankar, and Alexander I Rudnicky. 2006. PocketSphinx: A free, real-time continuous speech recognition system for hand-held devices. In 2006 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, volume 1, pages I-I. IEEE.

Biing Hwang Juang and Laurence R Rabiner. 1991. Hidden Markov models for speech recognition. Technometrics, 33(3):251-272.

Elsa Kania, Paul Triolo, and Graham Webster. 2018. Translation: Chinese government outlines AI ambitions through 2020. New America.

Britt Keson. 1998. Vejledning til det danske morfosyntaktisk taggede PAROLE-korpus. Parole report, Det Danske Sprog- og Litteraturselskab (DSL).

Steven Krauwer. 2003. The basic language resource kit (BLARK) as the first milestone for the language resources roadmap. In Proceedings of SPECOM, page 15.

Samuel Kriman, Stanislav Beliaev, Boris Ginsburg, Jocelyn Huang, Oleksii Kuchaiev, Vitaly Lavrukhin, Ryan Leary, Jason Li, and Yang Zhang. 2020. QuartzNet: Deep automatic speech recognition with 1D time-channel separable convolutions. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6124-6128. IEEE.

Oleksii Kuchaiev, Jason Li, Huyen Nguyen, Oleksii Hrinchuk, Ryan Leary, Boris Ginsburg, Samuel Kriman, Stanislav Beliaev, Vitaly Lavrukhin, Jack Cook, et al. 2019. NeMo: a toolkit for building AI applications using neural modules. arXiv preprint arXiv:1909.09577.

K-F Lee, H-W Hon, and Raj Reddy. 1990. An overview of the SPHINX speech recognition system. IEEE Transactions on Acoustics, Speech, and Signal Processing, 38(1):35-45.

Bente Maegaard, Mohammed Atiyya, Khalid Choukri, Steven Krauwer, Chafic Mokbel, and Mustafa Yaseen. 2008. MEDAR: Collaboration between European and Mediterranean Arabic partners to support the development of language technology for Arabic. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08).

Bente Maegaard, Khalid Choukri, Chafik Mokbel, and Mustafa Yaseen. 2005. Language technology for Arabic. NEMLAR, Center for Sprogteknologi, University of Copenhagen.
Bente Maegaard, Steven Krauwer, Khalid Choukri, and Lise Damsgaard Jørgensen. 2006. The BLARK concept and BLARK for Arabic. In LREC, pages 773-778.

Wes McKinney et al. 2010. Data structures for statistical computing in Python. In Proceedings of the 9th Python in Science Conference, pages 51-56. Austin, TX.

Einar Meister, Jaak Vilo, and Neeme Kahusk. 2010. National programme for Estonian language technology: a pre-final summary. In Human Language Technologies - The Baltic Perspective, pages 11-14. IOS Press.

Carlos Daniel Hernandez Mena, David Erik Mollberg, Michal Borský, and Jón Guðnason. 2022. Samrómur children: An Icelandic speech corpus. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 995-1002.

Saad Naeem, Majid Iqbal, Muhammad Saqib, Muhammad Saad, Muhammad Soban Raza, Zaid Ali, Naveed Akhtar, Mirza Omer Beg, Waseem Shahzad, and Muhammad Umair Arshad. 2020. Subspace Gaussian mixture model for continuous Urdu speech recognition using Kaldi. In 2020 14th International Conference on Open Source Systems and Technologies (ICOSST), pages 1-7. IEEE.

Anna Björk Nikulásdóttir, Jón Guðnason, Anton Karl Ingason, Hrafn Loftsson, Eiríkur Rögnvaldsson, Einar Freyr Sigurðsson, and Steinthór Steingrímsson. 2020. Language technology programme for Icelandic 2019-2023. arXiv preprint arXiv:2003.09244.

Hjalmar P Petersen. 2022. Evidence for the modification of dialect classification of modern spoken Faroese. European Journal of Scandinavian Studies, 52(1):43-58.

Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society.

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale weak supervision. arXiv preprint arXiv:2212.04356.

Shakti P Rath, Daniel Povey, Karel Veselý, and Jan Cernocký. 2013. Improved feature processing for deep neural networks. In Interspeech, pages 109-113.

Georg Rehm, Katrin Marheinecke, Stefanie Hegele, Stelios Piperidis, Kalina Bontcheva, Jan Hajič, Khalid Choukri, Andrejs Vasiljevs, Gerhard Backfried, Christoph Prinz, et al. 2020. The European language technology landscape in 2020: Language-centric and human-centric AI for cross-cultural communication in multilingual Europe. arXiv preprint arXiv:2003.13833.

Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. arXiv preprint arXiv:1904.05862.

Annika Simonsen, Sandra Saxov Lamhauge, Iben Nyholm Debess, and Peter Juel Henrichsen. 2022. Creating a basic language resource kit for Faroese. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4637-4643.

Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit.
Eben Upton and Gareth Halfacree. 2014. Raspberry Pi user guide. John Wiley & Sons.

Om Vikas. 2001. Language technology development in india. Ministry of Information Technology.

\ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..8ebe573cfdb92c6b073daf7bde6503f04ff07fd2 --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/Vzp2aRidnh/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,750 @@

§ ASR LANGUAGE RESOURCES FOR FAROESE

Anonymous Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymouser Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymousest Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

§ ABSTRACT

The aim of this work is to present a set of novel Faroese language resources for Automatic Speech Recognition: an ASR corpus comprising 109 hours of transcribed speech; acoustic models for WAV2VEC2, NVIDIA NeMo, Kaldi and PocketSphinx; a set of n-gram language models; and pronunciation dictionaries covering two variants of Faroese. We also compare the distinct acoustic models presented here. All the resources described in this document are publicly available under Creative Commons licences.

§ 1 INTRODUCTION

As the digital world becomes increasingly prominent and omnipresent in most human activities, the need for more and better language technologies grows more pressing. For this reason, more and more governments are investing in the development of all kinds of linguistic resources that allow their citizens to take part in the new digital era, with all the benefits it entails. Language technology initiatives in the main regions of the world, such as Europe (Rehm et al., 2020; Nikulásdóttir et al., 2020; Meister et al., 2010; D'Halleweyn et al., 2006), India (Vikas, 2001; Choudhary, 2021), Africa (Grover et al., 2011), China (Kania et al., 2018), Saudi Arabia (Maegaard et al., 2008, 2005) and the Spanish-speaking countries (Fernandez et al., 2016), attest to how important language technologies have become in recent times.

In synchrony with the developments mentioned above, it is time to discuss the efforts made to develop the Faroese language in the digital sphere. The most recent initiative in this regard is the Ravnur Project, founded in the Faroe Islands. Thanks to the resources generated and shared by Ravnur, it has been possible to develop all the language resources presented in this document.

§ 1.1 FAROESE

The Faroe Islands are a group of small islands in the North Atlantic, roughly halfway between Scotland, Iceland and Norway.
The islands are an autonomous territory of the Kingdom of Denmark with Faroese as the official language, spoken by around 54,000 people. There are four main dialect areas in the Faroe Islands: north, northwest, central and southern (Petersen, 2022). The Faroe Islands are a bilingual country with Danish as the second official language. While many native speakers of Faroese use Danish for university education or employment in Denmark, Faroese is spoken as a first language by most of the population and is used in all domains in the Faroe Islands, e.g. education, the public sector and the church. The first and, to date, only Faroese speech synthesis system was created in 2005 (Helgason and Gullbein, 2005) by combined efforts of researchers at the University of Stockholm and the University of the Faroe Islands, and is used by the visually impaired community. Currently, there is substantial demand for Faroese ASR solutions, needed by the deaf, visually impaired and dyslexic communities, and also by the general public, who wish to use their mother tongue when interacting with technology.

§ 1.2 THE RAVNUR PROJECT

The Faroese ASR research project Ravnur was assembled in 2019 (Foundation, 2019). The aim of the project was to create open-source resources that could be used to build automatic speech recognition (ASR) systems for Faroese. These resources are also useful for creating other types of language technologies and for linguistic research. The project was funded by public and private initiators and investors, including the Faroese government. The development team consisted of a project leader, a technical leader, three native-speaking junior linguists, an IT assistant, five university student assistants, and external advisors. The project concluded in the summer of 2022 with the publication of the Basic Language Resource Kit for Faroese (BLARK) (Simonsen et al., 2022; Debess et al., 2022).

§ 1.3 BASIC LANGUAGE RESOURCE KIT (BLARK) FOR FAROESE

A BLARK is defined as the minimal set of language resources needed to create language and speech technology for a language (Krauwer, 2003; Maegaard et al., 2006). A BLARK is ideally language independent, but because languages have different requirements, its contents may vary in some respects from language to language.

As Ravnur was an ASR project, the focus was on collecting good-quality recordings of Faroese and creating a transcription corpus and a pronunciation dictionary. During the course of the project, Ravnur collected 135 hours of recordings of 433 speakers in total (249 female and 184 male) reading text of various genres, such as news, blogs, Wikipedia, law texts, GPS commands and word lists. The participants self-reported their gender, native language, dialect and age, the latter ranging from 15 to 83 years. The recordings were made on TASCAM DR-40 Linear PCM audio recorders using the built-in stereo microphones, in 16-bit WAVE at a sample rate of 48 kHz. All recordings have been manually orthographically transcribed, and part of the speech corpus has also been phonetically transcribed. The transcriptions were made by the university student assistants and the three Faroese linguists working on the project. All words that occur in the recordings were put in a pronunciation dictionary.
The dictionary includes phonetic transcriptions written in SAMPA and PAROLE PoS tags (Bilgram and Keson, 1998; Keson, 1998)${}^{1}$.

As can be seen, the BLARK developed by Ravnur is the starting point for the novel machine learning models presented in this work.

§ 2 THE RAVNURSSON CORPUS

Ravnursson${}^{2}$ (Hernández Mena and Simonsen, 2022) is an ASR corpus with a length of 109 hours, extracted from the BLARK described in Section 1.3. Unlike the original BLARK, Ravnursson contains only the speech files along with their respective transcriptions. The main characteristics of the corpus are the following:

 * The audio files are distributed in FLAC format at 16 kHz, 16-bit, mono.

 * The corpus contains 71,949 speech files from 433 speakers.

 * The corpus is split into train, dev, and test portions. The lengths of the portions are: train = 100h08m, dev = 4h30m, test = 4h30m.

 * The development and test portions each have exactly 10 male and 10 female speakers, and both portions have exactly the same size in hours.

 * Due to the limited number of prompts to read, only 39,945 of the 71,949 prompts in the whole corpus are unique. In other words, 44.48% of the prompts in the corpus are repeated at least once.

 * Despite the repeated prompts, the development and test portions do not share speakers with each other or with the training set.

§ 2.1 ANALYSIS OF THE REPEATED PROMPTS

As the number of reading prompts was limited during the recording process, a common situation in the Ravnursson corpus is that one prompt is read by more than one speaker. This is relevant because it is common practice in ASR to create a language model using the prompts found in the train portion of the corpus. That is not recommended for the Ravnursson corpus, as several prompts are shared across all the portions, which would introduce a significant bias into the language modeling task.

Table 1 shows statistics on the repeated prompts across all portions of the corpus. The table is read as follows: the first row indicates that there is a total of 71,949 reading prompts in the whole corpus; 39,945 of these are unique and 32,004 are repetitions. Therefore, a total of 44.48% of the prompts in the whole corpus are repeated at least once. The same applies to the rest of the rows in Table 1.

| Corpus Portion | Total Prompts | Unique Prompts | Repeated Prompts | Repeated % |
| --- | --- | --- | --- | --- |
| All | 71,949 | 39,945 | 32,004 | 44.48% |
| Train | 65,616 | 38,646 | 26,970 | 41.1% |
| Test | 3,002 | 2,887 | 115 | 3.83% |
| Dev | 3,331 | 3,302 | 29 | 0.87% |

Table 1: Analysis of Repeated Prompts.

---

${}^{1}$ Both the Faroese SAMPA alphabet (sometimes called FARSAMPA) and the PAROLE PoS tags were created by Ravnur for the BLARK.

${}^{2}$ The name Ravnursson comes from Ravnur (a tribute to the Ravnur Project) and the suffix "son", which in Icelandic means "son of". Therefore, the name "Ravnursson" means "the (Icelandic) son of Ravnur". The double "ss" is just for aesthetics.

---
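The counts in Table 1 can be recomputed directly from the corpus metadata described in Section 2.3. A minimal sketch follows; the filename metadata.tsv is an assumption, as the release may name the file differently:

```python
import pandas as pd

# Sketch: recompute the counts in Table 1 from the corpus metadata.
# Assumes the TSV described in Section 2.3, with a "status" column
# (train/dev/test) and a "sentence_norm" column (the normalized prompt).
meta = pd.read_csv("metadata.tsv", sep="\t")

def prompt_stats(df):
    total = len(df)
    unique = df["sentence_norm"].nunique()
    repeated = total - unique  # occurrences that duplicate an earlier prompt
    return total, unique, repeated, round(100.0 * repeated / total, 2)

print("All:", prompt_stats(meta))
for portion in ("train", "test", "dev"):
    print(portion, prompt_stats(meta[meta["status"] == portion]))
```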
§ 2.2 CORPUS ORGANIZATION

The "speech" directory contains all the speech files of the corpus, divided into three directories: train, dev and test. The train portion is subdivided into three types of recordings: RDATA1O, RDATA1OP and RDATA2; this reflects the organization of the recordings in the original BLARK, where the recordings are divided into Rdata1 and Rdata2.

One main difference between Rdata1 and Rdata2 is that the reading environment for Rdata2 was controlled by a software tool called "PushPrompt", which is included in the original BLARK (Simonsen et al., 2022). Another difference is that Rdata1 contains some transcriptions labelled at the phoneme level. The audio files in the speech directory of the Ravnursson corpus are accordingly divided into the folders RDATA1O, where "O" stands for "orthographic", and RDATA1OP, where "O" stands for orthographic and "P" for phonetic. These categories are a remnant of the original BLARK's organization; they do not imply that the Ravnursson corpus itself comes with transcriptions at the phonetic level. The dev and test portions come only from Rdata2, which has no phoneme-level labels in the original BLARK.

§ 2.3 THE METADATA FILE

The metadata file is a tab-separated values (TSV) file containing all the relevant information about the corpus. The file can be read using the Pandas library (McKinney et al., 2010) in Python and comprises the following 12 columns:

1. id: The filename without the ".flac" extension.

2. speaker_id: The filename without the segment number.

3. filename: Full filename including the ".flac" extension.

4. sentence_norm: The normalized transcription: no punctuation marks, no digits, lowercase letters, a single space between words.

5. gender: The gender of the speaker: male or female.

6. age: The age range of the speaker: 15-35, 36-60 or 61+ years old.

7. native_language: "Faroese" in all cases.

8. dialect: The speaker's dialect.

9. created_at: The date when the audio file was recorded.

10. duration: Duration of the speech file in seconds.

11. sample_rate: 16 kHz in all cases.

12. status: The corpus portion: train, test or dev.
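For illustration, the sketch below loads the file with Pandas and summarizes hours and speakers per portion; as above, the filename metadata.tsv is an assumption:

```python
import pandas as pd

# Sketch: summarize the corpus per portion using the metadata columns
# defined above (status, duration, speaker_id, gender).
meta = pd.read_csv("metadata.tsv", sep="\t")

summary = pd.DataFrame({
    "hours": meta.groupby("status")["duration"].sum() / 3600.0,
    "speakers": meta.groupby("status")["speaker_id"].nunique(),
})
print(summary.round(2))

# The dev portion should contain exactly 10 male and 10 female speakers.
dev_speakers = meta[meta["status"] == "dev"].drop_duplicates("speaker_id")
print(dev_speakers["gender"].value_counts())
```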
§ 2.4 CODIFICATION OF THE AUDIO FILENAMES

In the Ravnursson corpus, the filename of each audio file encodes relevant information about the respective speech file. Table 2 shows a typical audio filename broken down into the eight fields of information it encodes.

Example filename: MEY01_040319_rok0_0009.flac

| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M | E | Y | 01 | 040319 | rok0 | 0009 | .flac |

Table 2: Audio Filename Format.

The information encoded in the filename is as follows:

1. Gender of the Speaker: M for male or K for female.

2. Dialect Group: U for Suðuroy, A for Sandoy, S for Suðurstreymoy, E for Norðurstreymoy/Eysturoy (excluding Eiði, Gjógv and Funningur), V for Vágar and N for Norðuroyggjar (including Eiði, Gjógv and Funningur).

3. Age Group: Y for "Younger", between 15 and 35 years old; M for "Middle-aged", between 36 and 60 years old; and E for "Elderly", 61 years old or older.

4. Number of Speaker in a Group: a two-digit number starting at 01, 02, 03, etc. The first speaker in a group with the same gender, dialect group and age group (e.g. MEY) gets the number 01. The next speaker in the same group gets the number 02 (and their ID is therefore MEY02).

5. Date: The date when the speech was recorded (day/month/year).

6. Type of reading material: This code is only found in speech files under RDATA1O and RDATA1OP. For more information about the types of reading material, please see the documentation of the original BLARK and its directory "readingtexts_1.0".

7. Segment Number: In the original BLARK, each recording session is distributed as one audio file per speaker, which can be very long from the ASR perspective. The audio files are therefore subdivided into segments of around 10 seconds to fit most modern ASR engines. The numbering is continuous for each speaker. The only exceptions are the files MUY01_180519_set4_0004 and MUY02_190120_eind2_0007, which we detected to be empty and removed.

8. File extension: The corpus is distributed in FLAC format.
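As an illustration of this scheme, the following sketch decodes a filename into its eight fields. The regular expression mirrors Table 2 and assumes the reading-material code is present; files without that code would need a slightly different pattern:

```python
import re

# Sketch: parse a Ravnursson filename into the fields listed above.
PATTERN = re.compile(
    r"(?P<gender>[MK])(?P<dialect>[UASEVN])(?P<age>[YME])(?P<speaker_no>\d{2})"
    r"_(?P<date>\d{6})_(?P<material>[^_]+)_(?P<segment>\d{4})\.flac"
)

def parse_filename(name):
    match = PATTERN.fullmatch(name)
    if match is None:
        raise ValueError(f"unexpected filename: {name}")
    return match.groupdict()

print(parse_filename("MEY01_040319_rok0_0009.flac"))
# {'gender': 'M', 'dialect': 'E', 'age': 'Y', 'speaker_no': '01',
#  'date': '040319', 'material': 'rok0', 'segment': '0009'}
```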
§ 3 ACOUSTIC MODELS

The development of the Ravnursson corpus allowed us to create acoustic models in four different ASR systems: WAV2VEC2, NeMo, Kaldi and PocketSphinx. In this section we discuss how we created each of them.

§ 3.1 WAV2VEC2 MODEL

WAV2VEC, released in 2019, is a convolutional neural network that takes raw audio as input and computes a general representation that can be fed to a speech recognition system (Schneider et al., 2019). In 2020, a second version, WAV2VEC2 (Baevski et al., 2020), was released. Based on WAV2VEC2, XLSR-53 (Conneau et al., 2020) was also released in 2020. XLSR-53 is an open-source model trained on more than 50k hours of unlabelled speech in 53 languages. It can be used to create acoustic models in any language through a fine-tuning step.

Using XLSR-53 as a starting point, we created an acoustic model for Faroese (Hernandez Mena, 2022b), which is available under a Creative Commons licence (CC BY 4.0). The fine-tuning process for this model lasted 30 epochs.

§ 3.2 NEMO MODEL

NeMo (Neural Modules) is a Python toolkit developed by NVIDIA for creating AI applications. It comes with extendable collections of pre-built modules for automatic speech recognition and natural language processing (Kuchaiev et al., 2019). One of the NeMo modules suitable for speech recognition is Quartznet (Kriman et al., 2020), a convolutional model trained with Connectionist Temporal Classification (Graves, 2012), or CTC for short.

To train an ASR model for Faroese in NeMo, we used the public checkpoint "QuartzNet15x5Base-En.nemo"${}^{3}$ as a starting point. This model was trained on more than 3k hours of English data in a Quartznet architecture for 600 epochs. Following the work of Huang et al., we fine-tuned the checkpoint on the data of the Ravnursson corpus for 236 epochs, obtaining a first checkpoint able to recognize Faroese. Then, we augmented the initial 100 hours of the training portion of the Ravnursson corpus to 300 hours through speed perturbation using two speed rates: 0.9 and 1.1. Finally, we fine-tuned our initial Faroese checkpoint on the augmented data for 163 epochs to obtain the final model (Hernandez Mena, 2022a), which is available under a Creative Commons licence (CC BY 4.0).

---

${}^{3}$ Available at: https://catalog.ngc.nvidia.com/orgs/nvidia/models/nemospeechmodels/files

---
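The perturbation itself is plain resampling. A minimal sketch with librosa and soundfile follows; our own pipeline may have used different tooling, e.g. sox, but the effect is the same:

```python
import librosa
import soundfile as sf

# Sketch: speed perturbation at rates 0.9 and 1.1, which together with the
# original data triples the 100 training hours to roughly 300 hours.
def speed_perturb(in_path, out_path, rate, sr=16000):
    y, _ = librosa.load(in_path, sr=sr)
    # Resampling while keeping the nominal sample rate scales duration by
    # 1/rate and shifts pitch accordingly, like "sox speed".
    y_perturbed = librosa.resample(y, orig_sr=sr, target_sr=int(sr / rate))
    sf.write(out_path, y_perturbed, sr)

for rate in (0.9, 1.1):
    speed_perturb("MEY01_040319_rok0_0009.flac", f"sp{rate}.flac", rate)
```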
§ 3.3 KALDI MODEL

Kaldi (Povey et al., 2011), released in 2011, is a well-established speech recognition toolkit written in C++, which builds on distinct paradigms such as finite-state transducers (Allauzen et al., 2007), Hidden Markov Models (Juang and Rabiner, 1991) and Gaussian Mixture Models (Naeem et al., 2020), as well as neural networks (Rath et al., 2013).

Our "Kaldi Recipe for Faroese" (Hernández Mena, 2022) was created using the Ravnursson corpus as training data. The recipe produces models based on Hidden Markov Models (HMMs) as well as neural networks; specifically, the neural network is an LSTM, or "Long Short-Term Memory" network (Huang et al., 2017). The recipe requires a 3-gram language model (LM) for decoding, a 4-gram LM for re-scoring and a pronouncing dictionary; these elements are available in our "Faroese Language Models with Pronunciations" (Hernández Mena et al., 2022), discussed in the following sections.

The recipe is available on Clarin.is${}^{4}$ under a Creative Commons licence (CC BY 4.0).

§ 3.4 POCKETSPHINX MODEL

Sphinx is a speech recognition system based on Hidden Markov Models, developed at Carnegie Mellon University in the late 1980s (Lee et al., 1990). Over time, progressive versions of Sphinx have been released, up to version 4. At some point, version 2 turned into PocketSphinx (Huggins-Daines et al., 2006). PocketSphinx was conceived as a lighter and faster version of Sphinx, and it has become the main version used in real-time mode, even on ARM processors. PocketSphinx has long ceased to be a state-of-the-art system for research, but it still has an active community of users who choose it for real-time speech recognition on devices with limited computing power, such as the Raspberry Pi (Upton and Halfacree, 2014) or other ARM computers.

Our PocketSphinx models${}^{5}$, trained with the Ravnursson corpus, are suitable for the PocketSphinx Python library available in the PyPI repository${}^{6}$. With this library it is possible to perform standard and real-time speech recognition as well as forced alignment, and to produce timestamps. The version of PocketSphinx available when we produced these models was version 4; a few weeks later version 5 was released, but our models remain compatible.

---

${}^{4}$ See: http://hdl.handle.net/20.500.12537/305

${}^{5}$ Available at: https://github.com/CarlosDanielMena/RAVNURSSON_FAROESE_Models_100h

${}^{6}$ See: https://pypi.org/project/pocketsphinx/

---
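A decoding sketch with the PocketSphinx Python library is shown below. The acoustic model, language model and dictionary paths are placeholders for the files in our release, and the exact constructor arguments can vary between PocketSphinx versions:

```python
from pocketsphinx import Decoder

# Sketch: offline decoding of 16 kHz, 16-bit mono PCM audio with the
# Faroese PocketSphinx model (placeholder paths).
decoder = Decoder(hmm="faroese_am", lm="faroese.lm", dict="faroese.dic")

with open("utterance.raw", "rb") as f:
    decoder.start_utt()
    decoder.process_raw(f.read(), False, True)  # no_search=False, full_utt=True
    decoder.end_utt()

hyp = decoder.hyp()
print(hyp.hypstr if hyp else "<no hypothesis>")
```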
§ 4 PRONUNCIATION MODELS

The pronunciation models discussed in this section are a set of pronouncing dictionaries included in our "Faroese Language Models with Pronunciations" (Hernández Mena et al., 2022), along with a number of language models that are discussed in Section 5. Most of the pronunciations come from the original BLARK, but for convenience we subdivide them into different dictionaries as follows:

 * Central_Faroese.dic: Contains pronunciations for the variant of Faroese spoken in the capital.

 * East_Faroese.dic: Contains pronunciations for the northwest variant of Faroese${}^{7}$.

 * Ravnursson_Composite_Words.dic: Contains words with hyphens and/or underscores that are present in the Ravnursson corpus. We keep them in a separate dictionary because this type of composite word can be problematic for a grapheme-to-phoneme (g2p) tool.

 * BLARK.dic: Contains pronunciations of words that are present in the BLARK but not in any other dictionary of the set.

 * FAROESE_ASR.dic: This dictionary is recommended for ASR experiments in Kaldi or any other phoneme-based ASR system. It is the union of Central_Faroese.dic, East_Faroese.dic and Ravnursson_Composite_Words.dic. Note that the dictionary can contain words with multiple pronunciations, which is normal in Kaldi-like systems.

§ 4.1 PHONEME SETS OF DICTIONARIES

Table 3 shows the phonetic repertoire of Faroese using 42 SAMPA symbols. Each of these corresponds to an individual phoneme included in the pronouncing dictionaries described in Section 4, except for the vowel /3/, which only occurs in diphthongs. The phonetic repertoire of Faroese includes the following 12 diphthongs: EA, OA, UJ, EJ, aJ, aW, OJ, OW, 3W, EW, 9W and 9J. Summing the 41 individual phonemes in Table 3, plus the 12 diphthongs, plus seven phonemes with pre-aspiration (Hb, Hd, HdZ, Hg, Hp, Ht, HtS), we arrive at a total of 60 phonemes. This is the list of 60 phonemes included in the dictionaries presented in Section 4. For the equivalences between our SAMPA symbols and the IPA phonemes, see Table 4.

Table 3: Phonetic Repertoire of Faroese (consonants arranged by manner and place of articulation; vowels by height and frontness).

Table 4: SAMPA vs. IPA Equivalences.

---

${}^{7}$ In the most recent dialect classification (Petersen, 2022), the islands in the northwest area are classified as a single dialect area. However, the pronunciation of the digraph "ei" differs between the westernmost islands and the more central and eastern islands of that dialect area. The westernmost part of the dialect area is therefore not included in our EAST dictionary, which is why we have given it the name EAST; this makes it possible to add WEST, NORTHERN and SOUTHERN dictionaries in the future.

---

§ 5 LANGUAGE MODELS

As mentioned in Section 4, our "Faroese Language Models with Pronunciations" includes a set of n-gram language models of distinct sizes, created from the Faroese text provided in the BLARK, which contains text from newspaper articles, parliamentary speeches, books and more. The normalization of this text included lowercasing everything, allowing only characters belonging to the Faroese alphabet, and removing punctuation marks.

The resulting text is more than half a million lines long (approximately 106.3 MB). It was used to create a 3-gram language model (recommended for decoding) and a 4-gram language model (recommended for re-scoring) with the SRILM toolkit (Stolcke, 2002). Both the 3-gram and 4-gram models come in pruned and unpruned versions. A 6-gram language model in binary format, suitable for ASR experiments with the NeMo toolkit, is also included; this model was created using KenLM (Heafield, 2011). Note that all words present in any of the language models are also present in the pronouncing dictionaries for the east and central variants of Faroese (see Section 4).
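As an illustration, the resulting models can be queried from Python with the kenlm package; the filename below is a placeholder, and the example sentence is invented:

```python
import kenlm

# Sketch: score normalized Faroese text (lowercase, no punctuation) with
# one of the released n-gram models; "faroese_3gram.arpa" is a placeholder.
model = kenlm.Model("faroese_3gram.arpa")

sentence = "hetta er ein setningur"  # invented example text
print(model.score(sentence, bos=True, eos=True))  # log10 probability
print(model.perplexity(sentence))
```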
§ 6 RESULTS

Table 5 shows a comparison of the word error rates (WER) obtained with the acoustic models presented in Section 3. Results for PocketSphinx are not included because PocketSphinx is no longer competitive, and the models created with it are intended for real-time recognition on devices with low computing power, as explained in Section 3.4. The NeMo results include the WER obtained using the 6-gram language model (LM) presented in Section 5 as well as the WER obtained with no language model at all. The Kaldi results include the WER obtained with Hidden Markov Models (HMM) only and the WER obtained with the LSTM network. As can be seen, the best results are obtained with the WAV2VEC2 model.

In light of our previous experience (Hernandez Mena et al., 2020; Mena et al., 2022), it is remarkable that the WER obtained with NeMo using a language model and the WER obtained with Kaldi using the LSTM are so close to each other despite the relatively small amount of training data. This suggests that the training method described by Huang et al. is very effective.

| Corpus Portion | NeMo SP No LM | NeMo SP With LM | Kaldi HMM | Kaldi LSTM | WAV2VEC2 XLSR-53 |
| --- | --- | --- | --- | --- | --- |
| Dev | 20.51% | 13.66% | 20.60% | 12.22% | 5.56% |
| Test | 22.81% | 15.95% | 23.44% | 14.04% | 7.60% |

Table 5: WER Results.

On the other hand, Table 6 shows the results obtained with the newer system Whisper (Radford et al., 2022). Whisper is a transformer-based speech recognition system trained on 680k hours of transcribed data in multiple languages. Whisper is also a multitask system, able to perform multilingual speech recognition as well as speech translation and language identification. According to the original paper (Radford et al., 2022), the training set Whisper uses for translation includes 46 hours of Faroese. Based on this, we decided to test Whisper in its distinct sizes, with no fine-tuning step, on the development and test portions of the Ravnursson corpus. As can be seen in Table 6, the WER results are very poor, revealing that Whisper needs to be fine-tuned before it can recognize Faroese data; unfortunately, this is beyond the scope of this paper, but it will be tackled in future work.

| Whisper Size | Dev WER | Test WER |
| --- | --- | --- |
| Tiny | 113.4% | 116.7% |
| Base | 112.61% | 113.07% |
| Small | 128.05% | 132.64% |
| Medium | 116.34% | 119.3% |
| Large | 105.93% | 110.25% |

Table 6: Whisper WER Results.
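For reference, the zero-shot setup behind Table 6 can be reproduced along the following lines with the openai-whisper package; this is a sketch, and the decoding options may differ from our exact runs:

```python
import whisper

# Sketch: zero-shot Whisper decoding of a Faroese file, as in Table 6.
# Faroese ("fo") is in Whisper's language inventory, but without
# fine-tuning the transcriptions are very poor, as the WER figures show.
model = whisper.load_model("large")
result = model.transcribe("utterance.flac", language="fo")
print(result["text"])
```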
§ 7 CONCLUSIONS

This work presents a major development of Faroese ASR. A corpus of 109 hours of transcribed speech has been produced, and acoustic models for WAV2VEC2, NeMo, Kaldi and PocketSphinx have been developed. Furthermore, the project has also produced a set of n-gram language models of distinct sizes and pronunciation dictionaries for Faroese suitable for ASR experimentation. A quality assessment of the acoustic models is shown in Table 5, where the best result of 7.60% WER was achieved by the WAV2VEC2 model. Another interesting result is shown in Table 6, demonstrating that a fine-tuning step is needed before the multilingual ASR system Whisper can handle Faroese.

Faroese ASR is no longer under-developed thanks to this work. The project has lowered the technological threshold for implementing ASR solutions for Faroese in industry and for studying the Faroese language using ASR as a tool. With all the results made available under open licences, there is no good reason why Faroese ASR should not be included in standard language technology software in the future.

§ ACKNOWLEDGMENTS

The text has to be anonymous. The real acknowledgments will be revealed in the final version of the manuscript.

\ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..251637d95ada27dab317050086c404ad6977e6b9 --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,747 @@

# Class Explanations: the Role of Content and Function Words

Anonymous Author, Anonymous Author, Anonymous Author, Anonymous Author

Affiliation / Address line 1

Affiliation / Address line 2

\{email\}@domain

## Abstract

We address two understudied areas related to explainability for neural text models. First, class explanations: what features are descriptive across a class, rather than explaining single input instances? Second, the type of features used for providing explanations: does the explanation involve statistical patterns of word usage or the presence of domain-specific content words? Here, we present a method to extract class explanations, together with strategies to differentiate between two types of explanations: domain-specific signals and statistical variations in the frequencies of common words. We demonstrate our method in a case study analysing transcripts of political debates in the Swedish Riksdag.

## 1 Introduction

Recent developments in NLP are often the result of ever more complex model architectures and an increasing number of model parameters. Yet, if we want to rely on these models, we should be able to review the similarities and dissimilarities between the model and human judgement. Explainability frameworks can do this by highlighting what the model has learnt to base its decisions on. Are these coincidental statistical patterns or something that a human would use as an explanation? Madsen et al. (2022) argue that explanations should ideally be both functionally-grounded (true to the underlying machine learning model) and human-grounded (useful to a human).

In this article, we propose a new method for extracting class explanations from text classifiers. In addition, we show a new way to distinguish between two types of features that appear in those explanations, namely content words and subtle statistical differences in the frequencies of function words. Our method aggregates explanations for individual data points (here provided by LIME (Ribeiro et al., 2016)), followed by a sorting stage that separates the different kinds of features.

Our work is in part motivated by use cases of machine learning for texts in the social sciences. In this field, explainability methods are relevant both as checks against human expert knowledge and as a tool for bias detection. As a case study, we use our method to explain the decisions of a binary classifier trained to identify whether speeches in the Swedish Riksdag belong to either of the two main parties, the Moderates (M) or the Social Democrats (S). We find that our method can separate class explainability features, and that data points whose explanations contain primarily domain-specific content words are more often classified correctly.
## 2 Literature Review

As a result of the extensive work on explainability methods, a complex typology of approaches exists (see Danilevsky et al. (2020) or Madsen et al. (2022) for a survey). One important distinction is between global and local methods. On the one hand, global methods aim to explain some general behaviour of a model, such as class explanations, which summarise the model with respect to a certain class. On the other hand, local methods aim to explain why the model assigned a single data point to a particular class.

Of the two, local methods receive the most attention (Nauta et al., 2022). Three popular approaches are gradient-based methods (Baehrens et al., 2010), Shapley values (Shapley, 1952), and LIME. Gradient-based approaches use the model's weights and take the gradient with respect to the input; as such, they measure the change in the outcome given some small change in the input. Yet, they are only an accurate reflection of the model if that model is linear (Li et al., 2016), which is not the case for most deep NLP architectures. While Shapley values have many theoretical guarantees that make them a faithful interpretation (they represent the true contributions of the features (Ethayarajh and Jurafsky, 2021)), their implementations (e.g. via attention flows for transformer-based architectures (Abnar and Zuidema, 2020)) tend to be computationally expensive, which is problematic in the current setting, where we aggregate a substantial number of individual explanations. Finally, LIME has an advantage over gradient-based approaches in that it is model agnostic: LIME attempts to explain a trained classifier independent of its architecture (Ribeiro et al., 2016).

### 2.1 Class explanations

The area of global class explanations is so far less studied than that of local explanations. One approach to providing a global understanding of a model is to use behavioural or structural probes (Tenney et al., 2019; Hewitt and Manning, 2019; Wallace et al., 2019). Probing is a technique where a supervised model (a probe) is used to determine what is encoded in the internal representations of the studied model. This is done by training the probe to predict based on the frozen representations of the black-box model. If the probe performs well on the task, that indicates the required information is well represented by the black-box model; if the probe is unable to achieve high accuracy, that is taken to signify that the studied patterns are not learned by the black-box model. This has some limitations, for example regarding the complexity of the probe: if the probe is too simple, it may not capture second-order effects; if it is too complex, it may learn the task internally and "discover" things that are in the probe rather than the model (Hewitt and Liang, 2019). More importantly, these methods tend to be applied to the discovery of simple syntactic structures like part-of-speech (POS) tags and syntactic tree structures (Rogers et al., 2020), or to detect the presence of specific knowledge (Petroni et al., 2019). Other attempts in this area leverage local methods together with a strategy for aggregating and presenting the results to the user.
An example of such an approach is SP-LIME (Ribeiro et al., 2016), which aggregates individual LIME explanations with a greedy search for data points (texts) that are explained by the most dissimilar sets of features, in order to represent the breadth of the class explanations. The results are presented as ranked text examples with their corresponding explanations, where the number of examples is defined by the user. Due to its focus on features that cover as many input instances as possible, this method tends to overemphasise stop words (see the discussion in Section 6).

### 2.2 Features of Explanations

To a human, not all features learnt by the machine learning model are equally informative. Some signals may come from speech patterns, others from the topic that is discussed and the sentiment, yet others may indicate preferred catchphrases and slogans. There is a distinction between explanations of the model (what a model bases its prediction on) and human explanations (what a human would base their decision on if faced with the same prediction task) (Miller, 2019). Since humans have background knowledge that is not accessible to the model, and the model has the capacity to detect small statistical signals that are beyond human computational capabilities, the sets of features selected by either may differ. This issue can be viewed in terms of the concepts presented in the position paper by Doshi-Velez and Kim (2017) and further discussed by Madsen et al. (2022), namely human-grounded and functionally-grounded explainability. Functionally-grounded explainability is concerned with how well the explanation reflects the model, whereas human-grounded explainability is concerned with producing explanations that are useful to a human. This is also in line with the work of Nauta et al. (2022), who argue for the rigorous evaluation of an explainability method across twelve properties in three categories: content, presentation, and user. The content properties, in particular correctness (faithfulness with respect to the black box), are related to the functionally-grounded approach, whereas the user properties relate to human-grounded explainability: context (how relevant the explanation is to the user), coherence (how accordant the explanation is with prior knowledge), and controllability (how interactive or controllable an explanation is).

In our work, we use function and content words as proxies for functionally-grounded and human-grounded explanations. The term function words is used here in a broader sense than the strict linguistic definition of prepositions, conjunctions, etc. In the setting of parliamentary debates, for example, there is procedural language (e.g. "fru talman" (madam speaker)) that can also act as function words in the domain. A model can learn to detect distributional differences for any word as long as it is correlated with the predicted class, but a human is unlikely to relate to, or understand the cause of, distributional differences of stop words. The difference in how often a group uses the word "also", for example, may not be very informative for a human, even if stop-word distributions point to real speech patterns that distinguish between speakers (Arun et al., 2009a) and have even been linked to the author's gender (Arun et al., 2009b). Human domain knowledge will most likely be captured through domain-specific content words. Being able to confirm the extent of the model's grounding in content words can therefore serve to validate it.
## 3 Method

Our algorithm for computing class explanations consists of four steps: post-hoc instance explanation extraction, aggregation, sorting, and a keyword-in-context search that extracts example texts. The framework is formalized in Algorithm 1. It is similar to SP-LIME, but rather than searching for data points that capture the most diversity among the important features, we propose to work directly with the feature importances and explore ways to summarize and sort them by relevance. The implementation will be linked in the non-anonymous version.

Algorithm 1 Class explainability from instance explanations

---

Require: Binary classifier $f$, data samples $N$
Require: Instance explainability function $g$
Require: Feature scoring function $h$

$W \leftarrow \{\}$ ▷ features and importance scores
$c1 \leftarrow \{\}$ ▷ features explaining class 1
$c2 \leftarrow \{\}$ ▷ features explaining class 2

Step 1 - Instance explanation extraction
for text, true_label $\in N$ do
  if $f(\text{text}) =$ true_label then
    $W \leftarrow W \cup \{g(\text{text}, f)\}$
  end if
end for

Step 2 - Aggregation
for feature, score $\in W$ do
  if score $< 0$ then
    $c1 \leftarrow c1 \cup \{\text{feature}\}$
  else
    $c2 \leftarrow c2 \cup \{\text{feature}\}$
  end if
end for

Step 3 - Sorting
for $c \in \{c1, c2\}$ do
  return $c$ sorted by $h$ score
end for

Step 4 - Keywords in context
for $c \in \{c1, c2\}$ do
  for term $\in$ top $X$ terms in $c$ do
    return all occurrences of term with $n$ words before and after
  end for
end for

---

### 3.1 Step 1: Instance explanation extraction

For a set of held-out data samples $N$, we apply the trained classifier $f$. For the instances where the classifier makes the correct prediction, we extract the list of features and their corresponding saliency with model $g$. This can also be flipped to focus on instances where the model makes incorrect predictions, in order to investigate which patterns or instances are hard to classify. A certainty threshold can also be used to explore only cases where the model is certain, or only borderline cases. Our method aims to be extendable to different model architectures; we therefore require a post-hoc, model-agnostic instance explanation function $g$. For now, we have chosen LIME, but alternative methods can be used as well, as long as they extract features and feature contribution scores that explain an instance. This means we are currently constrained by LIME's limitations and only consider single tokens as features. Since LIME is a surrogate model, there is also some decoupling between the classification model and the explanations. For each correctly classified instance, we extract the top $k$ features (here set to 10). This number can be reduced further in order to limit the number of features that are considered, or extended to include all tokens, in which case the task of limiting the explanation is completely relegated to the sorting step.
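A sketch of Steps 1 and 2 using the LIME text explainer is given below. Here `predict_proba` is assumed to wrap the trained classifier $f$ (a list of texts in, an array of class probabilities out), and `data` is the held-out sample of (text, true_label) pairs:

```python
from lime.lime_text import LimeTextExplainer

# Sketch of Steps 1-2: explain correctly classified instances and
# aggregate the top-10 feature weights per class by sign.
explainer = LimeTextExplainer(class_names=["class1", "class2"])

c1, c2 = {}, {}  # feature -> list of LIME weights
for text, true_label in data:
    if predict_proba([text]).argmax() != true_label:
        continue  # Step 1 keeps only correct predictions
    exp = explainer.explain_instance(text, predict_proba, num_features=10)
    for feature, score in exp.as_list():
        target = c1 if score < 0 else c2  # Step 2: split by sign
        target.setdefault(feature, []).append(score)
```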
### 3.2 Step 2: Aggregation

A feature can contribute either positively or negatively towards the model's prediction. When working with a binary classifier, a feature that contributes negatively towards predicting class 1 is a positively contributing feature for class 2. Therefore, the features collected in the previous step are aggregated into two sets, $c1$ and $c2$, one for each class, based on the sign of their feature scores. Note that these two sets may overlap if the predictive signal lies in the different contexts in which those features appear.

### 3.3 Step 3: Sorting

The resulting sets of features for each class need to be constrained to a feasible size to be interpretable by a human. We propose two approaches to the feature relevance score $h$, normalization and PCA, which prioritize and distinguish the terms along an axis from more domain-specific concepts to more generic stop words.

Normalization. Here, we use the sum of the LIME scores for each feature of the explanation, divided by the number of occurrences of that feature in the validation set. We calculate the feature relevance score $h$ of the $j^{\text{th}}$ feature as $h_j = \frac{1}{m_j}\sum_{i=1}^{N} W_{ij}$, where $N$ is the number of data points in the explained dataset, $m_j$ is the number of occurrences of feature $j$ in the explained set, and $W$ is the explanation matrix containing the local importance of the interpretable components for each instance. This gives higher scores to features identified as more important by LIME, but penalises common words if they do not often contribute to a class prediction. This is in line with the definition of stop words and should target the corpus-specific stop words. We also filter out words that appear in two or fewer documents, as these can be party-specific but may not be useful for generalisation. This threshold can be increased to filter out more of the words that LIME considers predictive.

PCA. The second approach decouples the sorting from the LIME score after the initial aggregation step and instead uses PCA over word embeddings. We found that PCA applied to pre-trained word embeddings tends to separate domain-specific words from function words and other generic terms. A theoretical motivation for this analysis lies in the distributional differences between general text (used for pre-training word embeddings) and domain-specific text (in this case, political debate). We hypothesise that the general embedding model sees the domain-specific terms in sufficiently distinct contexts to embed them in a compact region of the space, with a latent dimension separating them from more common and general terms. This relies on the studied data having a significant amount of domain-specific terminology that is rarer in general text. We expect this to be the case for many applications within the social sciences (e.g. politics), but it can have limitations in lower-level syntactic classification tasks like POS tagging.

To calculate the sorting score, the terms from each set, $c1$ and $c2$, are embedded using a model${}^{1}$ trained on the Swedish CoNLL17 corpus. A PCA is run on each set of words, and the value along the first PCA dimension is used as the sorting score $h$. As in the normalisation approach, words that appear in two or fewer documents are filtered out. This dimension appears to provide a good separation of domain-specific terms.
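Both scores can be sketched in a few lines. Below, `scores` is the feature-to-weights mapping from the aggregation step, `counts` gives each feature's number of occurrences in the explained set, and `embed` is an assumed lookup into the pre-trained embeddings (e.g. a loaded word-vector model):

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of the normalization score h_j = (1/m_j) * sum_i W_ij.
def normalized_lime_score(scores, counts):
    return {f: sum(ws) / counts[f] for f, ws in scores.items()}

# Sketch of the PCA score: each term's coordinate on the first
# principal component of the set's embedding matrix.
def pca_score(features, embed):
    X = np.stack([embed(f) for f in features])
    first_dim = PCA(n_components=1).fit_transform(X)[:, 0]
    return dict(zip(features, first_dim))
```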
### 3.4 Step 4: Keywords in Context

To further increase human interpretability, we also provide context by extracting snippets of text around the top word features produced in Step 3. For each occurrence, we use a simple keyword-in-context search and extract the $n$ words before and after our feature word. This is clearly not feasible or interesting for very frequent words, which further motivates separating the rarer, domain-specific content words from the more common function words.
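The search itself is straightforward; a sketch, where `texts` is assumed to be the collection of validation speeches:

```python
# Sketch of the keyword-in-context search (Step 4): every occurrence of
# `term` with n words of context on each side.
def kwic(term, texts, n=20):
    hits = []
    for text in texts:
        words = text.split()
        for i, word in enumerate(words):
            if word.lower() == term.lower():
                left = " ".join(words[max(0, i - n):i])
                right = " ".join(words[i + 1:i + 1 + n])
                hits.append(f"... {left} [{word}] {right} ...")
    return hits

for hit in kwic("arbetsmarknadspolitik", texts):
    print(hit)
```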
## 4 Data

The dataset used for the case study consists of transcripts of debates in the Swedish Riksdag, sourced from Riksdagens öppna data - Anföranden${}^{2}$. We use a pre-processed version available from Språkbanken${}^{3}$, consisting of debates from 1993 to 2018. For our experiment, texts from the Social Democrat (S) and Moderate (M) parties have been extracted, resulting in 104,842 S and 62,160 M data points (one data point is one speech, which may be part of a longer debate). From these, 100 examples have been sampled for a small-scale human baseline check, where two annotators were asked to determine the party label from the speech texts and were evaluated against the true labels. Since these are debates, references to the opponent are a strong but trivial predictor of party. References to people and political parties have therefore been removed by targeting Swedish political party stems and words tagged as "People_along_political_spectrum" in Språkbanken's tags, based on Swedish FrameNet (Heppin and Gronostaj, 2012). Data points shorter than 50 words have been removed, as manual analysis shows these tend to be entirely procedural and do not carry political sentiment. This is in line with similar cleaning practices used for US congressional debates (Bayram et al., 2019). The data is undersampled to balance the classes and split into train (108,169), test (12,019) and validation (2,000) sets. The validation set is used for the explainability methods.

---

${}^{1}$ http://vectors.nlpl.eu/repository/20/69.zip

${}^{2}$ https://data.riksdagen.se/data/anforanden/

${}^{3}$ https://spraakbanken.gu.se/resurser/rd-anf-1993-2018

---

## 5 Experiments

To test our methodology, we apply it to a BERT classifier trained to predict the party label of a text (Devlin et al., 2019). The classifier is fine-tuned from a pre-trained model for Swedish released by The National Library of Sweden/KBLab and available through the huggingface library${}^{4}$. The model has a 50,325-word vocabulary and a maximum input length of 512 tokens; longer inputs are truncated. As a baseline for investigating class differences and the separability of the data, we use a logistic regression classifier, as this provides easy access to class explanations by simply inspecting the top and bottom scoring internal weights of the model. N-gram spans from 1 to 3, as well as a combination of all of them, have been compared. The number of input features is 50,325, the same as for the pre-trained BERT model.

A small-scale human annotation check on 100 instances shows the two annotators perform with 58 and 56 percent accuracy respectively. A Cohen's kappa of 0.4 indicates this is a hard classification task.

In the interest of space, the sections below contain partial results. The full results are available in an online appendix${}^{5}$.

### 5.1 Baseline

Table 1 summarises the accuracy and F1 scores for the logistic regression classifier. We observe that the best result is achieved with 1-grams, with the inclusion of 2- and 3-grams adding no performance gains. It seems the main part of the distinguishing signal can be picked up by specific words rather than phrases.
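A sketch of this baseline with scikit-learn is shown below; the exact vectorizer settings of our runs are not reported here, so the TF-IDF choice and the feature cap are assumptions meant to mirror the reported setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Sketch: n-gram logistic regression baseline, with the vocabulary capped
# at 50,325 features to match the pre-trained BERT vocabulary size.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 1), max_features=50325),
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_texts, train_labels)   # assumed splits from Section 4
print(baseline.score(test_texts, test_labels))
```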
| n-gram span | #feat | acc | F1 |
| --- | --- | --- | --- |
| 1,1 | 50,325 | 76.94 | 76.80 |
| 2,2 | 50,325 | 73.19 | 73.05 |
| 3,3 | 50,325 | 69.39 | 69.15 |
| 1,3 | 150,975 | 76.93 | 76.80 |
Table 1: Logistic regression classifier performance.

From the internal model weights, we can identify both domain-specific words, e.g. "sjuka" (sick), "arbetslösa" (unemployed), "arbetslinjen" (the employment line, a Moderate catchphrase), and function words, e.g. "det" (the), "också" (also), "synnerhet" (in particular), that can be predictive of the party label. This agrees with our assumption that a model can base its predictions both on statistical differences in stop words and on human concepts, and in doing so it outperforms the human annotators.

### 5.2 BERT

The BERT model${}^{6}$ has an accuracy of 78.44 and an F1 score of 76.66 on the test set, and an accuracy of 79.95 and an F1 score of 78.27 on the validation set, which is only a slight improvement over the logistic regression baseline.

Applying LIME to all validation samples and aggregating the top 10 features for each data point results in a list of 2,043 Moderate and 2,085 Social Democrat terms. Of these, 1,456 Moderate and 1,334 Social Democrat terms appear in more than two documents and are thus candidates for inclusion in the class explanations (this limit can be adjusted by the user).

---

${}^{4}$ https://huggingface.co/KB/bert-base-swedish-cased

${}^{5}$ https://github.com/anonymous-supplementary-materials/NoDaLiDa2023_Appendix

${}^{6}$ With hyperparameters: lr = 5e-6, batch size = 48, steps = 6000.

---
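A sketch of the fine-tuning setup with the transformers library, using the hyperparameters from the footnote; `train_ds` and `eval_ds` are assumed to be datasets tokenized with the loaded tokenizer and carrying label columns:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Sketch: fine-tune the Swedish KB-BERT checkpoint as a binary party
# classifier with the reported hyperparameters (lr 5e-6, batch 48, 6000 steps).
tokenizer = AutoTokenizer.from_pretrained("KB/bert-base-swedish-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "KB/bert-base-swedish-cased", num_labels=2)

args = TrainingArguments(output_dir="party-classifier", learning_rate=5e-6,
                         per_device_train_batch_size=48, max_steps=6000)
Trainer(model=model, args=args, train_dataset=train_ds,
        eval_dataset=eval_ds).train()
```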
**PCA ordering**

| rank | term |
| --- | --- |
| 1 | utgiftsområde (expenditure area) |
| 2 | budgetpropositionen (the budget bill) |
| 3 | jobbskatteavdrag (employment tax credit) |
| 4 | arbetslöshetsförsäkringen (unemployment insurance) |
| 5 | skattehöjningar (tax increases) |
| ... | ... |
| 1454 | högkvalitativa (high quality) |
| 1455 | vackra (beautiful) |
| 1456 | klassiska (classic) |

**Normalised LIME score**

| rank | term |
| --- | --- |
| 1 | vänsterregering (left-wing government) |
| 2 | fattigdomsbekämpning (poverty alleviation) |
| 3 | bidragsberoende (benefits dependency) |
| 4 | fridens (of peace) |
| 5 | arbetsföra (able to work) |
| ... | ... |
| 1454 | som (as) |
| 1455 | ett (one) |
| 1456 | en (one) |
Table 2: Results for the Moderates.

### 5.3 Validation

Tables 2 and 3 show the results of both LIME and PCA for both M and S. In both cases, the methods separate informative terms from generic ones. This is especially the case for the LIME scores, where the lowest-scoring words are all stop words. As for the highest-scoring words, we find that they are all related to taxes and employment. This is understandable, as this is also what makes up the main political left/right dimension in Sweden (Franzmann and Kaiser, 2006; Jolly et al., 2022; Ezrow et al., 2011). Besides, we can identify several references to (groups of) parties and ministers, which we would expect in debates.

While these findings are promising on their own, to be useful for social scientists we need to do more to ensure that our results are valid. In other words, we want to ensure that our method measures what we intend to measure (Carmines and Zeller, 1979). In our case, this is whether a speech is representative of S or M.
**PCA ordering**

| rank | term |
| --- | --- |
| 1 | budgetpropositionen (the budget bill) |
| 2 | arbetsmarknadspolitik (labor market policy) |
| 3 | samlingspartiet [refers to the Moderates] |
| 4 | ungdomsarbetslösheten (youth unemployment) |
| 5 | skattesänkningar (tax cuts) |
| ... | ... |
| 1332 | tillsammans (together) |
| 1333 | u (u) |
| 1334 | dam (lady) |

**Normalised LIME score**

| rank | term |
| --- | --- |
| 1 | överläggningen (the deliberation) |
| 2 | moderatledda (Moderate-led) |
| 3 | kd (abbrev. for the Christian Democrat party) |
| 4 | skattesänkningarna (the tax cuts) |
| 5 | borgarna (the bourgeois [parties to the right]) |
| ... | ... |
| 1332 | har (have) |
| 1333 | av (of) |
| 1334 | för (for) |
Table 3: Results for the Social Democrats.

Looking at how appropriate the terms are, as we did above, is a first step. This is also known as face validity, as we check whether our method "appears to measure" what we want it to measure (Anastasi, 1976, pp. 139-140). Yet, face validity depends on many implicit decisions that vary between contexts and researchers. As such, we should look further if we wish to provide a more satisfactory validation.

One good candidate for this is construct validity (Shadish et al., 2002; Carmines and Zeller, 1979). This refers to the degree to which we can use our results to say something about what we aim to measure. One way to assess this here is to look at the wider context in which the terms used by the algorithm appear. For example, if a term used by the algorithm to assign a speech to S occurs in a context that is characteristic of S, this strengthens our case for construct validity. To see this, we can use keyword-in-context (KWIC), which looks at the $n$ (here we choose 20) words before and after the term that interests us. In Table 4 we show this for one of the terms from the PCA analysis for S: arbetsmarknadspolitik (labour market policy). Here, we see that the contexts of the word indeed refer to policies close to S. In both cases, the term is used to call for more and new measures to regulate the labour market, something indicative of S. Similar examples for the words in Tables 2 and 3 are in the online appendix. As we have implemented KWIC in our algorithm, scholars can easily assess whether the same holds for any of the other terms and in this way better assess the validity.

"... enda åtgärd lösa detta, det behövs många åtgärder. Det handlar om ett gott företagarklimat, om en ny arbetsmarknadspolitik, om ytterligare utbildningssatsningar, om att bygga om - osv. med de förslag till åtgärder som vi ..."

"... single measure solve this, many measures are needed. It's about a good business climate, about a new labour market policy, about further training efforts, about rebuilding - etc. with the proposed measures that we ..."

"... i arbete det finns individer som kommer att behöva särskilt stöd, och då behöver vi ha en bra arbetsmarknadspolitik. Men det är förstås inget egenvärde i att ungdomar som kan få jobb ändå ska vara i en ..."

"... in work there are individuals who will need special support, and then we need to have a good labour market policy. But of course there is no intrinsic value in young people who can get a job still being in a ..."

Table 4: Keywords-in-context for the class-explanation feature labour market policy (arbetsmarknadspolitik) for the Social Democrats.

### 5.4 Explanations and Predictive Accuracy

Returning to individual instance explanations, we also investigate whether the kind of words (domain-specific terms or statistical distributions) occurring in an explanation has any relationship with the accuracy of the model on those data points. We found domain-specific words (here related to politics) along the positive PCA spectrum, while more common, general words had embeddings placing them towards the negative end. We find that data points whose explanation words are predominantly positioned within the positive PCA spectrum (i.e. the sum of the PCA coordinates of the top-ten explanation features is positive) are cases where the model is more accurate.
### 5.4 Explanations and Predictive Accuracy

Returning to individual instance explanations, we also wanted to investigate whether the kind of words (domain-specific or statistical distributions) occurring in an explanation has any relationship with the certainty of the model on those datapoints. We found domain-specific words (here related to politics) along the positive PCA spectrum, while more common, general words had embeddings placing them towards the negative end. We find that datapoints where the explanation words are predominantly positioned within the positive PCA spectrum (the sum of the PCA coordinates of the top-ten explanation features is positive) are cases where the model is more accurate. Compared to datapoints where explanations lie in the negative PCA space, there is an accuracy gain of roughly 10 percentage points (Table 5). Interestingly, this suggests that explanations containing domain-specific, rarer words are correlated with the model's correctness, although the number of datapoints with domain-specific explanations is quite small.
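Concretely, the split reported in Table 5 below can be computed by summing the PCA coordinates of each instance's top-ten explanation features and bucketing by sign. A minimal sketch, assuming `explanations` holds the top-ten LIME feature lists per validation instance and `pca_coord` maps each feature to its first-component coordinate (both names are illustrative):

```python
def accuracy_by_pca_sign(explanations, pca_coord, correct):
    """explanations: list of top-10 feature lists, one per instance;
    pca_coord: dict feature -> first PCA coordinate;
    correct: list of booleans, True where the classifier was right."""
    buckets = {"pos": [0, 0], "neg": [0, 0]}  # [correct, incorrect]
    for feats, ok in zip(explanations, correct):
        s = sum(pca_coord.get(f, 0.0) for f in feats)
        key = "pos" if s > 0 else "neg"
        buckets[key][0 if ok else 1] += 1
    return {k: c / max(c + i, 1) for k, (c, i) in buckets.items()}
```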
| | Correct | Incorrect | Acc |
|---|---:|---:|---:|
| Pos PCA sum | 186 | 25 | 88.15 |
| Neg PCA sum | 1413 | 376 | 78.98 |
Table 5: Classifier performance on the validation set, split based on the sum of the PCA coordinates of the explanation provided by LIME.

## 6 Comparison to SP-LIME

Our method is comparable to SP-LIME, which aggregates individual LIME explanations. SP-LIME consists of three similar steps: post-hoc instance explanation extraction, sorting, and example extraction. In contrast to our proposed scoring functions, SP-LIME calculates the score for feature $j$ as $I_j = \sqrt{\sum_{i=1}^{N} W_{ij}}$, where $N$ is the number of data points in the explained dataset and $W$ is the explanation matrix containing the local importance of the features. Based on this scoring, SP-LIME performs a greedy search to extract the top-scoring data examples that also have the greatest coverage of distinct features. The model explanation therefore takes the form of a set number of text examples with their corresponding instance explanations, where the number of examples provided is defined by the user. Since the method performs a greedy search, the results are ordered by their contribution to how well they explain the model and how many unique features they cover.

We apply SP-LIME to the BERT classifier and extract the top 20 text examples that the explainability approach considers most representative. These contain 9 S examples and 11 M examples. A selected set of instance explanations can be seen in Table 6, and the full list is available in our online appendix. We can see the overemphasis of stop words, especially in the top examples. Only a couple of the surfaced terms carry political significance, and even those lack context and have questionable generalisability. Some of the examples provided by SP-LIME (see Rank 12 and Rank 16 in Table 6) are instances that human intuition can more easily align with. However, SP-LIME in general does not provide a way to distinguish between the two types of contributing features that the current work targets. Finally, SP-LIME also differs from our method in the way it presents texts containing explanatory features. SP-LIME tries to find texts that contain as many features as possible in one and the same text, while we choose to present many alternative contexts in which explaining feature words appear, motivated by social science use cases.

Rank 1 SP-LIME example (true label S): är (is), det (the), som (as), den (the), vi (we), Natomedlemskap (NATO membership), att (to), du (you), samlingsregeringen (the coalition government), **Vi** (We)

Rank 2 SP-LIME example (true label M): frågorna (the questions), protektionistiska (protectionist), önskar (wish), Det (The), och (and), Herr (Mr), oerhört (incredibly), handelsminister (Minister of Trade), tackar (thanks), de (the)

...

Rank 12 SP-LIME example (true label M): medelinkomsttagare (middle income earner), avregleringar (deregulations), vänster (left), tvivelaktiga (questionable), skattesänkningar (tax cuts), Då (Then), och (and), Man (One/third person singular), bostadsmarknaden (the housing market), stöd (support)

...

Rank 16 SP-LIME example (true label S): borgarna (the bourgeois), oss (us), långtidsarbetslösa (long-term unemployed), klyftorna (the cleavages), det (the), sjuka (sick), rödgröna (red-green), Vi (We), Låt (Let), är (is)

Table 6: Explanations provided by SP-LIME. Bold features indicate words contributing towards an M classification, while italic features do the same for S. Full results are in the online appendix.
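The difference between the two scoring functions contrasted in this section is easy to state in code. Below is a minimal sketch of SP-LIME's global importance next to the normalised score used in our sorting step; `W` is the explanation matrix from the text, `m` the per-feature document counts, and the function names are illustrative.

```python
import numpy as np

# W[i, j]: local importance of feature j in instance i (shape N x d)
# m[j]:   number of validation documents in which feature j occurs

def splime_importance(W: np.ndarray) -> np.ndarray:
    """SP-LIME: I_j = sqrt(sum_i W_ij), as defined in the text."""
    return np.sqrt(W.sum(axis=0))

def normalised_score(W: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Our normalisation: h_j = (1/m_j) * sum_i W_ij, which penalises
    common words that rarely contribute to a class prediction."""
    return W.sum(axis=0) / m
```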
## 7 Conclusion and Discussion

We have developed a new algorithm for extracting class explanations, which takes the distinction between function and content words into account. It thereby provides an alternative to prior methods like SP-LIME, which mix explanations based on e.g. stop word frequency with the presence of certain domain-specific terms. Our motivation comes from the idea of human-grounded explainability: a useful explanation for a human will focus on content rather than stop words, while still being true to the model. In our case study, we demonstrated this on speeches from the Swedish parliament, with the task of explaining a binary classifier associating speeches with either of the two main parties. This is a difficult task: our human annotation experiment showed humans performing only slightly better than random, potentially because they primarily looked for clues about policy. The machine learning models performed better, as they likely also managed to identify statistical speech patterns of speakers, which we saw in explanations where e.g. stop words inevitably appear. Our algorithm can not only identify these, but also separate them from explanations containing domain-specific words hinting at policy, as motivated by the needs of social scientists. Additionally, we find indications that domain-specific explanations correlate with model performance. Patterns related to policy in our experiment may be more robust than learned speech patterns of stop words, which risk being influenced by single frequent individuals in the dataset rather than capturing patterns common to a political party.

Future work will focus on systematic and extensive testing of the proposed methodology in order to evaluate it along the twelve properties proposed by Nauta et al. (2022). The focus should be on measuring correctness (faithfulness to the underlying black-box model), as well as a larger-scale domain expert evaluation to measure how relevant and valid the explanations are (the context and coherence properties). The generalisability will also be tested by studying other domains and classification tasks.

## References

Samira Abnar and Willem Zuidema. 2020. Quantifying Attention Flow in Transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4190-4197. ACL.

Anne Anastasi. 1976. Psychological Testing, 4th edition. Macmillan, New York, NY.

R. Arun, V. Suresh, and C. E. Veni Madhavan. 2009a. Stopword Graphs and Authorship Attribution in Text Corpora. In 2009 IEEE International Conference on Semantic Computing, pages 192-196.

Rajkumar Arun, Ravi Saradha, V. Suresh, M. Murty, and C. Madhavan. 2009b. Stopwords and Stylometry: A Latent Dirichlet Allocation Approach. In NIPS Workshop on Applications for Topic Models.

David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. 2010. How to Explain Individual Classification Decisions. The Journal of Machine Learning Research, 11:1803-1831.

Ulya Bayram, John Pestian, Daniel Santel, and Ali A. Minai. 2019. What's in a Word?
Detecting Partisan Affiliation from Word Use in Congressional Speeches. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.

Edward Carmines and Richard Zeller. 1979. Reliability and Validity Assessment. Sage, Thousand Oaks, CA.

Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A Survey of the State of Explainable AI for Natural Language Processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 447-459. ACL.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, MN. ACL.

Finale Doshi-Velez and Been Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning. arXiv preprint arXiv:1702.08608.

Kawin Ethayarajh and Dan Jurafsky. 2021. Attention Flows are Shapley Value Explanations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 49-54. ACL.

Lawrence Ezrow, Catherine de Vries, Marco Steenbergen, and Erica Edwards. 2011. Mean voter representation and partisan constituency representation: Do parties respond to the mean voter position or to their supporters? Party Politics, 17(3):275-301.

Simon Franzmann and André Kaiser. 2006. Locating Political Parties in Policy Space: A Reanalysis of Party Manifesto Data. Party Politics, 12(2):163-188.

Karin Friberg Heppin and Maria Toporowska Gronostaj. 2012. The Rocky Road towards a Swedish FrameNet - Creating SweFN. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 256-261. European Language Resources Association (ELRA).

John Hewitt and Percy Liang. 2019. Designing and Interpreting Probes with Control Tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743. ACL.

John Hewitt and Christopher D. Manning. 2019. A Structural Probe for Finding Syntax in Word Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138. ACL.

Seth Jolly, Ryan Bakker, Liesbet Hooghe, Gary Marks, Jonathan Polk, Jan Rovny, Marco Steenbergen, and Milada Anna Vachudova. 2022. Chapel Hill Expert Survey trend file, 1999-2019. Electoral Studies, 75:102420.

Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and Understanding Neural Models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681-691. ACL.

Andreas Madsen, Siva Reddy, and Sarath Chandar. 2022. Post-Hoc Interpretability for Neural NLP: A Survey. ACM Computing Surveys, 55(8):1-42.

Tim Miller. 2019. Explanation in Artificial Intelligence: Insights from the Social Sciences.
Artificial Intelligence, 267:1-38.

Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. 2022. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. CoRR, abs/2201.08164.

Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language Models as Knowledge Bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473. ACL.

Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 97-101. ACL.

Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A Primer in BERTology: What We Know About How BERT Works. Transactions of the Association for Computational Linguistics, 8:842-866.

William R. Shadish, Thomas D. Cook, and Donald T. Campbell. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin, Boston, MA.

Lloyd S. Shapley. 1952. A Value for N-Person Games. RAND Corporation, Santa Monica, CA.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. CoRR, abs/1905.06316.

Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP Models Know Numbers? Probing Numeracy in Embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307-5315. ACL.
\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..676a8d9ea7902df014acead84b627b5ee4280616
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/WGYiq3yOTa/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,682 @@

§ CLASS EXPLANATIONS: THE ROLE OF CONTENT AND FUNCTION WORDS

Anonymous Author, Anonymous Author, Anonymous Author, Anonymous Author

Affiliation / Address line 1
Affiliation / Address line 2
{email}@domain

§ ABSTRACT

We address two understudied areas related to explainability for neural text models. First, class explanations: what features are descriptive across a class, rather than explaining single input instances? Second, the type of features that are used for providing explanations: does the explanation involve the statistical pattern of word usage or the presence of domain-specific content words? Here, we present a method to extract class explanations together with strategies to differentiate between two types of explanations - domain-specific signals or statistical variations in the frequencies of common words. We demonstrate our method using a case study in which we analyse transcripts of political debates in the Swedish Riksdag.

§ 1 INTRODUCTION

Recent developments in NLP are often the result of ever more complex model architectures and an increasing number of model parameters. Yet, if we want to rely on these models, we should be able to review the similarities and dissimilarities between the model and human judgement. Explainability frameworks can do this by highlighting what the model has learnt to base its decisions on. Are these coincidental statistical patterns or something that a human would use as an explanation? Madsen et al. (2022) argue that explanations should ideally be both functionally grounded (true to the underlying machine learning model) and human-grounded (useful to a human).

In this article, we propose a new method for extracting class explanations from text classifiers. In addition, we show a new way to distinguish between two types of features that appear in those explanations, that is, between content words and subtle statistical differences in the frequencies of function words. Our method aggregates explanations for individual data points (here provided by LIME (Ribeiro et al., 2016)), followed by a sorting stage that separates the different kinds of features.

Our work is in part motivated by use cases of machine learning for texts in the social sciences. In this field, explainability methods are relevant both as checks to compare against human expert knowledge and as a tool for bias detection.
As a case study, we use our method to explain the decisions of a binary classifier trained to identify whether speeches in the Swedish Riksdag belong to either of the two main parties, the Moderates (M) or the Social Democrats (S).

We find that our method can separate class explainability features, and that those data points whose explanations contain primarily domain-specific content words are more often classified correctly.

§ 2 LITERATURE REVIEW

As a result of the extensive work on explainability methods, a complex typology of different approaches exists (see Danilevsky et al. (2020) or Madsen et al. (2022) for a survey). One important distinction is between global and local. On the one hand, global methods aim to explain some general behaviour of a model, such as class explanations, which summarise the model with respect to a certain class. On the other, local methods aim to explain why the model assigned a single data point to a particular class.

Of global and local methods, the latter receive the most attention (Nauta et al., 2022). Three popular methods are gradient-based approaches (Baehrens et al., 2010), Shapley values (Shapley, 1952), and LIME. Gradient-based approaches use the model's weights and take the gradient with regard to the input. As such, they measure the change in the outcome given some small change in the input. Yet, they are only an accurate reflection of the model if that model is linear (Li et al., 2016), which is not the case for most deep NLP architectures. On the other hand, while Shapley values have many theoretical guarantees that make them a faithful interpretation (they represent the true contributions of the features (Ethayarajh and Jurafsky, 2021)), their implementations (e.g. via attention flows for transformer-based architectures (Abnar and Zuidema, 2020)) tend to be computationally expensive, which is problematic in the current setting, where we focus on aggregating a substantial number of individual explanations. Finally, LIME has an advantage over gradient-based approaches in that it is model agnostic. This means that LIME attempts to explain a trained classifier independently of its architecture (Ribeiro et al., 2016).

§ 2.1 CLASS EXPLANATIONS

The area of global class explanations is so far less studied than that of local explanations. One approach to providing a global understanding of the model is to use behavioural or structural probes (Tenney et al., 2019; Hewitt and Manning, 2019; Wallace et al., 2019). Probing is a technique where a supervised model (a probe) is used to determine what is encoded in the internal representation of the studied model. This is done by training the probe to predict based on the frozen representations of the black-box model. If the probe performs well on the task, that indicates the required information was well represented by the black-box model; if the probe is unable to achieve high accuracy, that is taken to signify that the studied patterns are not learned by the black-box model. This has some limitations - for example, the complexity of the probe. If the probe is too simple, it may not capture second-order effects; if it is too complex, it may learn the task internally and "discover" things that are in the probe rather than the model (Hewitt and Liang, 2019).
More importantly, these methods tend to be applied to the discovery of simple syntactic structures like part-of-speech (POS) tagging or syntactic tree structures (Rogers et al., 2020), or to detect the presence of specific knowledge (Petroni et al., 2019). Other attempts in this area include leveraging local methods and utilising a strategy for aggregating and presenting those results to the user. An example of such an approach is SP-LIME (Ribeiro et al., 2016), which aggregates individual LIME explanations with a greedy search for finding data points (texts) that are explained by the most dissimilar sets of features, in order to represent the breadth of the class explanations. The results are presented as ranked text examples with their corresponding explanations, where the number of examples is defined by the user. Due to its focus on features that cover as many input instances as possible, this method tends to overemphasise stop words (see further discussion in Section 6).

§ 2.2 FEATURES OF EXPLANATIONS

To a human, not all features learnt by the machine learning model are equally informative. Some signals may come from speech patterns, others from the topic that is discussed and the sentiment, yet others may indicate preferred catchphrases and slogans. There is a distinction between explanations of the model (what a model bases its prediction on) and human explanation (what a human would base their decision on if faced with the same prediction task) (Miller, 2019). Since humans have background knowledge that is not accessible to the model, and the model has the capacity to detect small statistical signals that are beyond human computational capabilities, the sets of features selected by either may differ. This issue can be viewed in terms of the concepts presented in the position paper by Doshi-Velez and Kim (2017) and further discussed by Madsen et al. (2022), namely human-grounded and functionally-grounded explainability. Functionally-grounded explainability is concerned with how well the explanation reflects the model, whereas human-grounded explainability is concerned with producing explanations that are useful to a human. This is also in line with work by Nauta et al. (2022), where the authors argue for the rigorous evaluation of an explainability method across twelve properties in three categories - content, presentation, and user. The content properties, in particular correctness (faithfulness w.r.t. the black box), are related to the functionally-grounded approach, whereas the user properties - context (how relevant the explanation is to the user), coherence (how accordant the explanation is with prior knowledge), and controllability (how interactive or controllable an explanation is) - relate to human-grounded explainability.

In our work, we use function and content words as a proxy for functionally-grounded and human-grounded explanations. The term function words is used here in a broader sense than the strict linguistic definition of prepositions, conjunctions etc. In the setting of parliamentary debates, for example, there is procedural language (e.g. "fru talman" (madam speaker)) that can also act as function words in the domain. A model can learn to detect distributional differences of any word as long as it is correlated with the predicted class, but a human will be unlikely to relate to and understand the cause of the distributional differences of stop words.
The difference in how frequently a group uses the word "also", for example, may not be very informative for a human, even if stop word distributions point to real speech patterns that distinguish between speakers (Arun et al., 2009a) and have even been linked to the author's gender (Arun et al., 2009b). Human domain knowledge will most likely be captured through domain-specific content words. Being able to confirm the (extent of the) model's grounding in content words can serve to validate it.

§ 3 METHOD

Our algorithm for computing class explanations consists of four steps: post-hoc instance explanation extraction, aggregation, sorting, and a keyword-in-context search that extracts example texts. This framework is formalised in Algorithm 1. It is similar to SP-LIME, but rather than searching for data points that capture the most diversity of the important features, we propose to work directly with the feature importances and explore ways to summarise and sort these by relevance. The implementation will be linked in the non-anonymous version.

§ 3.1 STEP 1: INSTANCE EXPLANATION EXTRACTION

For a set of held-out data samples $N$, we apply the trained classifier $f$. In the instances where the classifier makes the correct prediction, we extract the list of features and their corresponding saliency with model $g$. This can also be flipped to focus on instances where the model makes incorrect predictions, in order to investigate which patterns or instances are hard to classify. A certainty threshold can also be used to explore only cases where the model is certain, or only borderline cases. Our method aims to be extendable to different model architectures; therefore we require a post-hoc, model-agnostic instance explanation function $g$.

Algorithm 1 Class explainability from instance explanations

Require: Binary classifier $f$, data samples $N$
Require: Instance explainability function $g$
Require: Feature scoring function $h$
  $W \leftarrow \{\}$ ▷ features and importance scores
  $c1 \leftarrow \{\}$ ▷ features explaining class 1
  $c2 \leftarrow \{\}$ ▷ features explaining class 2
  Step 1 - Instance explanation extraction
  for text, true_label $\in N$ do
    if $f(\text{text}) =$ true_label then
      $W \leftarrow W \cup \{g(\text{text}, f)\}$
    end if
  end for
  Step 2 - Aggregation
  for feature, score $\in W$ do
    if score $< 0$ then
      $c1 \leftarrow c1 \cup \{\text{feature}\}$
    else
      $c2 \leftarrow c2 \cup \{\text{feature}\}$
    end if
  end for
  Step 3 - Sorting
  for $c \in \{c1, c2\}$ do
    return $c$ sorted by $h$ score
  end for
  Step 4 - Keywords in context
  for $c \in \{c1, c2\}$ do
    for term $\in$ top $X$ terms in $c$ do
      return all occurrences of term, with $n$ words before and after
    end for
  end for

For now, we have chosen LIME, but alternative methods can be used as well, as long as they are able to extract features and the feature contribution scores that explain an instance. This means we are currently constrained by LIME's limitations and only consider single tokens as features. Since LIME is a surrogate model, there is also some uncoupling between the classification model and the explanations. For each correctly classified instance, we extract the top $k$ features (here set to 10). This can be reduced even further in order to limit the number of features that are considered, or extended to include all tokens, in which case the task of limiting the explanation is completely relegated to the sorting step.
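As an illustration of Steps 1 and 2, the following sketch shows how instance explanations could be collected with the `lime` package and aggregated by score sign. It assumes a `predict_proba`-style wrapper around the classifier; names such as `collect_explanations` are ours, and the released implementation may differ in detail.

```python
from collections import defaultdict
from lime.lime_text import LimeTextExplainer

def collect_explanations(texts, labels, predict_proba, k=10):
    """Step 1: LIME explanations for correctly classified texts;
    Step 2: aggregation of features by the sign of their local score.
    A feature may end up in both sets (see Section 3.2)."""
    explainer = LimeTextExplainer(class_names=["c1", "c2"])
    W = defaultdict(list)   # feature -> local importances across instances
    c1, c2 = set(), set()
    for text, true_label in zip(texts, labels):
        if predict_proba([text])[0].argmax() != true_label:
            continue        # keep only correctly classified instances
        exp = explainer.explain_instance(text, predict_proba, num_features=k)
        for feature, score in exp.as_list():
            W[feature].append(score)
            (c1 if score < 0 else c2).add(feature)
    return W, c1, c2
```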
§ 3.2 STEP 2: AGGREGATION

A feature can contribute either positively or negatively towards the prediction of the model. When working with a binary classifier, a feature contributing negatively towards predicting class 1 is a positively contributing feature for class 2. Therefore, the features collected in the previous step are aggregated into two sets - $c1$, $c2$ - one for each class, based on the sign of their feature score. Note that these two sets of features may overlap if the predictive signal is indicative of the different contexts in which those features appear.

§ 3.3 STEP 3: SORTING

The resulting sets of features for each class need to be constrained to a feasible size to be interpretable by a human. We propose two approaches to developing a feature relevance score $h$ that prioritises and distinguishes these terms along an axis from more domain-specific concepts to more generic stop words - normalisation and PCA.

Normalisation. Here, we use the sum of LIME scores for each feature of the explanation, divided by the number of occurrences of that feature in the validation set. We calculate the feature relevance score $h$ of the $j^{\text{th}}$ feature as: $h_j = \frac{1}{m_j}\sum_{i=1}^{N} W_{ij}$. Here, $N$ is the number of data points in the explained dataset, $m_j$ is the number of occurrences of feature $j$ in the explained set, and $W$ is the explanation matrix containing the local importance of the interpretable components for each instance. This will give higher scores to features identified as more important by LIME, but will penalise common words if they do not often contribute to a class prediction. This is in line with the definition of stop words and should target the corpus-specific stop words. We also filter out words that appear in two or fewer documents, as these can be party-specific but may not be useful for generalisation. This number can also be increased to filter out more predictive (according to LIME) words.

PCA. The second approach to sorting is to decouple it from the LIME score after the initial aggregation step and use PCA of word embeddings. We found that PCA applied to pre-trained word embeddings tends to separate domain-specific words from function words and other generic terms. A theoretical motivation for this analysis lies in the distributional differences between general text (used for pre-training word embeddings) and domain-specific text (in this case, political debate). We hypothesise that the general embedding model will see the domain-specific terms in sufficiently distinct contexts to embed them in a compact space, with a latent dimension separating them from more common and general terms. This relies on the studied data having a significant amount of domain-specific terminology that is rarer in general text. We expect this to be the case for many applications within the social sciences (e.g. politics), but it can have limitations in lower-level syntactic classification tasks like POS tagging.

To calculate the sorting score, the terms from each set $c1$ and $c2$ are embedded using a model trained on the Swedish CoNLL17 corpus.${}^{1}$ A PCA is run on each set of words - $c1$, $c2$ - and the first PCA dimension value is used as the sorting score $h$. As with the normalisation approach, words that appear in two or fewer documents are filtered out. This dimension seems to provide a good distinction of domain-specific terms.
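A condensed sketch of both scoring variants follows. It assumes the aggregated importances `W` and per-feature document frequencies from the previous steps are available as dictionaries, and that `embeddings` behaves like a gensim KeyedVectors object loaded from the Swedish CoNLL17 vectors; the helper names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def normalised_scores(W, doc_freq, min_df=3):
    """h_j = (1/m_j) * sum_i W_ij, dropping words seen in < min_df documents."""
    return {f: sum(scores) / doc_freq[f]
            for f, scores in W.items() if doc_freq[f] >= min_df}

def pca_scores(terms, embeddings):
    """Sort terms by their coordinate on the first PCA component of
    their pre-trained embeddings (here: a Swedish CoNLL17 model)."""
    terms = [t for t in terms if t in embeddings]
    X = np.stack([embeddings[t] for t in terms])
    coords = PCA(n_components=1).fit_transform(X)[:, 0]
    return dict(zip(terms, coords))
```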
§ 3.4 STEP 4: KEYWORDS IN CONTEXT

To further increase human interpretability, we also provide context by extracting snippets of text around the top word features produced in Step 3. For each occurrence, we use a simple keyword-in-context search and extract the $n$ words before and after our feature word. This is clearly not feasible or interesting for very frequent words, which further motivates separating rarer, domain-specific content words from more common function words.

§ 4 DATA

The dataset used for the case study consists of transcripts of debates in the Swedish Riksdag, sourced from Riksdagens öppna data - Anföranden.${}^{2}$ We use a pre-processed version available from Språkbanken${}^{3}$ consisting of debates from 1993 to 2018. For our experiment, texts from the Social Democrat (S) and Moderate (M) parties have been extracted, resulting in 104,842 S and 62,160 M data points (one data point is one speech, which can be part of a longer debate). From these, 100 examples have been sampled for a small-scale human baseline check, where two annotators are asked to perform the classification task of determining the party label from the speech texts and are evaluated against the true label. Since these are debates, references to the opponent are a strong but trivial predictor of party. References to people and political parties have therefore been removed by targeting Swedish political party stems and words tagged as "People_along_political_spectrum" in Språkbanken's tags, based on Swedish FrameNet (Heppin and Gronostaj, 2012). Data points shorter than 50 words have been removed, as manual analysis shows these tend to be entirely procedural and do not carry political sentiment. This is in line with similar cleaning practices used for US congressional debates (Bayram et al., 2019). The data is undersampled to balance the classes and split into train (108,169), test (12,019), and validation (2,000) sets. The validation set is used for the explainability methods.

${}^{1}$ http://vectors.nlpl.eu/repository/20/69.zip
${}^{2}$ https://data.riksdagen.se/data/anforanden/
${}^{3}$ https://spraakbanken.gu.se/resurser/rd-anf-1993-2018

§ 5 EXPERIMENTS

To test our methodology we apply it to a BERT classifier trained to predict the party label of a text (Devlin et al., 2019). The classifier is fine-tuned from a pre-trained model for Swedish released by The National Library of Sweden/KBLab and available through the huggingface library.${}^{4}$ The model has a 50,325-word vocabulary and a 512-token maximum input length; longer inputs are truncated. As a baseline for investigating class differences and the separability of the data, we use a logistic regression classifier, as this provides easy access to class explanations by simply looking at the top- and bottom-scoring internal weights of the model. N-gram spans from 1 to 3, as well as a combination of all of them, have been compared. The number of input features is 50,325 - the same as for the pre-trained BERT model.

${}^{4}$ https://huggingface.co/KB/bert-base-swedish-cased

A small-scale human annotation check on 100 instances shows the two annotators perform with 58 and 56 percent accuracy, respectively. A Cohen's kappa of 0.4 indicates this is a hard classification task.
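For reference, a fine-tuning setup matching the description at the start of this section could look as follows with the huggingface transformers library. This is a minimal sketch assuming the hyperparameters reported here (lr = 5e-6, batch size 48, 6,000 steps); `train_ds` and `eval_ds` stand in for tokenised dataset objects and are not part of our released code.

```python
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

name = "KB/bert-base-swedish-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

args = TrainingArguments(
    output_dir="riksdag-clf",
    learning_rate=5e-6,
    per_device_train_batch_size=48,
    max_steps=6000,
)

# train_ds / eval_ds: tokenised speeches with binary party labels (S vs. M),
# truncated to the model's 512-token limit
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```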
In the interest of space, the sections below contain partial results. The full results are available in an online appendix.${}^{5}$

§ 5.1 BASELINE

Table 1 summarises the accuracy and F1 scores for the logistic regression classifier. We observe that the best result is achieved with 1-grams, with the inclusion of 2- and 3-grams adding no performance gains. It seems the main part of the distinguishing signal can be picked up by specific words rather than phrases.

n-gram span   #feat     acc     F1
(1,1)         50,325    76.94   76.80
(2,2)         50,325    73.19   73.05
(3,3)         50,325    69.39   69.15
(1,3)         150,975   76.93   76.80

Table 1: Logistic regression classifier performance.

From the internal model weights, we can identify that both domain-specific words - "sjuka" (sick), "arbetslösa" (unemployed), "arbetslinjen" (the employment line, a Moderate catchphrase) - and function words - "det" (the), "också" (also), "synnerhet" (in particular) - can be predictive of the party label. This is in agreement with our assumption that a model can depend both on statistical differences in stop words and on human concepts as the basis of its prediction, and in doing so outperform the human annotators.

§ 5.2 BERT

The BERT model${}^{6}$ has an accuracy of 78.44 and an F1 score of 76.66 on the test set, and an accuracy of 79.95 and an F1 score of 78.27 on the validation set, which is only a slight improvement over the logistic regression baseline.

Applying LIME to all validation samples and aggregating the top 10 features for each data point results in a list of 2,043 Moderate and 2,085 Social Democrat terms. Of these, 1,456 Moderate and 1,334 Social Democrat terms appear in more than two documents, and are thus candidates to be included as part of the class explanations (this limit can be adjusted by the user).

${}^{5}$ https://github.com/anonymous-supplementary-materials/NoDaLiDa2023_Appendix
${}^{6}$ With hyperparameters: lr = 5e-6, batch size = 48, steps = 6,000.

PCA ordering
rank   term
1      utgiftsområde (expenditure area)
2      budgetpropositionen (the budget bill)
3      jobbskatteavdrag (employment tax credit)
4      arbetslöshetsförsäkringen (unemployment insurance)
5      skattehöjningar (tax increases)
...
1454   högkvalitativa (high quality)
1455   vackra (beautiful)
1456   klassiska (classic)

Normalised LIME score
rank   term
1      vänsterregering (left-wing government)
2      fattigdomsbekämpning (poverty alleviation)
3      bidragsberoende (benefits dependency)
4      fridens (of peace)
5      arbetsföra (able to work)
...
1454   som (as)
1455   ett (one)
1456   en (one)

Table 2: Results for the Moderates.

§ 5.3 VALIDATION

Tables 2-3 show the results of both LIME and PCA sorting for both M and S. In both cases, the approaches separate informative terms from generic ones. This is especially the case with the LIME scores, where the lowest-scoring words are all stop words. As for the highest-scoring words, we find that they are all related to taxes and employment.
This is understandable, as this is also what makes up the main political left/right dimension in Sweden (Franzmann and Kaiser, 2006; Jolly et al., 2022; Ezrow et al., 2011). Besides, we can identify several references to (groups of) parties and ministers, which we would expect in debates.

While these findings are hopeful on their own, to be useful for social scientists we need to do more to ensure that our results are valid. In other words, we want to ensure that our method measures what we intend to measure (Carmines and Zeller, 1979). In our case, this is whether a speech is representative of S or M.

PCA ordering
rank   term
1      budgetpropositionen (the budget bill)
2      arbetsmarknadspolitik (labour market policy)
3      samlingspartiet [refers to the Moderates]
4      ungdomsarbetslösheten (youth unemployment)
5      skattesänkningar (tax cuts)
...
1332   tillsammans (together)
1333   u (u)
1334   dam (lady)

Normalised LIME score
rank   term
1      överläggningen (the deliberation)
2      moderatledda (Moderate-led)
3      kd (abbrev. for the Christian Democrat party)
4      skattesänkningarna (the tax cuts)
5      borgarna (the bourgeois [parties to the right])
...
1332   har (have)
1333   av (of)
1334   för (for)

Table 3: Results for the Social Democrats.

Looking at how appropriate the terms are, as we did above, is a first step. This is also known as face validity: we check whether our method "appears to measure" what we want it to measure (Anastasi, 1976, pp. 139-140). Yet, face validity depends on many implicit decisions that vary between contexts and researchers. As such, we should look further if we wish to provide a more satisfactory validation. One good candidate is construct validity (Shadish et al., 2002; Carmines and Zeller, 1979), which refers to the degree to which we can use our results to say something about what we aim to measure. One way to assess this here is to look at the wider context in which the terms used by the algorithm appear. For example, if a term used by the algorithm to assign a speech to S occurs in a context that defines S, this strengthens our case for construct validity. To see this, we can use keyword-in-context (KWIC), which looks at the $n$ (here we choose 20) words before and after the term that interests us. In Table 4 we show this for one of the terms from the PCA analysis for S - arbetsmarknadspolitik (labour market policy). Here, we see that the context of the word indeed refers to policies close to S. In both cases, the term is used to call for more and new measures to regulate the labour market - something indicative of S. Similar examples for the words in Tables 2-3 are in the online appendix. As we have implemented KWIC in our algorithm, scholars can thus easily assess whether the same is true for any of the other terms and in this way better assess the validity.

"... enda åtgärd lösa detta, det behövs många åtgärder. Det handlar om ett gott företagarklimat, om en ny arbetsmarknadspolitik, om ytterligare utbildningssatsningar, om att bygga om - osv. med de förslag till åtgärder som vi ..."
"... single measure solve this, many measures are needed. It's about a good business climate, about a new labour market policy, about further training efforts, about rebuilding - etc. with the proposed measures that we ..."

"... i arbete det finns individer som kommer att behöva särskilt stöd, och då behöver vi ha en bra arbetsmarknadspolitik. Men det är förstås inget egenvärde i att ungdomar som kan få jobb ändå ska vara i en ..."

"... in work there are individuals who will need separate support, and then we need to have a good labour market policy. But of course there is no intrinsic value in young people who can get a job still being in a ..."

Table 4: Keywords-in-context for the class-explanation feature labour market policy for the Social Democrats.

§ 5.4 EXPLANATIONS AND PREDICTIVE ACCURACY

Returning to individual instance explanations, we also wanted to investigate whether the kind of words (domain-specific or statistical distributions) occurring in an explanation has any relationship with the certainty of the model on those datapoints. We found domain-specific words (here related to politics) along the positive PCA spectrum, while more common, general words had embeddings placing them towards the negative end. We find that datapoints where the explanation words are predominantly positioned within the positive PCA spectrum (the sum of the PCA coordinates of the top-ten explanation features is positive) are cases where the model is more accurate. Compared to datapoints where explanations lie in the negative PCA space, there is an accuracy gain of roughly 10 percentage points (Table 5). Interestingly, this suggests that explanations containing domain-specific, rarer words are correlated with the model's correctness, although the number of datapoints with domain-specific explanations is quite small.

               Correct   Incorrect   Acc
Pos PCA sum      186        25       88.15
Neg PCA sum     1413       376       78.98

Table 5: Classifier performance on the validation set, split based on the sum of the PCA coordinates of the explanation provided by LIME.

§ 6 COMPARISON TO SP-LIME

Our method is comparable to SP-LIME, which aggregates individual LIME explanations. SP-LIME consists of three similar steps: post-hoc instance explanation extraction, sorting, and example extraction. In contrast to our proposed scoring functions, SP-LIME calculates the score for feature $j$ as $I_j = \sqrt{\sum_{i=1}^{N} W_{ij}}$, where $N$ is the number of data points in the explained dataset and $W$ is the explanation matrix containing the local importance of the features. Based on this scoring, SP-LIME performs a greedy search to extract the top-scoring data examples that also have the greatest coverage of distinct features. The model explanation therefore takes the form of a set number of text examples with their corresponding instance explanations, where the number of examples provided is defined by the user. Since the method performs a greedy search, the results are ordered by their contribution to how well they explain the model and how many unique features they cover.

We apply SP-LIME to the BERT classifier and extract the top 20 text examples that the explainability approach considers most representative. These contain 9 S examples and 11 M examples.
A selected set of instance explanations can be seen in Table 6, and the full list is available in our online appendix. We can see the overemphasis of stop words, especially in the top examples. Only a couple of the surfaced terms carry political significance, and even those lack context and have questionable generalisability. Some of the examples provided by SP-LIME (see Rank 12 and Rank 16 in Table 6) are instances that human intuition can more easily align with. However, SP-LIME in general does not provide a way to distinguish between the two types of contributing features that the current work targets. Finally, SP-LIME also differs from our method in the way it presents texts containing explanatory features. SP-LIME tries to find texts that contain as many features as possible in one and the same text, while we choose to present many alternative contexts in which explaining feature words appear, motivated by social science use cases.

Rank 1 SP-LIME example (true label S): är (is), det (the), som (as), den (the), vi (we), Natomedlemskap (NATO membership), att (to), du (you), samlingsregeringen (the coalition government), $\mathbf{Vi}$ (We)

Rank 2 SP-LIME example (true label M): frågorna (the questions), protektionistiska (protectionist), önskar (wish), Det (The), och (and), Herr (Mr), oerhört (incredibly), handelsminister (Minister of Trade), tackar (thanks), de (the)

...

Rank 12 SP-LIME example (true label M): medelinkomsttagare (middle income earner), avregleringar (deregulations), vänster (left), tvivelaktiga (questionable), skattesänkningar (tax cuts), Då (Then), och (and), Man (One/third person singular), bostadsmarknaden (the housing market), stöd (support)

...

Rank 16 SP-LIME example (true label S): borgarna (the bourgeois), oss (us), långtidsarbetslösa (long-term unemployed), klyftorna (the cleavages), det (the), sjuka (sick), rödgröna (red-green), Vi (We), Låt (Let), är (is)

Table 6: Explanations provided by SP-LIME. Bold features indicate words contributing towards an M classification, while italic features do the same for S. Full results are in the online appendix.

§ 7 CONCLUSION AND DISCUSSION

We have developed a new algorithm for extracting class explanations, which takes the distinction between function and content words into account. It thereby provides an alternative to prior methods like SP-LIME, which mix explanations based on e.g. stop word frequency with the presence of certain domain-specific terms. Our motivation comes from the idea of human-grounded explainability: a useful explanation for a human will focus on content rather than stop words, while still being true to the model. In our case study, we demonstrated this on speeches from the Swedish parliament, with the task of explaining a binary classifier associating speeches with either of the two main parties. This is a difficult task: our human annotation experiment showed humans performing only slightly better than random, potentially because they primarily looked for clues about policy. The machine learning models performed better, as they likely also managed to identify statistical speech patterns of speakers, which we saw in explanations where e.g. stop words inevitably appear. Our algorithm can not only identify these, but also separate them from explanations containing domain-specific words hinting at policy, as motivated by the needs of social scientists.
Additionally, we find indications that domain-specific explanations correlate with model performance. Patterns related to policy in our experiment may be more robust than learned speech patterns of stop words, which risk being influenced by single frequent individuals in the dataset rather than capturing patterns common to a political party.

Future work will focus on systematic and extensive testing of the proposed methodology in order to evaluate it along the twelve properties proposed by Nauta et al. (2022). The focus should be on measuring correctness (faithfulness to the underlying black-box model), as well as a larger-scale domain expert evaluation to measure how relevant and valid the explanations are (the context and coherence properties). The generalisability will also be tested by studying other domains and classification tasks.
\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/YTVwaoG0Mi/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/YTVwaoG0Mi/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..5eb450945c09165a3e28f24694d05023a1ef6383
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/YTVwaoG0Mi/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,523 @@

# Detection and attribution of quotes in Finnish news media: BERT vs. rule-based approach

## Abstract

We approach the problem of recognition and attribution of quotes in Finnish news media. Solving this task would create possibilities for large-scale analysis of media wrt. the presence and styles of presentation of different voices and opinions. We describe the annotation of a corpus of media texts, numbering around 1,500 articles, with quote attribution and coreference information. Further, we compare two methods for automatic quote recognition: a rule-based one operating on dependency trees and a machine learning one built on top of the BERT language model. We conclude that BERT provides more promising results even with little training data, achieving a 95% F-score on direct quote recognition and 84% for indirect quotes. Finally, we discuss open problems and further associated tasks, especially the necessity of resolving speaker mentions to entity references.

## 1 Introduction

The recognition of quotes and reported speech is an important step towards the computational analysis of news media articles. It allows us to measure on a large scale who is given voice and how much, how opposing or competing views are presented alongside each other, as well as how the language of the quoted sources differs from the language of the journalistic reporting. In the case of the Finnish news media, such analyses have recently been attempted by Koivunen et al. (2021) and Seuri et al. (2021). On the other hand, Suomen Kuvalehti et al. (2021) have studied politicians' visibility in the media based on the mentions of their names.

In the present paper, we focus on the technical task of recognizing direct and indirect quotes in the Finnish news media texts. The task can be illustrated with the following example:${}^{1}$

**Sipilän** <u>mukaan</u> *lakiehdotuksia ollaan tuomassa eduskuntaan helmikuussa.*
According to **Sipilä**, *bill proposals will be brought to the parliament in February.*

Such relations consist of three elements: the cue 'mukaan' ('according to') indicates an indirect quote, in which the source (Juha Sipilä, the Finnish prime minister 2015-2019) says the text referred to as the proposition, or quotation span. A complete approach for quote detection and attribution would solve the following tasks:

1. Detecting quotation spans.

2. Attributing quotation spans to the source mention in the text (which might also span multiple tokens).

3. Linking source mentions to entity identifiers (including coreference resolution and lemmatization).

We will present methods for solving tasks 1 and 2, while discussing 3 as a subject for further work.

Most existing work for this task deals with English, while occasionally other Germanic or Romance languages have been considered. Compared to those, Finnish presents challenges due to its rich morphology and free word order. These can largely be dealt with by the advanced NLP tools that we use (either a dependency parser pipeline or BERT), but they rule out the usage of simpler pattern-based methods and remain a possible source of errors even for state-of-the-art NLP.

---

${}^{1}$ We follow Pareti (2015)'s convention of marking the quotation span in cursive, the source in bold, and underlining the cue.

---

We describe the process of collecting and annotating a gold standard corpus in sec. 3. Further, in sec. 4, we describe two different automatic approaches: a rule-based one, amounting to matching certain grammatical structures in dependency-parsed text, and a machine learning one, which utilizes the state-of-the-art neural language model BERT. We will release the annotated corpus and both methods publicly.${}^{2}$

Our initial intuition was that dependency parsing provides enough information to recognize quotes with simple pattern matching. Another reason to implement this approach was that it did not need training data, which was at first unavailable to us. However, the final comparison revealed that the BERT-based model outperformed the rule-based one even with little training data. The results of this experiment are described in sec. 5.

## 2 Related Work

To our knowledge, the most similar work to ours has been done by Silvia Pareti and colleagues (Pareti et al., 2013; Pareti, 2015, 2016), who annotated a corpus of attribution relations for English and experimented with machine learning models for recognizing such relations. For the latter, they applied classification algorithms - CRF, k-NN, logistic regression - working on data enriched with linguistic features, which was state-of-the-art in NLP at the time. However, Scheible et al. (2016) have criticized the choice of CRFs for quote detection because of the Markov assumption they make. More recently, Papay and Padó (2019) presented a neural LSTM-based model for recognizing quotations, but without attribution. Brunner et al. (2020) compare different embedding-based models (including BERT) on the task of recognizing types of speech, which includes direct and indirect quotes.

As to Nordic languages, a rule-based approach for Norwegian has been presented by Salway et al. (2017). It utilizes a dependency parser and a list of speech verbs. For other languages, Quintão (2014) used a machine learning method on Portuguese news corpora, while Pouliquen et al.
(2007) used a rule-based approach for multiple European languages.

Muzny et al. (2017) present a method for quote attribution. They thus start with quotation spans already recognized and perform two tasks: 1) attributing a quote to a speaker mention in the text, 2) linking the speaker mentions to entities. They use a rule-based strategy on top of tools performing dependency parsing and coreference resolution. They also released a corpus of quote attributions consisting of three novels in English.

Although not dealing exactly with quote detection, Padó et al. (2019) provide a prominent example of computational analysis of political discourse using modern NLP methods. They use various neural models (including BERT) to detect claims and attribute them to actors, with the goal of modeling the discourse as a network of relations between actors and claims. Automatic quote detection could be a useful element of such a larger system as well.

## 3 Dataset and Annotation

The annotation process consisted of two parallel tasks: marking quotations and linking together chains of co-referring expressions denoting people, institutions and other human-like actors present in the documents. Both annotation tasks were conducted using the WebAnno platform (Eckart de Castilho et al., 2016), by which each annotator was assigned their documents and by which the annotation itself was done. The annotation guidelines were written beforehand and further developed after a test run.

The quotation annotation consisted of 1) marking the span in the text containing the content of the quote, 2) marking the speech act verb (if present), 3) marking the source of the quotation (if present), and 4) noting whether the quote was direct or indirect. The task was relatively straightforward, as all annotators were students with at least a minor degree in linguistics.

The project employed 10 annotators. Four of them were recruited in an earlier phase and annotated a test data set of 40 articles. After the test run, the guidelines were improved based on both inter-annotator agreement scores and feedback from the annotators, in accordance with standard linguistic annotation methodology (Artstein, 2017). The inter-annotator agreement scores (Fleiss' $\kappa$) were between 0.77-0.8, which we deemed sufficient to consider the annotations consistent. The workload was balanced so that the 6 other annotators, who were recruited at the later stage, annotated more articles to compensate for the test run. The annotators worked independently on the WebAnno platform.

---

${}^{2}$ (links to repositories removed for anonymization, will be added in the published version)

---

The articles were sampled from a database containing the metadata for the online media sources, and the sampled lists of articles were then scraped using a web crawler (Mäkelä and Toivanen, 2021) and automatically pre-processed into CONLL format containing lemmatization, part-of-speech and dependency taggings using the Turku Neural Parser (Kanerva et al., 2018). We used four sources for the articles: YLE (the Finnish national broadcasting company), Helsingin Sanomat (the most popular daily newspaper), Iltalehti (an evening tabloid) and STT (the Finnish news agency), covering different kinds of media texts wrt. length and style. The total number of articles annotated was 1,500, of which 1,460 were annotated by only one annotator at the second stage.
## 4 Methods

### 4.1 Rule-based approach

The input to the rule-based quote detection engine is text with linguistic annotations obtained from the Turku Neural Parser (Kanerva et al., 2018). The parser performs the following tasks: tokenization, lemmatization, part-of-speech and morphological tagging, and dependency parsing.

The first stage of quote recognition is recognizing syntactic structures that typically introduce a quote (Table 1). Rules 1-2 describe the very common structures 'X says that Y' and 'Y, says X', respectively. Rules 3-4 describe structures of the type 'according to X, Y' and 'in X's opinion, Y'. In such structures, the source and cue can be positioned differently relative to the proposition: before, after, or even inside it (see the example for rule 4). In the latter case, we allow annotating the cue and source as part of the proposition to avoid discontinuous propositions. Finally, rule 5 is characteristic of Finnish: it captures the construction 'says + active participle', e.g. sanoo olevansa 'says that he is', or sanoo tehneensä 'says that he did'. This construction does not use the word että 'that'.

In the rules where the cue is a verb (1, 2 and 5), the verb sanoa 'to say' can be substituted by any other speech act verb, e.g. kertoa 'to tell', korostaa 'to emphasize', kuitata 'to sum up', etc. We initially prepared a list of speech act verbs manually, then used a word2vec model to expand it with automatically generated synonyms, which were again filtered manually. The final list consisted of 73 verbs.

Once the source-cue-proposition triplets are recognized, the proposition texts can typically be extracted by taking the dependency subtree under the token marked as proposition. However, further post-processing is needed for quotes consisting of multiple sentences. For example, in Table 1, the example for rule 2 is clearly the last sentence of a multi-sentence quote. In order to expand the matches to multi-sentence quotes, we use two rules (see the sketch after the example below):

1. If the paragraph containing the match starts with a hyphen, extend the quote to the beginning of the paragraph. This is because long direct quotes are typically formatted as separate paragraphs.

2. If there is a quotation mark between the cue and the proposition head, extend the quote backwards to the matching quotation mark.

In both these cases, the quote is classified as direct, as it is marked with quotation markers. Matches that do not fulfill the above conditions are classified as indirect.

Finally, we use an additional rule to detect 'freestanding' direct quotes encompassing entire paragraphs. These do not necessarily contain a source attribution (like ', says X') because the source might already be clear from context. Thus, we detect remaining paragraphs that either start with a hyphen or are enclosed in quotation marks as direct quotes. For the attribution we currently use a naïve strategy of attributing them to the same source as the previous quote in the text (if present). This works in many cases because the quotes usually follow a structure in which a whole-paragraph direct quote is introduced by an indirect one, like:

According to Lindberg, approximately every third pet is overweight.

- We do have a lot of work on that.
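As referenced above, the two expansion rules amount to simple post-processing within one paragraph. The following is a simplified sketch; the function name and the character-offset representation are ours, for illustration, and do not mirror the released implementation.

```python
# Simplified sketch of the two multi-sentence expansion rules. A matched
# proposition is given by character offsets within one paragraph; the
# function returns the (possibly extended) start offset and whether the
# expanded quote should be classified as direct.
QUOTE_MARKS = {'"', '\u201c', '\u201d'}  # straight and curly double quotes

def expand_quote(paragraph: str, prop_start: int, cue_pos: int):
    # Rule 1: a hyphen-initial paragraph is one long direct quote.
    if paragraph.lstrip().startswith("-"):
        return 0, True
    # Rule 2: a quotation mark between cue and proposition head signals a
    # direct quote; extend backwards to the matching opening mark.
    lo, hi = sorted((cue_pos, prop_start))
    if any(mark in paragraph[lo:hi] for mark in QUOTE_MARKS):
        for i in range(prop_start, -1, -1):
            if paragraph[i] in QUOTE_MARKS:
                return i, True
    # Neither rule applies: keep the span and classify as indirect.
    return prop_start, False
```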
The rules from Table 1 are implemented using the spaCy library class DependencyMatcher${}^{3}$, which offers a declarative language to express the rules and good performance. The post-processing code is implemented in Python.

---

${}^{3}$ https://spacy.io/api/dependencymatcher

---
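As an illustration, rule 1 ('X says that Y') could be expressed roughly as follows. The pipeline setup, the UD-style labels and the truncated verb list are assumptions made for this sketch; the actual rule set is richer.

```python
# Sketch of rule 1 ('X says that Y') as a spaCy DependencyMatcher pattern.
# Assumes a pipeline whose parser produces UD-style labels for Finnish
# (e.g. via spacy-udpipe); spacy.blank("fi") is only a stand-in here.
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.blank("fi")
SPEECH_VERBS = ["sanoa", "kertoa", "korostaa"]  # the full list has 73 verbs

pattern = [
    {"RIGHT_ID": "cue",
     "RIGHT_ATTRS": {"LEMMA": {"IN": SPEECH_VERBS}}},
    {"LEFT_ID": "cue", "REL_OP": ">", "RIGHT_ID": "source",
     "RIGHT_ATTRS": {"DEP": "nsubj"}},
    {"LEFT_ID": "cue", "REL_OP": ">", "RIGHT_ID": "prop",
     "RIGHT_ATTRS": {"DEP": "ccomp"}},
]

matcher = DependencyMatcher(nlp.vocab)
matcher.add("SOURCE_CUE_PROP", [pattern])

def triplets(doc):
    """Yield (source, cue, proposition subtree) for every rule-1 match."""
    for _, (cue_i, source_i, prop_i) in matcher(doc):
        prop = doc[prop_i]
        yield doc[source_i], doc[cue_i], doc[prop.left_edge.i : prop.right_edge.i + 1]
```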
| No. | schema | example |
|-----|--------|---------|
| 1 | source cue prop | Malinen sanoo, että hän ei tule esittämään liiton hallitukselle yhdenkään sopimuksen hyväksymistä. 'Malinen says that he will not propose accepting even a single motion of agreement to the union's board.' |
| 2 | (tree diagram) | Siksi mekin lähdimme näihin neuvotteluihin mukaan, Mäkynen sanoo. 'This is why we also joined these negotiations, Mäkynen says.' |
| 3 | (tree diagram) | Sipilän mukaan lakiehdotuksia ollaan tuomassa eduskuntaan helmikuussa. 'According to Sipilä, bill proposals will be brought to the parliament in February.' |
| 4 | CASE: Ela | Suomen vaikeista ongelmista talous on presidentin mielestä helpompi. 'Of Finland's most difficult problems, the economy is, in the president's opinion, easier.' |
| 5 | (tree diagram) | Orpo sanoo olevansa valmis poikkeuksellisiin keinoihin ja jopa lainmuutoksiin [...]. 'Orpo says that he is ready for exceptional measures and even legislative changes [...].' |
Table 1: The manually constructed rules for detecting quote-like syntactic structures.

### 4.2 BERT model

The machine learning model is realized as two token classification heads on top of BERT, a neural language model based on the transformer architecture (Devlin et al., 2019). We use the model pre-trained on Finnish data by Virtanen et al. (2019).

The first classification head recognizes and classifies spans of quoted text (propositions). The labeling follows the IOB schema, and the class label encodes whether the quote is direct or indirect, as well as the relative position of the speaker mention to the quoted text. The latter is expressed as one of the symbols +, - or =, and a number 1-4. The symbol describes whether the speaker is mentioned after (+), before (-) or inside (=) the proposition, while the number signifies which recognized entity is the speaker. For example, the class label B-DIRECT+2 denotes the beginning (B-) of a direct quote, the source of which is the second recognized entity after the quote. A special label 00 signifies that the source of the quote is not marked.

The second classification head recognizes the entities, i.e. elements of coreference chains. It has just one class encoded in the IOB schema and does not perform the linking of entities into chains.

An example of sequence annotation is shown in Table 2. It shows the following sentence:

Kansainvälinen rikostuomioistuin aikoo määrätä Sudanin presidentin Omar al-Bashirin pidätettäväksi, kertoo sanomalehti New York Times.

The International Criminal Court is intending to issue an arrest warrant on Sudan's president Omar al-Bashir, the newspaper New York Times reports.

There are three entities in the sentence: 'The International Criminal Court', 'Sudan's president Omar al-Bashir' and 'the newspaper New York Times'; their annotations on the token level are encoded on the 'entity' layer. The 'quote' layer encodes an indirect quote, which is attributed to the first entity following the quote (hence, +1).
| word | quote | entity |
|------|-------|--------|
| Kansainvälinen | B-INDIRECT+1 | B |
| rikostuomioistuin | I-INDIRECT+1 | I |
| aikoo | I-INDIRECT+1 | O |
| määrätä | I-INDIRECT+1 | O |
| Sudanin | I-INDIRECT+1 | B |
| presidentin | I-INDIRECT+1 | I |
| Omar | I-INDIRECT+1 | I |
| al-Bashirin | I-INDIRECT+1 | I |
| pidätettäväksi | I-INDIRECT+1 | O |
| , | O | O |
| kertoo | O | O |
| sanomalehti | O | B |
| New | O | I |
| York | O | I |
| Times | O | I |
| . | O | O |
Table 2: An example of sequence annotation for the BERT model.
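A minimal sketch of this two-head architecture, using PyTorch and Hugging Face Transformers, is given below. The checkpoint name refers to the FinBERT model of Virtanen et al. (2019), and the label-space sizes are placeholders rather than the exact counts used in our experiments.

```python
# Sketch: two token-classification heads sharing one BERT encoder.
import torch.nn as nn
from transformers import AutoModel

class QuoteTagger(nn.Module):
    def __init__(self, model_name="TurkuNLP/bert-base-finnish-cased-v1",
                 n_quote_labels=50, n_entity_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Head 1: quote spans (IOB x direct/indirect x speaker position).
        self.quote_head = nn.Linear(hidden, n_quote_labels)
        # Head 2: entity mentions (plain IOB, no linking into chains).
        self.entity_head = nn.Linear(hidden, n_entity_labels)

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.quote_head(states), self.entity_head(states)
```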
| | training | evaluation |
|---|---|---|
| articles | 1,172 | 287 |
| sentences | 22,949 | 5,097 |
| tokens | 252,006 | 59,076 |
| quotes | 3,854 | 984 |
Table 3: The sizes of the datasets used in the experiments.

## 5 Evaluation

For the evaluation experiments we use a roughly 80-20 split of the data, taking the data provided by 2 annotators as the evaluation set and that of the remaining 8 annotators as the training set. The dataset sizes are summarized in Table 3. We compare both methods on the tasks of quote recognition (with and without direct/indirect classification) and attribution.

Quote detection. The results of quote span detection without taking the direct-indirect distinction into account are shown in Table 4. The direct-indirect breakdown is shown in Table 5, where misclassifications (identifying a direct quote as an indirect one or vice versa) were counted as both a false positive and a false negative. We exclude punctuation tokens from the evaluation, as especially the commas and periods on the boundaries of quotes might have been inconsistently annotated, and their inclusion in the quote is irrelevant.
| method | Pr | Re | F1 |
|--------|-----|-----|-----|
| rule-based | .85 | .78 | .82 |
| BERT | .92 | .90 | .91 |
Table 4: Results of quotation span detection without classification.
| method | Pr (indirect) | Re (indirect) | F1 (indirect) | Pr (direct) | Re (direct) | F1 (direct) |
|--------|-----|-----|-----|-----|-----|-----|
| rule-based | .75 | .66 | .70 | .93 | .86 | .89 |
| BERT | .84 | .84 | .84 | .96 | .94 | .95 |
Table 5: Results of quotation span detection and direct/indirect classification.

Both settings show a clear advantage of the BERT model. In the case of direct quotes, the rules for recognizing them are quite rigid. Furthermore, they can suffer from paragraph segmentation errors and misplaced or incidental quotation marks (e.g. 'scare quotes'). This explains the lower recall of the rule-based method.

Indirect quotes have proven more challenging for the rule-based method as well. This can be due to a variety of reasons: missing speech act verbs, quote spans incorrectly identified on syntactic criteria (also affected by parser, tagger and sentence segmentation errors), or uncommon structures not covered by the rules. Moreover, rule 3 ('according to') has a tendency to produce false positives, e.g. something being described 'according to the plan'.

In general, the BERT model has shown itself to be more flexible wrt. the often unpredictable nature of text data, and does not suffer from error propagation through the NLP pipeline.

Attribution. The evaluation of attribution is problematic because our dataset was not annotated with the BERT model in mind. Thus, we present it as our best attempt given the current possibilities, but recognize the need for further work in this regard.

The annotated data assigns each quote to a single token representing the mention of the quote's source in the text. If the source is represented by a longer phrase, the syntactic head (wrt. dependency parsing) of this phrase should be selected according to the annotation guidelines. On the other hand, mentions of quote sources are typically entities annotated as parts of coreference chains, and thus the entire span is marked for the purpose of coreference annotation. By combining the quote and coreference annotations, we are therefore able to obtain a span-to-span attribution relation for most cases. The exception are cases in which the quoted entity is mentioned only once in the article, and thus not annotated as a coreference chain.

Although the BERT model outputs sources as entity spans, the rule-based model points to a single token (the syntactic head), similarly to the gold standard annotation. In order to make the results comparable, we reduced the output of the BERT model to the first token of the span, and then evaluated a source annotation as correct if it either points to exactly the same token as the gold standard, or if it points to a token within the same coreference span. Thus, the model's ability to correctly identify the entire span is currently not evaluated, as it is not implemented in the rule-based method.

Table 6 presents the results of the attribution evaluation in terms of the number of gold-standard quote tokens with a correctly recognized, incorrectly recognized, or unrecognized source. The latter case occurs if either the token is not recognized as a quote at all, or it is recognized but without identifying the source. We report the accuracy as the ratio of correctly identified tokens to all tokens.

The results indicate a small advantage of the rule-based model. In both cases, the main source of errors are the unrecognized annotations rather than the incorrect ones. For the rule-based model this is typically due to quotes not being recognized at all (see the low recall in Table 4), while for the BERT model there is a large number of correctly identified quotes for which the source could not be found.
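For concreteness, the matching criterion used in this evaluation can be sketched as follows; the data layout is illustrative, not the evaluation script's actual interface.

```python
# Sketch of the attribution matching criterion: a predicted source token
# counts as correct if it coincides with the gold head token, or if both
# fall within the same annotated coreference span.
def source_correct(pred_i, gold_head_i, coref_spans):
    if pred_i == gold_head_i:
        return True
    return any(start <= pred_i <= end and start <= gold_head_i <= end
               for start, end in coref_spans)

# e.g. predicted token 12 vs. gold head 10, both inside the span (10, 13)
assert source_correct(12, 10, [(2, 4), (10, 13)])
```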
Of the 1,990 quotes recognized by the BERT model, 646 (32%) are reported without a source, compared to 13% (218/1,633) for the rule-based model. The BERT model's ability to identify the source depends on entity detection, for which the training data is incomplete (derived from coreference annotations only). Further, the model processes the text paragraph by paragraph and thus does not find a source mention that lies outside the paragraph containing the quote. These problems offer room for improvement in further work, and thus it can be expected that the BERT model will eventually outperform the rule-based one in attribution as well.

## 6 Discussion and Further Work
| method | correct | incorrect | unrecognized | accuracy |
|--------|---------|-----------|--------------|----------|
| rule-based | 7,889 | 774 | 4,996 | .58 |
| BERT | 7,554 | 767 | 5,338 | .55 |
Table 6: Results of attribution.

Although we regard the work presented in the previous sections as a complete solution to a well-delimited problem, we see some potential both for incremental improvements and for work on further related tasks, which will be addressed in the future.

Entity annotation and detection. While designing our annotation project, we did not anticipate that a machine learning quote detection model would also need to detect the entities that quotes can be attributed to. We intended the coreference annotation to be used only in the further step (entity resolution). As a result, entities that are mentioned only once were not annotated. The corpus could be improved by ensuring that at least the tokens assigned as the source of a quote are also annotated as an entity. This is expected to improve the BERT model's performance on entity detection, and thus quote attribution.

Entity resolution. While some works treat the problem of quote attribution to a speaker mention in the text and entity resolution jointly (e.g. Muzny et al. 2017), in our opinion entity resolution is a complex task that is best treated separately. In addition to coreference resolution within one document, matching the entities across documents could also be considered there.

Coreference resolution can be done with BERT with state-of-the-art accuracy (Joshi et al., 2019). However, the setup is complicated, as coreferences are typically long-range relations, so a sliding window approach needs to be used to mitigate BERT's limitation on text size. Furthermore, modeling relations with a neural model is not straightforward.

A related problem is that nested entities are possible and might be relevant, e.g.:

[[[Viron] metallityöväen liiton] puheenjohtaja Endel Soon]

[[[Estonia]'s metal workers' union]'s chairman Endel Soon]

In such cases, coreferences and other quotes might also refer to the inner entities 'Estonia' or 'Estonia's metal workers' union'. For the present work, we disregarded nested entities, as locally the outermost entity is typically the source of the quote it stands next to.

## 7 Conclusion

We have presented two methods for the recognition of quotes in Finnish news media, along with an annotated corpus for training and evaluation. To our knowledge, our solution is the first one proposed for Finnish. We hope that the progress achieved on this task will facilitate more detailed large-scale quantitative analysis of voices in the Finnish news media.

## References

Ron Artstein. 2017. Inter-annotator agreement. In Handbook of Linguistic Annotation.

Ann Brunner, Ngoc Duyen Tanja Tu, Lukas Weimer, and Fotis Jannidis. 2020. To BERT or not to BERT: comparing contextual embeddings in a deep learning architecture for the automatic recognition of four types of speech, thought and writing representation. In SwissText/KONVENS.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.

Richard Eckart de Castilho, Éva Mújdricza-Maydt, Seid Muhie Yimam, Silvana Hartmann, Iryna Gurevych, Anette Frank, and Chris Biemann. 2016. A web-based tool for the integrated annotation of semantic and syntactic structures. In Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH), pages 76-84, Osaka, Japan.
Mandar Joshi, Omer Levy, Daniel S. Weld, and Luke Zettlemoyer. 2019. BERT for coreference resolution: Baselines and analysis. In EMNLP 2019.

Jenna Kanerva, Filip Ginter, Niko Miekka, Akseli Leino, and Tapio Salakoski. 2018. Turku neural parser pipeline: An end-to-end system for the CoNLL 2018 shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics.

Anu Koivunen, Antti Kanner, Maciej Janicki, Auli Harju, Julius Hokkanen, and Eetu Mäkelä. 2021. Emotive, evaluative, epistemic: a linguistic analysis of affectivity in news journalism. Journalism, 22(5):1190-1206.

Grace Muzny, Michael Fang, Angel X. Chang, and Dan Jurafsky. 2017. A two-stage sieve approach for quote attribution.

Eetu Mäkelä and Pihla Toivanen. 2021. Finnish media scrapers. Journal of Open Source Software, 6(68):3504.

Sebastian Padó, André Blessing, Nico Blokker, Erenay Dayanik, Sebastian Haunss, and Jonas Kuhn. 2019. Who sides with whom? Towards computational construction of discourse networks for political debates. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2841-2847.

Sean Papay and Sebastian Padó. 2019. Quotation detection and classification with a corpus-agnostic model. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 888-894, Varna, Bulgaria. INCOMA Ltd.

Silvia Pareti. 2015. Attribution: A Computational Approach. Ph.D. thesis, University of Edinburgh.

Silvia Pareti. 2016. PARC 3.0: A corpus of attribution relations. In LREC.

Silvia Pareti, Tim O'Keefe, Ioannis Konstas, James R. Curran, and Irena Koprinska. 2013. Automatically detecting and attributing indirect quotations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 989-999.

Bruno Pouliquen, Ralf Steinberger, and Clive Best. 2007. Automatic detection of quotations in multilingual news. In Proceedings of Recent Advances in Natural Language Processing, pages 487-492, Borovets, Bulgaria.

Marta Quintão. 2014. Quotation attribution for Portuguese news corpora.

Andrew Salway, Paul Meurer, Knut Hofland, and Øystein Reigem. 2017. Quote extraction and attribution from Norwegian newspapers. In Proceedings of the 21st Nordic Conference of Computational Linguistics, pages 293-297, Gothenburg, Sweden.

Christian Scheible, Roman Klinger, and Sebastian Padó. 2016. Model architectures for quotation detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1736-1745.

Olli Seuri, Riikka Era, Anu Koivunen, Maciej Janicki, Pihla Toivanen, Julius Hokkanen, and Eetu Mäkelä. 2021. Uutisvuon hallitsija: Uutismedia kiky-kamppailussa 2015-2016 [Ruler of the news flow: The news media in the kiky struggle 2015-2016]. Politiikka: Valtiotieteellisen yhdistyksen julkaisu, 63(3):233-259.

Suomen Kuvalehti, Eetu Mäkelä, and Pihla Toivanen. 2021. Vuosi valokeilassa: Kuka sai medialta huomiota? Kuka jäi varjoon? Suomen Kuvalehti selvitti tutkijoiden kanssa, miten kansanedustajat näkyivät neljässä suuressa uutismediassa vuonna 2020 [A year in the spotlight: Who received attention from the media? Who remained in the shadows? Suomen Kuvalehti investigated, together with researchers, how members of parliament were visible in four major news media in 2020].

Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish.
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..22ae71c5c6614ed3fa696c3b0137eb190fb10f54
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,949 @@

# Probing structural constraints of negation in Pretrained Language Models

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract

Contradictory results about the encoding of the semantic impact of negation in pretrained language models (PLMs) have been reported recently (e.g. Kassner and Schütze (2020); Gubelmann and Handschuh (2022)).

In this paper we focus instead on the way PLMs encode negation and its formal impact, through the phenomenon of Negative Polarity Item (NPI) licensing in English. More precisely, we use probes to identify which contextual representations best encode 1) the presence of negation in a sentence, and 2) the polarity of a neighboring masked polarity item.
We find that contextual representations of tokens inside the negation scope do allow for (i) a better prediction of the presence of not compared to those outside the scope and (ii) a better prediction of the right polarity of a masked polarity item licensed by not, although the magnitude of the difference varies from PLM to PLM. Importantly, in both cases the trend holds even when controlling for distance to not.

We thus confirm that the embeddings of these models do reflect the notion of negation scope, and do encode the impact of negation on NPI licensing. The subtle difference between licensing scope and negation scope, however, does not seem to be captured.

## 1 Introduction

Negation has recently been the focus of various works aiming at determining the abilities of Pretrained Language Models (PLMs) to capture linguistic knowledge.

Some works investigate the 'semantic impact' of negation, namely its impact in terms of truth values, by interpreting how the presence of negation impacts the probability distribution at a masked position. The rationale is that negating a verb reverses the truth value of its clause, which should be reflected in the probability distribution at certain positions. Ettinger (2020) and Kassner and Schütze (2020) use factual statements such as (1), report that models output similar distributions for the positive and negative variants of (1), and conclude that models largely ignore negation.

(1) A robin is (not) a [MASK]

Gubelmann and Handschuh (2022) chose to avoid factual statements and focus rather on multi-sentence self-contained examples, such that, given the context provided by the first sentence, one particular word is either likely (in positive items) or ruled out (in negative items) at a masked position in the second sentence. Because this particular word is substantially less often the top-1 prediction in the negative items than in the positive items, the authors draw the opposite conclusion that PLMs do show sensitivity to negation.

A different line of work has focused on finding out to what extent negation is encoded in PLM embeddings. Celikkanat et al. (2020) train classifiers taking as input the contextual embedding of a verb or its subject or direct object, and predicting whether the verb is negated or not. The resulting high accuracy allows them to conclude that these tokens' embeddings do contain "traces" of not. More generally, several authors have investigated whether the contextual representation of a token encodes information about surrounding tokens. To ease further reading, we will talk of a classifier taking as input an input embedding, namely the contextual representation of an input token, and predicting some target information about another token in the sentence. For instance, Klafka and Ettinger (2020) study how input embeddings encode animacy, gender, and number of surrounding words in a specific SVO context. Li et al. (2022) target the number feature of French participles in the context of object-past participle agreement. They show that the performance of the classifier depends on the syntactic position of the input token in the sentence. We will build on their idea to compare performance at predicting target information depending on the syntactic zone the input token belongs to.
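Such probing classifiers are typically linear models over frozen contextual embeddings. The snippet below sketches the general recipe; the model choice, token selection and two-example training set are purely illustrative.

```python
# Sketch of a diagnostic probe: predict a property of the sentence (here,
# whether the verb is negated) from the frozen contextual embedding of
# one word. In practice thousands of training pairs would be used.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str, word_index: int):
    """Embedding of the first sub-token of the word at word_index."""
    batch = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        states = enc(**batch).last_hidden_state[0]
    return states[batch.word_ids().index(word_index)].numpy()

# Toy data: embedding of the object noun, label = verb negated or not.
examples = [("Sam did not find the books .", 5, 1),
            ("Sam found the books yesterday .", 3, 0)]
X = [embed(s, i) for s, i, _ in examples]
y = [lab for *_, lab in examples]
probe = LogisticRegression(max_iter=1000).fit(X, y)
```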
In this paper, we focus on how the information about negation encoded in contextual embeddings is used. Our aim is to study PLMs' ability to capture and encode structural information concerning negation (namely negation scope), and also their ability to actually mobilize this encoding in order to capture phenomena that are direct consequences of the presence of negation. To do so, we focus on the licensing of Negative Polarity Items (NPIs) by not modifying a verb. Polarity Items (PIs), either positive (e.g. some) or negative (e.g. any), are words or expressions that are constrained in their distribution (Homer, 2020). An NPI requires that a word or a construction, called the licensor, be in the vicinity. And the licensor itself grammatically defines a zone of the sentence, called the licensing scope, in which the NPI can appear. The adverb not modifying a verb is one such licensor. While any is licensed by negation in (2-a) and ruled out without it in (2-b), even a negated verb does not license any everywhere in the sentence: the NPI must fall within the licensing scope (see (3-b) below).${}^{1}$

(2) a. Sam didn't find any books.

b. *Sam found any books.

Jumelet and Hupkes (2018) have shown that LSTM embeddings do encode the notion of licensing scope (given an input embedding, a classifier can predict the structural zone the input token belongs to), a finding later confirmed for transformer-based PLMs (Warstadt et al., 2019). Focusing on the case where the licensor is a verb-modifying not, we rather investigate whether this demonstrated encoding of the zones goes as far as enabling a better prediction of a PI's polarity from inside the licensing scope compared to outside the scope. So instead of the question "Is this input embedding the embedding of a token that is within, before or after the licensing scope?", we rather ask the question "Given a masked PI position and an input embedding of a neighboring token, what is the polarity of the PI?", and we study whether this question is better answered when the input embedding is inside or outside the licensing or negation scopes.

Note that our methodology differs from that of Jumelet and Hupkes (2018), who, given an input token, predict the zone this token belongs to. We instead predict the polarity of a neighboring masked polarity item and then compare accuracies depending on the input token's zone. Our motivation is that the polarity, being lexical information, requires less linguistic preconception, and hence our probing method is a more direct translation of the NPI licensing phenomenon: we study whether and where the information of "which PIs are licit where?" is encoded, in the context of sentence negation. This method also allows us to better control the confounding factor of distance between the input embedding and the licensor not.

In the following, we start in section 2 by defining the linguistic notions of negation scope and NPI licensing scope, and by showing how we actually identified them in English sentences. In section 3, we define our probing experiments and discuss their results, both for the encoding of not (section 3.1) and the encoding of NPI licensing (section 3.2). We conclude in section 4.

## 2 Defining and identifying scopes

### 2.1 Negation scope

From a linguistic point of view, the scope of a negation cue is the area of the sentence whose propositional content's truth value is reversed by the presence of the cue.
While in many cases it is sufficient to use the syntactic structure to recover the scope, in some cases semantics or even pragmatics come into play.${}^{2}$ Nevertheless, annotation guidelines usually offer syntactic approximations of negation scope.

To identify the negation scope for a not${}^{3}$ modifying a verb, we followed the syntactic constraints that emerge from the guidelines of Morante and Blanco (2012). Note though that these guidelines restrict the annotation to factual eventualities, leaving aside e.g. negated future verbs. We did not retain such a restriction, hence our identification of the negation scope is independent of verb tense or modality.

---

${}^{1}$ We leave aside the uses of any and the like having free choice interpretations, as for instance in "Pick any card".

${}^{2}$ For instance in Kim did not go to the party because Bob was there., negation may scope only over the matrix clause or include the causal subordinate clause.

${}^{3}$ In all this article, not stands for either not or n't.

---

![01964112-e0c0-72bb-938a-3a202ab2acb8_2_191_166_1273_179_0.jpg](images/01964112-e0c0-72bb-938a-3a202ab2acb8_2_191_166_1273_179_0.jpg)

Table 1: The "neg-patterns": patterns adapted from Jumelet and Hupkes (2018), which we used to identify some cases of not licensing an NPI and to build the not+NPI test set. Col1: pattern id in Jumelet and Hupkes (2018). Col2: syntactic pattern (defined as a phrase-structure subtree, using the Penn Treebank's annotation scheme), with the licensing scope appearing in blue. Col3: examples with colors for the four zones: pink for tokens in the PRE zone (before both scopes), purple for PRE-IN (to the left of the licensing scope, but within the negation scope), blue for IN (within both scopes) and green for POST (after both scopes). The NPI licensor is not, and appears in yellow.

### 2.2 NPI licensing scope

Polarity items are a notoriously complex phenomenon. To identify the NPI licensing scope, we focus on specific syntactic patterns defined by Jumelet and Hupkes (2018), retaining only those involving not as licensor.${}^{4}$ Table 1 shows an example for each retained pattern (hereafter the neg-patterns), with the NPI licensing scope in blue.

Importantly, in the neg-patterns, the licensing scope is strictly included in the negation scope: within the clause of the negated verb, the tokens to its left belong to the negation scope but not to the licensing scope. E.g. in (3), anyone is not licit as the subject of going, whether the location argument is itself a plain PP, an NPI or a PPI (3-b).

(3) a. I'm not going anywhere.

b. *Anyone is not going to the party/somewhere/anywhere.

We thus defined 4 zones for the not+NPI sentences, exemplified in Table 1: PRE (tokens before both scopes), PRE-IN (to the left of the licensing scope, but within the negation scope), IN (in both scopes), and POST (after both scopes).

We note though that the restriction exemplified in (3-b) only holds for non-embedded NPIs (de Swart, 1998), so examples like (4), with an embedded NPI in the subject of the negated verb (hence belonging to our PRE-IN zone), are theoretically possible.

(4) Examples with any relevance to that issue didn't come up in the discussion.
+ +288 + +Yet in practice, we found that they are ex- + +tremely rare: using the Corpus of Contempo- 291 rary American English (COCA, Davies 2015) ${}^{5}$ , we extracted sentences matching one of the neg-patterns, and among these, sentences having any or any-body/one/thing/time/where in the IN zone, + +the PRE-IN zone or both. As shown in Table 2, 296 any* in the PRE-IN zone are way rarer than in the + +classical licensing scope (IN zone) ${}^{6}$ . Hence we 298 sticked to the usual notion of direct NPI licensing scope, as illustrated in Table 1. + +301 + +

| Total | IN | PRE-IN | both |
| --- | --- | --- | --- |
| 45,157 | 35,938 | 711 | 58 |

+ +303 + +Table 2: Number of sentences from the COCA + +corpus, matching the neg-patterns of Table 1: 306 Col1: total number, Col2-4: number having a + +any* in the IN zone, the PRE-IN zone, and in both 308 zones respectively. + +313 + +318 + +323 + +--- + +${}^{5}$ We used a version with texts from 1990 to 2012. COCA is distributed with some tokens in some sentences voluntarily masked, varying across distributions. We ignored such sentences. + +${}^{6}$ More precisely, the figures in Table 2 correspond to an upper bound, because of (i) potential syntactic parsing errors impacting the identification of the zones, (ii) cases in which the NPI licensor is different from the not targeted by the patterns, and (iii) cases in which the any* is a free choice item and not a NPI (as in "Pick any one"). We inspected 250 examples of any* in the PRE-IN zone, and 250 examples in the IN zone. In the former, we found that almost all cases fall under (i), (ii) or (iii), less than 3% corresponding to examples such as (4)). In contrast, in the IN zone the proportion of NPIs actually licensed by the target not is ${92}\%$ . + +${}^{4}$ We ignored pattern 4 (never instead of not as licensor), and 6 (too few occurrences in our data). We merged patterns 1 and 2 , and corrected an obvious minor error in pattern 5 . + +--- + +### 2.3 Building the not+NPI test set + +Having defined these structural zones, we can use them to probe the traces they carry and compare the magnitude of these traces across the four zones. To do so, we built a test set of COCA sentences containing a not licensing a NPI (hereafter the not+NPI test set), matching one of the neg-patterns of Table 1, and having at least one any, anybody, anyone, anything, anytime or anywhere within the licensing scope. + +The scope of negation has been implemented through an approximation using dependency parses (from the Stanza parser (Qi et al., 2020)), which proved more convenient than phrase-structure parses: we took the subtree of the negated verb, excluding not itself, and excluding dependents corresponding to sentential or verbal conjuncts and to sentential parentheticals. + +More precisely, we identified the token having not as dependent (which, given our patterns, can be either the negated verb or a predicative adjective in case of a negated copula). Then, we retrieved the children of this head, except those attached to it with a "conj", "parataxis", "mark" or "discourse" dependency. In the complete subtrees of the selected dependents, all tokens were annotated as being inside the negation scope. + +

| Genre | Mag | Acad | Fict | News | Total |
| --- | --- | --- | --- | --- | --- |
| #with not | 537 | 383 | 830 | 536 | 2285 |
| #and a NPI | 31 | 21 | 58 | 34 | 143 |

Table 3: Thousands of sentences in COCA. Line 1: containing a not. Line 2: containing a not and at least one NPI (among any-$\varnothing$/body/one/where/time/thing), anywhere in the sentence.

For the licensing scope, we parsed the corpus using the PTB-style parser "Supar Parser" ${}^{7}$ of Zhang et al. (2020), and further retained only the sentences (i) matching the neg-patterns of Table 1 and (ii) having a NPI within the licensing scope (IN zone, shown in blue in Table 1).

We finally obtained a not+NPI test set, whose statistics are provided in Table 4.

## 3 Probing for the scopes

| Pattern | Mag | Acad | Fict | News | Total |
| --- | --- | --- | --- | --- | --- |
| 1/2 | 6.56 | 1.69 | 16.49 | 6.16 | 30.90 |
| 3 | 0.57 | 0.14 | 1.33 | 0.49 | 2.53 |
| 5* | 0.22 | 0.08 | 0.58 | 0.15 | 1.02 |

Table 4: Statistics of the not+NPI test set: thousands of COCA sentences matching the neg-patterns (cf. Table 1), and having at least one any* in the IN zone (licensing scope), broken down by corpus genre.

Our objective is to study how a transformer-based PLM (i) encodes the presence of a negation (the "traces" of negation) and (ii) models lexico-syntactic constraints imposed by negation, such as the modeling of a NPI licensing scope. Using the terminology introduced in section 1, we will probe whether input embeddings encode as target information (i) the presence of not elsewhere in the sentence, and (ii) the polarity of a masked PI. The former focuses on a plain encoding of negation, whereas the latter focuses on whether the encoding of negation can be mobilized to reflect a property (NPI licensing) that is directly imposed by negation. To investigate whether such an encoding matches linguistic notions of scopes, we will contrast results depending on the zone the input token belongs to (among the four zones defined for a not licensing a NPI, namely PRE, PRE-IN, IN, POST) and its distance to not.

We study four PLMs: BERT-base-cased and BERT-large-cased (Devlin et al., 2019), and ROBERTA-base and ROBERTA-large (Liu et al., 2019). All our experiments were done with each of these models, and for a given model, each experiment was repeated three times. All the sentences we used for training, tuning and testing were extracted from the COCA corpus.

### 3.1 Probing for the negation scope

In preliminary experiments, we extend Celikkanat et al. (2020)'s study by investigating the traces of not in the contextual embeddings of all the tokens of a sentence containing not (instead of just the verb, subject and object).

#### 3.1.1 Training neg-classifiers

We train binary classifiers (hereafter the $m$-neg-classifiers, with $m$ the name of the studied PLM) taking an input contextual embedding, and predicting the presence or absence of at least one not in the sentence. We train 3 classifiers for each of the 4 tested PLMs. To train and evaluate these classifiers, we randomly extract 40,000 sentences containing exactly one not, and 40,000 sentences not containing any not. We BERT- and ROBERTA-tokenized these sentences and, for each model, we randomly selected one PLM token in each sentence to serve as input token. For these input tokens, we ignored any token not, plus all PLM tokens associated with a contracted negation: for instance don't is BERT-tokenized into don + ' + t, and ROBERTA-tokenized into don' + t. We ignore all these tokens, as they are too obvious a clue for the presence of a verbal negation. Furthermore, in order to homogenize the handling of negation whether contracted or not, we also set aside any modal or auxiliary that can form a negated contracted form. Hence, in She did leave, She did not leave or She didn't leave, the only candidate input tokens are those for She and leave ${}^{8}$. We use 64k sentences for training (neg-train-sets), and the remaining 16k for testing (neg-test-set).

---

${}^{7}$ https://parser.yzhang.site/en/latest/index.html

---

We provide the obtained accuracies on this neg-test-set in Table 5, which shows that performance is significantly above chance.
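As a concrete illustration, one such probe could look like the following minimal sketch. The software stack (PyTorch plus the Hugging Face transformers library), the Adam optimizer, and the helper names are our assumptions; only the probe topology and learning rate (2 hidden layers of 450 units, lr 0.001, cf. Appendix A) come from the paper.

```python
# Minimal sketch of an m-neg-classifier; assumed stack: PyTorch + Hugging Face
# transformers (the paper does not name its implementation).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
encoder = AutoModel.from_pretrained("roberta-large").eval()

probe = nn.Sequential(                       # 2 hidden layers of 450 units
    nn.Linear(encoder.config.hidden_size, 450), nn.ReLU(),
    nn.Linear(450, 450), nn.ReLU(),
    nn.Linear(450, 2),                       # classes: "not" present / absent
)
optimizer = torch.optim.Adam(probe.parameters(), lr=0.001)  # optimizer: our choice
loss_fn = nn.CrossEntropyLoss()

def input_embedding(sentence: str, token_index: int) -> torch.Tensor:
    """Contextual embedding (last encoder layer) of one input token."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        return encoder(**enc).last_hidden_state[0, token_index]

# One training step on a single example. The input token is "leave":
# candidate tokens exclude "not" and auxiliaries, as described above.
x = input_embedding("She did not leave.", token_index=4)
loss = loss_fn(probe(x.unsqueeze(0)), torch.tensor([1]))  # label 1: "not" present
optimizer.zero_grad(); loss.backward(); optimizer.step()
```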

| Model | ${\mathrm{BERT}}_{b}$ | ${\mathrm{BERT}}_{l}$ | ${\mathrm{ROB}}_{b}$ | ${\mathrm{ROB}}_{l}$ |
| --- | --- | --- | --- | --- |
| Accur. | 74.3 | 73.1 | 72.1 | 76.6 |

Table 5: Accuracies of the neg-classifiers on the neg-test-set for each PLM (averaged over 3 runs).

#### 3.1.2 Studying results on the not+NPI test set

To probe the negation scope, we then use the not+NPI test set (cf. section 2), and compare accuracies in PRE-IN versus PRE, and in IN versus POST.

Note though that distance to not is also likely to impact the classifiers' accuracy. Indeed, by definition the structural zones obviously correlate with distance to not. For instance, a token at distance 3 to the right of not is more likely to be in the licensing scope than a token at distance 20. Hence, to study the impact of the input token's zone, we need to control for distance to the negation cue.

We thus break down our classifiers' accuracy on the not+NPI test set, not only according to the input token's zone, but also according to its relative position to the negation cue. Table 6 shows an example of a not+NPI sentence, and the zone and relative position to not of each token. The target not has position 0, and so do all the PLMs' subword tokens involved in the negation complex, and any preceding modal or auxiliary, to homogenize across PLMs and across contracted/plain negation. By construction, the PRE and PRE-IN zones correspond to negative positions, whereas IN and POST correspond to positive ones.

The break-down by position for ROBERTA-large is shown in Figure 1 (results for other models are in Appendix C). Two effects can be observed for all 4 PLMs: firstly, there is a general decrease of the accuracy when moving away from not, for the four zones. This contrasts with the findings of Klafka and Ettinger (2020), who did not observe a distance effect in their experiments, when probing whether the contextual representation of e.g. a direct object encodes e.g. the animacy of the subject. The decrease is more rapid before not than after it, which remains to be explained. It might come from the negation scope being shorter before not than after it.

Secondly, when looking at fixed relative distances, there is a slight but almost systematic effect that when the input token is in the negation scope (either PRE-IN or IN), the accuracy is higher than when it is outside (PRE and POST) (the differences are statistically significant at $p < 0.001$, cf. Appendix B). This tendency is more marked for the PRE vs. PRE-IN distinction than for the POST vs. IN distinction.

This observation can be summarized by computing the average accuracy gap, namely the accuracy differences averaged across positions (the average of the purple minus pink bars, and of the blue minus green bars, in Figure 1), which provides the average difference when a token is within or outside the negation scope. The average accuracy gaps for the four tested models are given in Table 7. It confirms that input embeddings of tokens inside the negation scope do allow for a slightly better prediction of the presence of not than those outside the scope. Note that the average difference is stable across models, whose size does not seem to matter. It shows that the strength of the encoding of not in contextual representations matches the linguistic notion of negation scope.

---

${}^{8}$ COCA sentences are tokenized and tagged. We detokenized them before BERT/ROBERTA tokenization, in order to get closer to a standard input.
+ +--- + +540 594 + +![01964112-e0c0-72bb-938a-3a202ab2acb8_5_191_162_1284_290_0.jpg](images/01964112-e0c0-72bb-938a-3a202ab2acb8_5_191_162_1284_290_0.jpg) + +Table 6: Example sentence from the not+NPI test set: structural zones and relative positions to not. Any auxiliary or modal preceding the target not has position 0 too, to homogenize contracted and plain negation, and BERT versus ROBERTA's tokenization. + +596 + +597 + +598 + +599 + +541 595 + +546 600 + +551 605 + +553 607 + +556 610 + +![01964112-e0c0-72bb-938a-3a202ab2acb8_5_201_676_1246_537_0.jpg](images/01964112-e0c0-72bb-938a-3a202ab2acb8_5_201_676_1246_537_0.jpg) + +Figure 1: Accuracy of the ROBERTA-large-neg-classifier (average on 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains less than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < {0.001}$ (cf. Appendix B). Figures for the other 3 models are provided in Appendix C. + +608 + +609 + +612 + +615 + +617 + +619 + +620 + +621 + +622 + +623 + +624 + +625 + +626 + +627 + +628 + +

| ${\mathrm{BERT}}_{b}$ | ${\mathrm{BERT}}_{l}$ | ${\mathrm{ROB}}_{b}$ | ${\mathrm{ROB}}_{l}$ |
| --- | --- | --- | --- |
| 3.0 (0.6) | 3.5 (0.2) | 2.6 (0.2) | 2.6 (1.3) |

Table 7: Accuracy gaps for the neg-classifiers on the not+NPI test set, for each tested PLM, averaged over 14 relative positions and 3 runs (stdev within brackets).

We also observe that the biggest difference occurs at position -1. This corresponds mostly to a contrast between a finite vs. non-finite negated verb (neg-patterns 1/2/3 vs. neg-pattern 5 in Table 1), which seems well reflected in PLMs' embeddings.

### 3.2 Probing for the licensing scope

We then focused on whether this encoding of not can actually be mobilized to capture the licensing of a NPI. We built classifiers (hereafter the $m$-pol-classifiers, with $m$ the name of the studied PLM), taking an input contextual embedding, and predicting as target information the polarity of a masked position, originally filled with a positive or negative PI. Importantly, the input embedding in the training set is randomly chosen in the sentence, and can correspond to a position that is or isn't linguistically related to the polarity of the PI (cf. Figure 2). This avoids using linguistic preconceptions while building the classifiers.

We train on sentences originally having either a PPI or a NPI, which we mask before running each studied PLM.

![01964112-e0c0-72bb-938a-3a202ab2acb8_6_206_182_593_205_0.jpg](images/01964112-e0c0-72bb-938a-3a202ab2acb8_6_206_182_593_205_0.jpg)

Figure 2: Illustration of the training of the pol-classifiers.

More precisely, in each COCA subcorpus (each genre), and for each of the 6 NPI/PPI pairs listed by Jumelet and Hupkes (2018) ${}^{9}$, we randomly took at most 2,000 sentences containing the NPI, and the same amount of sentences containing the corresponding PPI ${}^{10}$. In each of these, we masked the PI, randomly selected one token per sentence to serve as input token (excluding the masked position), and split these into 63,529 examples for training (pol-train-set) and 15,883 for testing (pol-test-set).
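For concreteness, building one pol-classifier training example could be sketched as follows. This is our own illustration (assuming the Hugging Face transformers library), and it simplifies the paper's filtering: here only the masked position and the special tokens are excluded from the candidate input tokens.

```python
# Sketch of one pol-classifier training example: mask the polarity item, then
# take the contextual embedding of a randomly chosen other token as probe input.
# Our own illustration; simplified relative to the paper's token filtering.
import random
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased").eval()

def make_example(words, pi_index, polarity):
    """words: pre-tokenized sentence; pi_index: position of the PI in words;
    polarity: 0 for a PPI (some*), 1 for a NPI (any*)."""
    masked = list(words)
    masked[pi_index] = tokenizer.mask_token      # hide the polarity item
    enc = tokenizer(" ".join(masked), return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]
    # Candidate input tokens: everything except [CLS], [SEP] and the mask.
    candidates = [i for i in range(1, hidden.size(0) - 1)
                  if enc.input_ids[0, i].item() != tokenizer.mask_token_id]
    return hidden[random.choice(candidates)], polarity

x, y = make_example("I 'm not going anywhere .".split(), pi_index=4, polarity=1)
```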

| Model | ${\mathrm{BERT}}_{b}$ | ${\mathrm{BERT}}_{l}$ | ${\mathrm{ROB}}_{b}$ | ${\mathrm{ROB}}_{l}$ |
| --- | --- | --- | --- | --- |
| Accur. | 64.2 | 63.7 | 56.6 | 68.6 |

Table 8: Accuracies of the pol-classifiers on the pol-test-set for each PLM (averaged over 3 runs).

Accuracies on the pol-test-set for each PLM are shown in Table 8. While still above chance, we observe that accuracy doesn't exceed 69%, which is considerably lower than the accuracies of the neg-classifiers (Table 5). This is not surprising since the task is more difficult. First, as stressed above, some of the training input tokens are independent, from the linguistic point of view, of the PI's polarity. Second, the cues for predicting the polarity are diverse. And third, in numerous contexts, both polarities are indeed possible, even though not equally likely. We did not control the training for this, on purpose, so as not to introduce any additional bias in the data. We can thus interpret the pol-classifier's scores as how likely a given polarity is.

Next, we applied these classifiers on the not+NPI test set. The objective is to compare the classifiers' accuracy depending on the structural zone the input token belongs to. If PLMs have a notion of licensing scope, then the polarity prediction accuracy should be higher when using an input token from the IN zone.

#### 3.2.1 Results

Once more, we control for the distance of the input embedding to not. The break-down by position and structural zone for ROBERTA-large is provided in Figure 3 (results for other models are in Appendix C).

Again, we observe a general accuracy decrease when moving away from not, and this decrease is faster than for the previous experiment. We also note that the decrease is more rapid in the PRE-IN zone than in the IN zone (for instance, at distance -4 in PRE-IN, the accuracy is less than 70%, whereas it is still above that level at distance 8 in the IN zone). This tends to indicate that the traces of not are more robust in the licensing scope.

Secondly, as for the previous experiment, for each relative position, when the input token is in the negation scope (either PRE-IN or IN), the accuracy is higher than when it is outside (PRE and POST). Even though we cannot exclude that the relatively high overall accuracies may be explained by the classifier catching some regularities of the sentences containing a NPI rather than a PPI (independently of the presence of not), it remains that for the not+NPI sentences, accuracy is higher when the input token is in the negation scope than outside it. Moreover, this trend is much more marked than for the previous experiment.

Thirdly, the amplitude of this observation depends on the model. We provide the accuracy gaps for each PLM in Table 9. We observe that the trend is marked for ROBERTA-large and BERT-base (gaps of 8.7 and 7.4 accuracy points, actually much higher than the accuracy gaps for predicting the presence of not), but lower for ROBERTA-base and BERT-large.

| ${\mathrm{BERT}}_{b}$ | ${\mathrm{BERT}}_{l}$ | ${\mathrm{ROB}}_{b}$ | ${\mathrm{ROB}}_{l}$ |
| --- | --- | --- | --- |
| 7.4 (0.5) | 3.1 (0.4) | 1.4 (0.2) | 8.7 (0.6) |

+ +745 + +Table 9: Accuracy gaps for the pol-classifiers on + +the not+NPI test set, averaged over 14 relative po- 750 sitions and 3 runs (stdev within brackets). + +This leads us to conclude that (i) PLMs do encode structural constraints imposed by not (NPI li- + +censing), but to varying degrees across the PLMs 755 + +--- + +${}^{9}$ (any/some) $\left( {\varnothing /\text{where/one/body/thing/time)}}\right)$ + +${}^{10}$ For any/some(%/one/thing), we took $2 \times {2000}$ occurrences. For any/some(body/time/where), less occurrences were available in some of the subcorpora. We took as many as possible, but keeping a strict balance between NPI and PPI sentences (between $2 \times {169}$ and $2 \times {958}$ depending on the corpus genre and on the NPI/PPI pair). + +--- + +756 810 + +757 811 + +758 812 + +![01964112-e0c0-72bb-938a-3a202ab2acb8_7_206_235_1247_536_0.jpg](images/01964112-e0c0-72bb-938a-3a202ab2acb8_7_206_235_1247_536_0.jpg) + +Figure 3: Accuracy of the ROBERTA-large-pol-classifier (average on 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains less than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < {0.001}$ (cf. Appendix B). + +817 + +818 + +819 + +820 + +822 + +826 + +759 813 + +760 814 + +761 815 + +762 816 + +767 821 + +828 + +831 we tested, and (ii) that this encoding is stronger in the negation scope than outside it, independently of the distance to not. This only partially matches the linguistic expectation that the strongest zone should be the licensing scope rather than the entire negation scope. + +## 4 Conclusion + +In this paper, we studied the way negation and its scope are encoded in contextual representations of PLMs and to what extent this encoding is used to model NPI licensing. + +Classifiers were trained to predict the presence of negation in a sentence from the contextual representation of a random token. We also trained classifiers to predict the polarity of a masked polar item from the contextual representation of a random token. A test set of sentences was designed with not licensing an NPI, inside which we identified the negation scope (roughly the clause), and the licensing scope (roughly the VP). + +For these sentences, we found that the contex- + +804 tual embeddings of tokens within the scope of a negation allow a better prediction of the presence of not. These embedding also allow a better prediction of the (negative) polarity of a masked PI. These results hold even when controlling for the + +809 distance to not. + +833 + +We conclude that the PLMs which were tested indeed encode a notion of negation scope in their + +contextual representations. We could not find 836 however a consistent encoding of the narrower + +(and probably more difficult to define) notion 838 of negative polarity licensing scope. Moreover, + +variation across PLMs remains to be explained 841 through further studies. + +843 + +## References + +Hande Celikkanat, Sami Virpioja, Jörg Tiedemann, and 846 Marianna Apidianaki. 2020. Controlling the Imprint of Passivization and Negation in Contextual- + +ized Representations. In Proceedings of the Third 848 BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 136-148, On- + +line. Association for Computational Linguistics. 851 + +Mark Davies. 2015. 
Corpus of Contemporary American English (COCA).

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Australia. Association for Computational Linguistics.

Allyson Ettinger. 2020. What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models. Transactions of the Association for Computational Linguistics, 8:34-48.

Reto Gubelmann and Siegfried Handschuh. 2022. Context matters: A pragmatic study of PLMs' negation understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4602-4621, Dublin, Ireland. Association for Computational Linguistics.

Vincent Homer. 2020. Negative Polarity, pages 1-39. John Wiley & Sons, Ltd.

Jaap Jumelet and Dieuwke Hupkes. 2018. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222-231, Brussels, Belgium. Association for Computational Linguistics.

Nora Kassner and Hinrich Schütze. 2020. Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811-7818, Online. Association for Computational Linguistics.

Josef Klafka and Allyson Ettinger. 2020. Spying on Your Neighbors: Fine-grained Probing of Contextual Embeddings for Information about Surrounding Words. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4801-4811, Online. Association for Computational Linguistics.

Bingzhi Li, Guillaume Wisniewski, and Benoit Crabbé. 2022. How distributed are distributed representations? An observation on the locality of syntactic information in verb agreement tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 501-507, Dublin, Ireland. Association for Computational Linguistics.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Roser Morante and Eduardo Blanco. 2012. *SEM 2012 Shared Task: Resolving the Scope and Focus of Negation. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 265-274, Montréal, Canada. Association for Computational Linguistics.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.

Henriëtte de Swart. 1998. Licensing of negative polarity items under inverse scope. Lingua, 105(3-4):175-200.

Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2877-2887, Hong Kong, China. Association for Computational Linguistics.

Yu Zhang, Houquan Zhou, and Zhenghua Li. 2020. Fast and Accurate Neural CRF Constituency Parsing. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pages 4046-4053, Yokohama, Japan. International Joint Conferences on Artificial Intelligence Organization.

## A Hyperparameter tuning for the neg-classifiers and the pol-classifiers

The PLMs' contextual representations were obtained using a GeForce RTX 2080 Ti GPU. The neg-classifiers and the pol-classifiers were trained on a CPU, each training taking about 15 minutes. Testing them on the not+NPI test set takes about 5 minutes.

To tune these classifiers, we performed a grid search with: the number of hidden layers in [1, 2], the number of units in each layer in [20, 50, 100, 450, 1000], and the learning rate in [1, 0.1, 0.01, 0.001].

We selected a learning rate of 0.001 and 2 hidden layers, with size 450 each, based on the accuracies on the neg-test-set and the pol-test-set. Except when the learning rate equaled 1, all hyperparameter combinations resulted in similar performance (less than 1 point of accuracy in the results of Figure 3).

The code and methodology were developed first using the BERT-base model, and then applied to the other models. Including code and methodology development, we estimate that the experiments reported in this paper correspond to a total of 160 hours of GPU computing.

## B Statistical significance test

In this section we detail the test performed to assess the statistical significance of the accuracy differences illustrated in Figures 3 and 5.

For each of the four tested PLMs, and for each of 3 runs of classifier training,

- for each position from -8 to -1 relative to the not,

  - we compare the accuracy of the pol-classifier in the PRE-IN zone versus in the PRE zone (i.e. the difference between the purple bar with respect to the pink one),

  - namely, we test the statistical significance of the following positive difference: accuracy for tokens in the PRE-IN zone minus accuracy for tokens in the PRE zone.
- for each position from 3 to 8,

  - we test the statistical significance of the following positive difference: accuracy for tokens in the IN zone minus accuracy for tokens in the POST zone (i.e. the difference between the blue bar with respect to the green one).

Each test is an approximate Fisher-Pitman permutation test (with 5000 random permutations, performed using the script of Dror et al. (2018), https://github.com/rtmdrr/testSignificanceNLP.git), and all the differences listed above result as statistically significant at $p < 0.001$.

## C Accuracies of the classifiers on the not+NPI test set

The break-downs by position for the three models not presented in the main text (BERT-base, BERT-large and ROBERTA-base) are provided in Figures 4 (neg-classifiers) and 5 (pol-classifiers).
+ +1191 1245 + +1192 1246 + +1193 1247 + +1194 1248 + +1195 1249 + +1196 1250 + +1197 1251 + +1198 1252 + +1199 1253 + +1200 1254 + +1201 1255 + +1202 1256 + +1203 1257 + +1204 1258 + +1205 1259 + +1206 1260 + +1207 1261 + +1208 1262 + +1209 1263 + +1210 1264 + +1211 1265 + +1212 1266 + +1213 1267 + +1214 1268 + +1215 1269 + +1216 1270 + +1217 1271 + +1218 1272 + +1219 1273 + +1220 1274 + +1221 1275 + +1222 1276 + +1223 1277 + +1224 1278 + +1225 1279 + +1226 1280 + +1227 1281 + +1228 1282 + +1229 1283 + +1230 1284 + +1231 1285 + +1232 1286 + +1233 1287 + +1234 1288 + +1235 1289 + +1236 1290 + +1237 1291 + +1238 1292 + +1239 1293 + +1240 1294 + +1241 1295 \ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..76d334c719ac61f1f44060c856de8bfd3dd32d0f --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_7VPETQwnPX/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,587 @@ +000 054 + +§ PROBING STRUCTURAL CONSTRAINTS OF NEGATION IN PRETRAINED LANGUAGE MODELS + +001 055 + +056 + +Anonymous Author + +Affiliation / Address line 1 + +Affiliation / Address line 2 + +Affiliation / Address line 3 email@domain + +Anonymouser Author + +Affiliation / Address line 1 + +Affiliation / Address line 2 + +Affiliation / Address line 3 + +email@domain + +Anonymousest Author 057 + +Affiliation / Address line 1 058 + +Affiliation / Address line 2 059 060 Affiliation / Address line 3 + +email@domain 062 + +§ ABSTRACT + +Contradictory results about the encoding of the semantic impact of negation in pretrained language models (PLMs) have been drawn recently (e.g. Kassner and Schütze (2020); Gubelmann and Hand- + +018 schuh (2022)). + +In this paper we focus rather on the way + +021 PLMs encode negation and its formal impact, through the phenomenon of the Neg- + +023 ative Polarity Item (NPI) licensing in English. More precisely, we use probes to identify which contextual representations + +026 best encode 1) the presence of negation in a sentence, and 2) the polarity of a neigh- + +028 boring masked polarity item. + +We find that contextual representations of tokens inside the negation scope do allow + +031 for (i) a better prediction of the presence + +033 of not compared to those outside the scope and (ii) a better prediction of the right polarity of a masked polarity item licensed by not, although the magnitude of the difference varies from PLM to PLM. Impor- + +038 tantly, in both cases the trend holds even when controlling for distance to not. + +We thus confirm that the embeddings of these models do reflect the notion of negation scope, and do encode the impact of negation on NPI licensing. The subtle difference between licensing scope and negation scope, however, does not seem to be captured. + +§ 1 INTRODUCTION + +Negation has recently been the focus of various works aiming at determining the abilities of Pre-trained Language Models (PLMs) to capture linguistic knowledge. + +Some works investigate the 'semantic impact' 065 of negation, namely its impact in terms of truth + +values, by interpreting how the presence of nega- 067 tion impacts the probability distribution at a masked position. 
The rationale is that negating a + +verb reverses the truth value of its clause, which 070 should be reflected in the probability distribution + +at certain positions. Ettinger (2020); Kassner and 072 Schütze (2020) use factual statements such as (1), + +and report that models output similar distributions 075 for the positive and negative variants of (1), and + +conclude that models largely ignore negation. 077 + +§ (1) A ROBIN IS (NOT) A [MASK] + +080 + +Gubelmann and Handschuh (2022) chose to + +avoid factual statements and focus rather on multi- 082 sentence self-contained examples, such that, given the context provided by the first sentence, one par- + +ticular word is either likely (in positive items) or 085 ruled out (in negative items) at a masked posi- + +tion in the second sentence. Because this partic- 087 ular word is substantially less often the top-1 prediction in the negative items than in the positive + +items, the authors draw the opposite conclusion 090 that PLMs do show sensitivity to negation. + +A different line of works focused on finding out 092 to what extent negation is encoded in PLM embed-dings. Celikkanat et al. (2020) train classifiers taking as input the contextual embedding of a verb or + +its subject or direct object, and predicting whether 097 the verb is negated or not. The resulting high accuracy allows them to conclude that these tokens' embeddings do contain "traces" of not. More generally, several authors have investigated whether the contextual representation of a token encodes information about surrounding tokens. To ease further reading, we will talk of a classifier taking as input an input embedding, namely the contextual representation of an input token, and predict- + +ing some target information about another token 107 in the sentence. For instance, Klafka and Ettinger (2020) study how input embeddings encode ani-macy, gender, and number of surrounding words in a specific SVO context. Li et al. (2022) target the number feature of French participles in the context of object-past participle agreement. They show that the performance of the classifier depends on the syntactic position of the input token in the sentence. We will build on their idea to compare performance at predicting target information depending on the syntactic zone the input token belongs to. + +In this paper, we focus on how the information about negation encoded in contextual embeddings is used. Our aim is to study PLMs' ability to capture and encode structural information concerning negation (namely negation scope), and also their ability to actually mobilize the encoding in order to capture phenomena that are direct consequences of the presence of negation. To do so, we focus on the licensing of Negative Polarity Items (NPI) by not modifying a verb. Polarity Items (PI), either positive (e.g. some), or negative (e.g. any), are words or expressions that are constrained in their distribution (Homer, 2020). A NPI will require that a word or a construction, called the licensor, be in the vicinity. And the licensor itself grammatically defines a zone of the sentence, called the licensing scope, in which the NPI can appear. The adverb not modifying a verb is one such licensor. While any is licensed by negation in (2-a) vs. (2-b), it is not licensed in (2-c), even though the verb is negated, arguably because it is not in the licensing scope ${}^{1}$ . + +(2) a. Sam didn't find any books. + +b. *Sam found any books. 
+ +Jumelet and Hupkes (2018) have shown that LSTM embeddings do encode the notion of licensing scope (given an input embedding, a classifier can predict the structural zone the input token belongs to), a finding later confirmed for transformer-based PLMs (Warstadt et al., 2019). Focusing on when the licensor is a verb-modifying not, we rather investigate whether this demonstrated encoding of the zones go as far as enabling a better prediction of a PI's polarity from inside the licensing scope compared to outside the scope. So instead of the question "Is this input embed- + +ding the embedding of a token that is within, be- 162 + +fore or after the licensing scope?", we rather ask 163 the question "Given a masked PI position, and an input embedding of a neighboring token, what is the polarity of the PI?", and we study whether this question is better answered when the input embedding is inside or outside the licensing or negation scopes. + +Note that our methodology differs from that of Jumelet and Hupkes (2018), who, given an input token, predict the zone this token belongs to. We instead predict the polarity of a neighboring masked polarity item and then compare accuracies depending on the input token's zone. Our motivation is that the polarity, being a lexical information, requires less linguistic preconception, and hence our probing method is a more direct translation of the NPI licensing phenomenon: we study whether and where the information of "which PIs are licit where?" is encoded, in the context of sentence negation. This method also allows us to better control the confounding factor of distance between the input embedding and the licensor not. + +In the following we start in section 2 by defining the linguistic notions of negation scope and NPI licensing scope, and by showing how we actually identified them in English sentences. In section 3, we define our probing experiments and discuss their results, both for the encoding of not (section 3.1), and the encoding of NPI licensing (section 3.2). We conclude in section 4. + +§ 2 DEFINING AND IDENTIFYING SCOPES + +195 + +§ 2.1 NEGATION SCOPE + +From a linguistic point of view, the scope of a negation cue is the area of the sentence whose propositional content's truth value is reversed by the presence of the cue. While in many cases it is sufficient to use the syntactic structure to recover the scope, in some cases semantics or even pragmatics come into play. ${}^{2}$ Nevertheless, annotation guidelines usually offer syntactic approximations of negation scope. + +To identify the negation scope for a not ${}^{3}$ modifying a verb, we followed the syntactic constraints that emerge from the guidelines of Morante and Blanco (2012). Note though that these guide- + +215 + +${}^{2}$ For instance in Kim did not go to the party because Bob was there., negation may scope only over the matrix clause or include the causal subordinate clause. + +${}^{3}$ In all this article, not stands for either not or $n$ ’t. + +${}^{1}$ We leave aside the uses of any and the like having free choice interpretations, as for instance in "Pick any card". + + < g r a p h i c s > + +Table 1: The "neg-patterns": patterns adapted from Jumelet and Hupkes (2018), which we used to identify some cases of not licensing a NPI and to build the not+NPI test set. Col1: pattern id in Jumelet and Hupkes (2018). Col2: syntactic pattern (defined as a phrase-structure subtree, using the Penn Treebank's annotation scheme), with the licensing scope appearing in blue. 
Col3: examples with colors for the four zones: pink for tokens in the PRE zone (before both scopes), purple for PRE-IN (to the left of the licensing scope, but within the negation scope), blue for IN (within both scopes) and green for POST (after both scopes). The NPI licensor is not, and appears in yellow. + +270 + +271 + +276 + +281 lines restrict the annotation to factual eventualities, leaving aside e.g. negated future verbs. We did not retain such a restriction, hence our identification of the negation scope is independent from verb tense or modality. + +§ 2.2 NPI LICENSING SCOPE + +Polarity items are a notoriously complex phenomenon. To identify the NPI licensing scope, we focus on specific syntactic patterns defined by Jumelet and Hupkes (2018), retaining only those involving not as licensor. ${}^{4}$ Table 1 shows an example for each retained pattern (hereafter the neg-patterns), with the NPI licensing scope in blue. + +Importantly, in the neg-patterns, the licensing scope is strictly included in the negation scope: within the clause of the negated verb, the tokens to its left belong to the negation scope but not to the licensing scope. E.g. in (3), anyone is not licit as a subject of going, whether the location argument is itself a plain PP, a NPI or a PPI (3-b). + +(3) a. I'm not going anywhere. + +b. *Anyone is not going to the party/ somewhere/anywhere. + +We thus defined 4 zones for the not+NPI sentences, exemplified in Table 1: PRE (tokens be- + +259 fore both scopes), PRE-IN (to the left of the licensing scope, but within the negation scope), IN (in both scopes), and POST (after both scopes). + +We note though that the restriction exemplified in (3-b) only holds for non-embedded NPIs (de Swart, 1998), so examples like (4), with an embedded NPI in the subject of the negated verb + +269 + +283 (hence belonging to our PRE-IN zone), are theoretically possible. + +286 + +§ (4) EXAMPLES WITH ANY RELEVANCE TO THAT ISSUE DIDN'T COME UP IN THE DISCUSSION. + +288 + +Yet in practice, we found that they are ex- + +tremely rare: using the Corpus of Contempo- 291 rary American English (COCA, Davies 2015) ${}^{5}$ , we extracted sentences matching one of the neg-patterns, and among these, sentences having any or any-body/one/thing/time/where in the IN zone, + +the PRE-IN zone or both. As shown in Table 2, 296 any* in the PRE-IN zone are way rarer than in the + +classical licensing scope (IN zone) ${}^{6}$ . Hence we 298 sticked to the usual notion of direct NPI licensing scope, as illustrated in Table 1. + +301 + +max width= + +Total IN PRE-IN both + +1-4 +45,157 35,938 711 58 + +1-4 + +303 + +Table 2: Number of sentences from the COCA + +corpus, matching the neg-patterns of Table 1: 306 Col1: total number, Col2-4: number having a + +any* in the IN zone, the PRE-IN zone, and in both 308 zones respectively. + +313 + +318 + +323 + +${}^{5}$ We used a version with texts from 1990 to 2012. COCA is distributed with some tokens in some sentences voluntarily masked, varying across distributions. We ignored such sentences. + +${}^{6}$ More precisely, the figures in Table 2 correspond to an upper bound, because of (i) potential syntactic parsing errors impacting the identification of the zones, (ii) cases in which the NPI licensor is different from the not targeted by the patterns, and (iii) cases in which the any* is a free choice item and not a NPI (as in "Pick any one"). We inspected 250 examples of any* in the PRE-IN zone, and 250 examples in the IN zone. 
In the former, we found that almost all cases fall under (i), (ii) or (iii), less than 3% corresponding to examples such as (4)). In contrast, in the IN zone the proportion of NPIs actually licensed by the target not is ${92}\%$ . + +${}^{4}$ We ignored pattern 4 (never instead of not as licensor), and 6 (too few occurrences in our data). We merged patterns 1 and 2, and corrected an obvious minor error in pattern 5 . + +§ 2.3 BUILDING THE NOT+NPI TEST SET + +Having defined these structural zones, we can use them to probe the traces they carry and compare the magnitude of these traces across the four zones. To do so, we built a test set of COCA sentences containing a not licensing a NPI (hereafter the not+NPI test set), matching one of the neg-patterns of Table 1, and having at least one any, anybody, anyone, anything, anytime or anywhere within the licensing scope. + +The scope of negation has been implemented through an approximation using dependency parses (from the Stanza parser (Qi et al., 2020)), which proved more convenient than phrase-structure parses: we took the subtree of the negated verb, excluding not itself, and excluding dependents corresponding to sentential or verbal conjuncts and to sentential parentheticals. + +More precisely, we identified the token having not as dependent (which, given our patterns, can be either the negated verb or a predicative adjective in case of a negated copula). Then, we retrieved the children of this head, except those attached to it with a "conj", "parataxis", "mark" or "discourse" dependency. In the complete subtrees of the selected dependents, all tokens were annotated as being inside the negation scope. + +max width= + +Genre Mag Acad Fict News Total + +1-6 +#with not 537 383 830 536 2285 + +1-6 +#and a NPI 31 21 58 34 143 + +1-6 + +Table 3: Thousands of sentences in COCA: Line 1: containing a not. Line 2: containing a not and at least one NPI (among any- $\varnothing /$ body/one/where/time/thing), anywhere in the sentence. + +362 + +For the licensing scope, we parsed the corpus using the PTB-style parser "Supar Parser"' of Zhang et al. (2020), and further retained only the + +367 sentences (i) matching the neg-patterns of Table 1 and (ii) having a NPI within the licensing scope (IN zone, shown in blue in Table 1). + +We finally obtained a not+NPI test set, whose statistics are provided in Table 4. + +§ 3 PROBING FOR THE SCOPES + +Our objective is to study how a transformer-based PLM (i) encodes the presence of a negation + +max width= + +$\mathbf{{Pattern}}$ Mag Acad Fict News Total + +1-6 +1/2 6.56 1.69 16.49 6.16 30.90 + +1-6 +3 0.57 0.14 1.33 0.49 2.53 + +1-6 +5* 0.22 0.08 0.58 0.15 1.02 + +1-6 + +Table 4: Statistics of the not+NPI test set: thousands of COCA sentences matching the neg-patterns (cf. Table 1), and having at least one any* in the IN zone (licensing scope), broken down by corpus genre. + +378 + +379 + +380 + +381 + +384 + +389 (the "traces" of negation) and (ii) models lexico- + +syntactic constraints imposed by negation, such as 391 the modeling of a NPI licensing scope. Using the terminology introduced in section 1, we will probe + +whether input embeddings encode as target infor- 394 mation (i) the presence of not elsewhere in the sen- + +tence, and (ii) the polarity of a masked PI. The 396 former focuses on a plain encoding of negation, whereas the latter focuses on whether the encoding of negation can be mobilized to reflect a property (NPI licensing) that is directly imposed by negation. 
To investigate whether such an encoding matches linguistic notions of scopes, we will contrast results depending on the zone the input token belongs to (among the four zones defined for a not + +licensing a NPI, namely PRE, PRE-IN, IN, POST) 406 and its distance to not. + +We study four PLMs: BERT-base-case, BERT- + +large-case (Devlin et al., 2019) and ROBERTA- 409 base and ROBERTA-large (Liu et al., 2019). All + +our experiments were done with each of these 411 models, and for a given model, each experiment was repeated three times. All the sentences we used for training, tuning and testing were extracted from the COCA corpus. + +416 + +§ 3.1 PROBING FOR THE NEGATION SCOPE + +In preliminary experiments, we extend Celikkanat et al. (2020)'s study by investigating the traces of + +not in the contextual embedding of all the tokens 421 of a sentence containing not (instead of just the verb, subject and object). + +§ 3.1.1 TRAINING NEG-CLASSIFIERS + +We train binary classifiers (hereafter the m-neg- 426 classifiers, with $m$ the name of the studied PLM) taking an input contextual embedding, and predicting the presence or absence of at least one + +not in the sentence. We train 3 classifiers for 430 + +each of the 4 tested PLMs. To train and evalu- 431 ate these classifiers, we randomly extract 40,000 sentences containing exactly one not, and 40,000 sentences not containing any not. We BERT- and ROBERTA-tokenized these sentences and for each model, we randomly selected one PLM token in each sentence to serve as input token. For these input tokens, we ignored any token not, plus all PLM tokens associated to a contracted negation: for instance don’t is BERT-tokenized into don $+ {}^{\prime } + t$ , and ROBERTA-tokenized into don’ + t. We ignore all these tokens, as they are too obvious a clue for the presence of a verbal negation. Furthermore, in order to homogenize the handling of negation whether contracted or not, we also set aside any modal or auxiliary that can form a negated contracted form. Hence, in She did leave, She did not leave or She didn't leave, the only candidate input tokens are those for She and leave ${}^{8}$ . We use ${64}\mathrm{k}$ sentences for training (neg-train-sets), and the remaining ${16}\mathrm{k}$ for testing (neg-test-set). + +${}^{7}$ https://parser.yzhang.site/en/latest/index.html + +We provide the obtained accuracies on this neg-test-set in Table 5, which shows that performance is significantly above chance. + +max width= + +Model ${\mathrm{{BERT}}}_{b}$ ${\mathrm{{BERT}}}_{l}$ ROB. $b$ ROB. ${}_{l}$ + +1-5 +Accur. 74.3 73.1 72.1 76.6 + +1-5 + +Table 5: Accuracies of the neg-classifiers on the neg-test-set for each PLM (averaged over 3 runs). + +§ 3.1.2 STUDYING RESULTS ON THE NOT+NPI TEST SET + +To probe the negation scope, we then use the not+NPI test set (cf. section 2), and compare accuracies in PRE-IN versus PRE, and in IN versus POST. + +Note though that distance to not is also likely to impact the classifiers' accuracy. Indeed, by definition the structural zones obviously correlate with distance to not. For instance, a token at distance 3 to the right of not is more likely to be in the licensing scope than a token at distance 20 . Hence, to study the impact of the input token's zone, we need to control for distance to the negation clue. + +We thus break down our classifiers' accuracy on the not $+ \mathrm{{NPI}}$ test set, not only according to the input token's zone, but also according to its relative position to the negation cue. 
Table 6 shows an example of not+NPI sentence, and the zone and + +relative position to not of each token. The target 486 + +not has position 0, and so do all the PLMs' sub- 487 word tokens involved in the negation complex, and all preceding modal or auxiliary, to homogenize across PLMs and across contracted/plain negation. By construction, the PRE and PRE-IN zones + +correspond to negative positions, whereas IN and 492 POST correspond to positive ones. + +The break-down by position for ROBERTA- + +large is shown in Figure 1 (results for other models 497 are in Appendix C). Two effects can be observed, + +for all the 4 PLMs: firstly, there is a general de- 499 crease of the accuracy as moving away from not, for the four zones. This contrasts with the findings + +of Klafka and Ettinger (2020), who did not ob- 502 serve a distance effect in their experiments, when probing whether the contextual representation of e.g. a direct object encodes e.g. the animacy of the subject. The decrease is more rapid before not than after it, which remains to be explained. It might come from the negation scope being shorter before not than after it. + +Secondly, when looking at fixed relative distances, there is a slight but almost systematic effect that when the input token is in the negation scope (either PRE-IN or IN), the accuracy is higher than when it is outside (PRE and POST) (the differences are statistically significant at $p <$ 0.001, cf. Appendix B). This tendency is more marked for the PRE vs. PRE-IN distinction than for the POST vs. IN distinction. + +522 + +This observation can be summarized by com- + +puting the average accuracy gap, namely the ac- 524 curacy differences averaged across positions (the average of the purple minus pink bars, and of blue minus green bars in Figure 3), which provide an average difference when a token is within or outside the negation scope. The average accuracy gaps for the four tested models are given in Table 7. It confirms that input embeddings of tokens inside the negation scope do allow for a slightly better prediction of the presence of not than those outside the scope. Note that the average difference is stable across models, whose size does not seem to matter. It shows that the strength of the encoding of not in contextual representations matches + +the linguistic notion of negation scope. 539 + +${}^{8}$ COCA sentences are tokenized and tagged. We detok-enized them before BERT/ROBERTA tokenization, in order to get closer to a standard input. + +540 594 + + < g r a p h i c s > + +Table 6: Example sentence from the not+NPI test set: structural zones and relative positions to not. Any auxiliary or modal preceding the target not has position 0 too, to homogenize contracted and plain negation, and BERT versus ROBERTA's tokenization. + +596 + +597 + +598 + +599 + +541 595 + +546 600 + +551 605 + +553 607 + +556 610 + + < g r a p h i c s > + +Figure 1: Accuracy of the ROBERTA-large-neg-classifier (average on 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains less than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < {0.001}$ (cf. Appendix B). Figures for the other 3 models are provided in Appendix C. 
+ +608 + +609 + +612 + +615 + +617 + +619 + +620 + +621 + +622 + +623 + +624 + +625 + +626 + +627 + +628 + +max width= + +${\mathrm{{BERT}}}_{b}$ ${\mathrm{{BERT}}}_{l}$ ${\mathrm{{ROB}}}_{b}$ ${\mathrm{{ROB}}}_{l}$ + +1-4 +3.0 (0.6) 3.5 (0.2) 2.6 (0.2) 2.6 (1.3) + +1-4 + +Table 7: Accuracy gaps for the neg-classifiers on the not+NPI test set, for each tested PLM, averaged over 14 relative positions and 3 runs (stdev within brackets). + +583 + +588 We also observe that the biggest difference occurs at position -1 . This corresponds mostly to a contrast between a finite vs. non-finite negated verb (neg-patterns $1/2/3$ vs. neg-pattern 5 in Table 1), which seems well reflected in PLMs' em- + +593 beddings. + +629 + +§ 3.2 PROBING FOR THE LICENSING SCOPE + +630 + +631 + +We then focused on whether this encoding of not 632 + +can actually be mobilized to capture the licens- 633 + +ing of a NPI. We built classifiers (hereafter the 634 + +$m$ -pol-classifiers, with $m$ the name of the studied 635 + +PLM), taking an input contextual embedding, and 636 + +predicting as target information the polarity of a 637 + +masked position, originally filled with a positive 638 or negative PI. Importantly, the input embedding in the training set is randomly chosen in the sen- + +tence, and can correspond to a position that is or 642 isn't linguistically related to the polarity of the PI (cf. figure 2). This avoids using linguistic preconceptions while building the classifiers. + +We train on sentences originally having either a 646 + +PPI or a NPI, which we mask before running each 647 + +648 + + < g r a p h i c s > + +Figure 2: Illustration of the training of the pol-classifiers. + +649 + +654 studied PLM. More precisely, in each COCA sub-corpus (each genre), and for each of the 6 NPI/PPI pairs listed by Jumelet and Hupkes ${\left( {2018}\right) }^{9}$ , we randomly took at most 2,000 sentences containing the NPI, and the same amount of sentences con- + +664 taining the corresponding ${\mathrm{{PPI}}}^{10}$ . In each of these, we masked the PI, randomly selected one token per sentence to serve as input token (excluding the masked position) and split these into 63,529 examples for training (pol-train-set) and 15,883 for testing (pol-test-set). + +max width= + +Model ${\mathrm{{BERT}}}_{b}$ ${\mathrm{{BERT}}}_{l}$ ROB. $b$ ROB. ${}_{l}$ + +1-5 +Accur. 64.2 63.7 56.6 68.6 + +1-5 + +Table 8: Accuracies of the pol-classifiers on the pol-test-set for each PLM (averaged over 3 runs). + +Accuracies on the pol-test-set for each PLM are shown in Table 8. While still above chance, we observe that it doesn’t exceed ${69}\%$ , which is quite lower than the accuracies of the neg-classifiers (Table 5). This is not surprising since the task is more difficult. First, as stressed above, some of the training input tokens are independent, from the linguistic point of view, of the PI's polarity. Second, the cues for predicting the polarity are + +686 diverse. And third, in numerous contexts, both polarities are indeed possible, even though not equally likely. We did not control the training for this, on purpose not to introduce any additional + +691 bias in the data. We can thus interpret the pol- classifier's scores as how likely a given polarity is. + +Next, we applied these classifiers on the not+NPI test set. The objective is to compare the classifiers' accuracy depending on the structural + +701 + +zone the input token belongs to. 
Next, we applied these classifiers to the not+NPI test set. The objective is to compare the classifiers' accuracy depending on the structural zone the input token belongs to. If PLMs have a notion of licensing scope, then the polarity prediction should be higher when using an input token from the IN zone.

#### 3.2.1 Results

Once more, we control for the distance of the input embedding to not. The break-down by position and structural zone for ROBERTA-large is provided in Figure 3 (results for other models are in Appendix C).

Again, we observe a general accuracy decrease as we move away from not, and this decrease is faster than for the previous experiment. We also note that the decrease is more rapid in the PRE-IN zone than in the IN zone (for instance, at distance -4 in PRE-IN the accuracy is less than 70%, whereas it is still above that level at distance 8 in the IN zone). This tends to indicate that the traces of not are more robust in the licensing scope.

Secondly, as for the previous experiment, for each relative position, when the input token is in the negation scope (either PRE-IN or IN), the accuracy is higher than when it is outside (PRE and POST). Even though we cannot exclude that the relatively high overall accuracies may be explained by the classifier catching some regularities of the sentences containing an NPI rather than a PPI (independently of the presence of not), it remains that for the not+NPI sentences, accuracy is higher when the input token is in the negation scope than outside it. Moreover, this trend is much more marked than for the previous experiment.

Thirdly, the amplitude of this observation depends on the model. We provide the accuracy gaps for each PLM in Table 9. We observe that the trend is marked for ROBERTA-large and BERT-base (gaps of 8.7 and 7.4 accuracy points, actually much higher than the accuracy gaps for predicting the presence of not), but lower for ROBERTA-base and BERT-large.

| ${\mathrm{BERT}}_{b}$ | ${\mathrm{BERT}}_{l}$ | ${\mathrm{ROB}}_{b}$ | ${\mathrm{ROB}}_{l}$ |
|---|---|---|---|
| 7.4 (0.5) | 3.1 (0.4) | 1.4 (0.2) | 8.7 (0.6) |

Table 9: Accuracy gaps for the pol-classifiers on the not+NPI test set, averaged over 14 relative positions and 3 runs (stdev within brackets).

${}^{9}$ (any/some)($\varnothing$/where/one/body/thing/time)

${}^{10}$ For any/some($\varnothing$/one/thing), we took $2 \times 2000$ occurrences. For any/some(body/time/where), fewer occurrences were available in some of the subcorpora. We took as many as possible, while keeping a strict balance between NPI and PPI sentences (between $2 \times 169$ and $2 \times 958$ depending on the corpus genre and on the NPI/PPI pair).

< g r a p h i c s >

Figure 3: Accuracy of the ROBERTA-large pol-classifier (average over 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains fewer than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at $p < 0.001$ (cf. Appendix B).

This leads us to conclude (i) that PLMs do encode structural constraints imposed by not (NPI licensing), but to varying degrees across the PLMs we tested, and (ii) that this encoding is stronger in the negation scope than outside it, independently of the distance to not. This only partially matches the linguistic expectation that the strongest zone should be the licensing scope rather than the entire negation scope.
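For concreteness, the zone and relative-position book-keeping of Table 6 used throughout these experiments can be sketched as follows; the indices in the example are hypothetical, and the code only roughly follows the paper's scope definitions:

```python
def zones_and_positions(tokens, neg_complex, scope_start, scope_end):
    """Relative position to "not" and structural zone for each token.

    neg_complex: indices of "not" (or its subword pieces) plus any preceding
    auxiliary or modal, all assigned position 0; [scope_start, scope_end)
    is the negation scope (roughly the clause).
    """
    first, last = min(neg_complex), max(neg_complex)
    annotated = []
    for i, tok in enumerate(tokens):
        pos = 0 if i in neg_complex else (i - first if i < first else i - last)
        if i < scope_start:
            zone = "PRE"
        elif i < first:
            zone = "PRE-IN"
        elif i < scope_end:
            zone = "IN"  # after "not", inside the scope: the licensing zone
        else:
            zone = "POST"
        annotated.append((tok, pos, zone))
    return annotated

# Hypothetical example: "Unfortunately she did not say anything ."
print(zones_and_positions(
    ["Unfortunately", "she", "did", "not", "say", "anything", "."],
    neg_complex={2, 3}, scope_start=1, scope_end=6))
```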
## 4 Conclusion

In this paper, we studied the way negation and its scope are encoded in the contextual representations of PLMs, and to what extent this encoding is used to model NPI licensing.

Classifiers were trained to predict the presence of negation in a sentence from the contextual representation of a random token. We also trained classifiers to predict the polarity of a masked polar item from the contextual representation of a random token. A test set of sentences was designed with not licensing an NPI, inside which we identified the negation scope (roughly the clause) and the licensing scope (roughly the VP).

For these sentences, we found that the contextual embeddings of tokens within the scope of a negation allow a better prediction of the presence of not. These embeddings also allow a better prediction of the (negative) polarity of a masked PI. These results hold even when controlling for the distance to not.

We conclude that the PLMs which were tested indeed encode a notion of negation scope in their contextual representations. However, we could not find a consistent encoding of the narrower (and probably more difficult to define) notion of negative polarity licensing scope. Moreover, variation across PLMs remains to be explained through further studies.

\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..33ced55c18f8322b84657481b995b40336349900
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,847 @@

# Length Dependence of Vocabulary Richness

Anonymous Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymouser Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymousest Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

## Abstract

The relation between the length of a text and the number of unique words is investigated using several Swedish-language corpora. We consider a number of existing measures of vocabulary richness, show that they are not length-independent, and try to improve on some of them based on statistical evidence. We also look at the spectrum of values over text lengths, and find that genres have characteristic shapes.

## 1 Introduction

Measures of lexical richness have several uses, including author identification, other forms of text classification, and estimating how difficult a text is. One of the simplest and most obvious measures of lexical richness is to compare the size of the vocabulary (that is, how many different words) to the size of the text (how many words in total). This can be done in several ways, most straightforwardly as the type-token ratio (henceforth TTR), $u/n$, where $u$ is the number of unique words (types) and $n$ is the total number of words (tokens).
Thus, for the sentence "this example is this example", there are three types and five tokens, so TTR is $u/n = 3/5 = 0.6$.

The obvious problem with TTR is that it changes with the length of the text. As we write a text, the more words we have already written, the more likely it is that the next word will be one that has already been used, so TTR goes down as the text grows longer. Many attempts have been made to transform this measure into something independent of the length of the text, but many of those attempts were made in an age before "big data", or even before computers, and were based on a priori reasoning rather than statistical analysis (Tweedie and Baayen, 1998).

We will start by looking at some of these measures, and test them on a set of corpora from Språkbanken to see how they hold up for a wide range of different $n$. After comparing some of the previous methods, we will briefly look into using the empirical data to come up with a better suggestion. The results give rise to another question: What if, instead of aiming for a length-independent measure, we consider how the values change with the length? Can that actually tell us new and interesting things?

We find that if we analyse the type count for different sample lengths, we see clear and consistent differences between different types of text. This may be useful for genre classification, or for a more detailed description of the complexity of the text.

Although these measures are usually applied to specific texts, we here apply them to entire corpora. We will discuss the effects of this after seeing the results.

## 2 Data

Språkbanken (the Swedish Language Bank) at the University of Gothenburg (spraakbanken.gu.se) has a large collection of text corpora, mainly in Swedish but including several other languages. In this study, we use Swedish texts, focusing on large and homogeneous corpora.

We extract the type count $u$ for several different lengths $n$. For each $n$, we divide the corpus into chunks of length $n$, dropping any overflow at the end, and take the mean value of $u$ over these chunks. (In some cases we remove the last value for being an outlier; presumably this is because it is the only value where a large part of the data is dropped due to overflow.) We use a pseudo-logarithmic scale for ease of reading, extracting values for $n = {10},{20},{50},{100},{200},{500},{1000}\ldots$ up to the maximum possible for each corpus; the largest go up to 500 million tokens.

## 3 Testing existing measures

First of all, we can test and verify that TTR does go down. Figure 1 shows TTR for 31 corpora.

![01964102-9f53-7400-8878-6a8bcbbc33a9_1_163_376_648_482_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_1_163_376_648_482_0.jpg)

Figure 1: Type-token ratio

It seems likely that, as we compare corpora of different sizes, the effects of size changes might be best described in terms of multiplicative rather than additive changes, so we might try looking at the logarithms of $n$ and $u$. We see in Figure 2 that the result looks fairly close to a straight line.
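All the curves in this section are built with the extraction procedure of Section 2; a minimal sketch, assuming a whitespace-tokenized corpus (the file name is hypothetical):

```python
import numpy as np

def mean_type_count(tokens, n):
    """Mean number of types over consecutive chunks of n tokens,
    dropping any overflow at the end of the corpus."""
    chunks = [tokens[i:i + n] for i in range(0, len(tokens) - n + 1, n)]
    return float(np.mean([len(set(c)) for c in chunks]))

def sample_lengths(n_max):
    """Pseudo-logarithmic scale: 10, 20, 50, 100, 200, 500, 1000, ..."""
    return [m * 10**e for e in range(1, 10) for m in (1, 2, 5) if m * 10**e <= n_max]

# Hypothetical usage:
# tokens = open("corpus.txt").read().split()
# curve = {n: mean_type_count(tokens, n) for n in sample_lengths(len(tokens))}
```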
+ +![01964102-9f53-7400-8878-6a8bcbbc33a9_1_165_1281_653_484_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_1_165_1281_653_484_0.jpg) + +Figure 2: Type count + +151 + +The first obvious method, then, is to assume that this is indeed a straight line, and use the slope of that line as our presumed length-independent measure of richness, that is, $\log u/\log n$ . This was proposed by Herdan (1964). We see in Figure 3 + +161 that the measure is decreasing quite steadily for + +all the texts. The six corpora used here are chosen 162 + +partly for being large, and partly for having large 163 + +differences in type count; many other corpora are 164 + +not nearly as well separated. 165 + +166 + +167 + +![01964102-9f53-7400-8878-6a8bcbbc33a9_1_822_384_652_482_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_1_822_384_652_482_0.jpg) + +Figure 3: Herdan's measure + +168 + +169 + +170 + +173 + +175 + +176 + +178 + +180 + +183 + +Let us pause for a moment and consider what 185 + +this figure illustrates. The fact that the measure de- 186 + +creases is not in itself a problem; we may be aim- 187 + +ing for a near-constant, but we should not expect 188 it to be completely perfect. The amount of varia- + +tion is also not relevant; we could change that by 190 adding or multiplying by a constant. Regardless of how large the variation is, we would also change + +the axes of the graph, so a glance at the variation of 193 a single curve in the graph does not tell us whether + +the measure is near-constant. 195 + +What actually matters is comparing the curves. If the measure is to reliably compare different texts, regardless of the (sample) size for each text, what we need is to have the lines separated inso- + +far as possible. If the lowest point of curve $A$ is 200 higher than the highest point of curve $\mathrm{B}$ , then we have successfully determined that $\mathrm{A}$ has a higher richness. We should also keep in mind that the first + +few points of the curve are not as important - we 205 are probably not very interested in measuring richness for very short texts, so although the graphs go all the way from 10 , we can mostly ignore values below 1000 or so. We would be content if the measure can separate the lines from that point on. + +As we see in Figure 3, this is not quite the case here. This measure works considerably better than TTR, but the curves are still close enough that their ranges overlap. We will compare with a few other + +measures. 215 + +216 Guiraud (in 1954, as cited by Hultman and + +217 Westman (1977)) proposed the measure $u/\sqrt{n}$ , + +218 shown in Figure 4. This does not separate the curves particularly well, and does not seem to have any advantage over the previous method. + +221 + +222 + +223 + +![01964102-9f53-7400-8878-6a8bcbbc33a9_2_163_419_652_485_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_2_163_419_652_485_0.jpg) + +Figure 4: Guiraud's measure + +227 + +229 + +230 + +231 + +232 + +233 + +234 + +237 + +239 + +240 Dugast (1979) built on Herdan by suggesting + +241 $\log u/\log \log n$ , seen in Figure 5. We find no ad- + +242 vantage with this method, and only added conceptual complexity with the double logarithm. + +244 + +![01964102-9f53-7400-8878-6a8bcbbc33a9_2_162_1211_656_520_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_2_162_1211_656_520_0.jpg) + +Figure 5: Dugast's measure + +248 + +249 + +254 + +256 + +257 + +258 + +259 + +260 + +261 + +262 + +Brunet (1978) proposed ${n}^{ \land }\left( {u}^{-a}\right)$ , where usu- + +264 ally $a = {0.172}$ . 
This is shown in Figure 6. This too is a conceptually complicated method, and it shows no sign of improving the results.

Maas (1972) took another approach, with $\left( \log n - \log u \right)/\left( \log n \right)^{2}$; see Figure 7. This seems marginally more effective at separating the curves.

![01964102-9f53-7400-8878-6a8bcbbc33a9_2_831_191_647_497_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_2_831_191_647_497_0.jpg)

Figure 6: Brunet's measure

![01964102-9f53-7400-8878-6a8bcbbc33a9_2_823_841_652_484_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_2_823_841_652_484_0.jpg)

Figure 7: Maas's measure

Hultman and Westman (1977) defined the OVIX measure as

$$
\frac{\log n}{\log \left( {2 - \frac{\log u}{\log n}}\right) }
$$

which is seen in Figure 8. This measure is commonly used in Sweden, including by Språkbanken. As we see, it also does a passable job, but there is a clear rising trend for most curves. This is confirmed by further testing on other corpora.

![01964102-9f53-7400-8878-6a8bcbbc33a9_3_165_192_655_493_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_3_165_192_655_493_0.jpg)

Figure 8: Ovix

## 4 Improving measures

By analysing the way these measures depend on $n$, we may be able to adjust and improve them. As noted, the fact that the curve of $\log u$ against $\log n$ is close to a line suggests that $\log u/\log n$ may be a constant, as per Herdan. But that assumes that the line passes through (0,0); if the line instead passes through (0, m) for some $m$, we should expect that $\left( \log u - m \right)/\log n$ is constant. We find that for a subset of the corpora, the best-fitting line gives $m = 0.4$, and we see in Figure 9 that $\left( \log u - 0.4 \right)/\log n$ does look a lot flatter. As before, we pay less attention to the values where $n < 1000$.

![01964102-9f53-7400-8878-6a8bcbbc33a9_3_163_1201_649_484_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_3_163_1201_649_484_0.jpg)

Figure 9: Herdan with constant term

On the other hand, we know that a text with one word certainly also has one unique word, so logically the curve of $\log u$ against $\log n$ must pass through (0,0). Empiricism is all good and well, but if we want results that hold up for other data, perhaps we are better off not violating basic logic. What if, instead of a line, we fit the points to a polynomial curve with zero constant term? Trying second, third and fourth order polynomials suggests that third is a good compromise. We find the best fit for six corpora, take the average of the quadratic and cubic terms, and get the adjusted measure

$$
\log u/\log n + {0.044}{\left( \log n\right) }^{2} - {0.0024}{\left( \log n\right) }^{3}
$$

You can see in Figure 10 that this separates the curves considerably better than the pure Herdan measure.
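The zero-intercept fit can be done with ordinary least squares on powers of $\log n$; a minimal sketch (the paper's exact fitting procedure is not specified):

```python
import numpy as np

def fit_zero_intercept_cubic(ns, us):
    """Least-squares fit of log u = c1*x + c2*x^2 + c3*x^3 with x = log n;
    omitting the constant term forces the curve through (0, 0)."""
    x = np.log(np.asarray(ns, dtype=float))
    X = np.column_stack([x, x**2, x**3])
    coef, *_ = np.linalg.lstsq(X, np.log(np.asarray(us, dtype=float)), rcond=None)
    return coef  # (c1, c2, c3); c2 and c3 are then averaged over corpora

# Hypothetical usage on one corpus's extracted curve {n: mean type count}:
# c1, c2, c3 = fit_zero_intercept_cubic(list(curve), list(curve.values()))
```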
From looking at the graph, this is probably the best option we have here, but we should note that the coefficients vary quite a bit between corpora (standard deviations are 0.015 and 0.0017), so this is not universal enough to adopt as some sort of standard measure.

![01964102-9f53-7400-8878-6a8bcbbc33a9_3_821_798_652_485_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_3_821_798_652_485_0.jpg)

Figure 10: Herdan with cubic fit

![01964102-9f53-7400-8878-6a8bcbbc33a9_3_819_1471_655_481_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_3_819_1471_655_481_0.jpg)

Figure 11: Adjusted Guiraud

We can also consider the Guiraud approach, and try to adjust it. We notice that while TTR (where we divide by $n$) goes steadily down, Guiraud (where we divide by ${n}^{0.5}$) goes up. Perhaps we can find a middle ground? Figure 11 shows the results for $u/{n}^{0.75}$, which looks much flatter overall and separates the curves better. This may not be a better result than the previous one, but it does have the advantage of not depending on experimentally determined coefficients.

Is there another option, using only the length and the type count? Yes, there is one which is in principle completely independent of text length: measure the type count (or equivalently TTR) at a fixed length. One option would be to measure only the first $n$ words of a text, but that could mean that a small part of the text has a large impact, so probably a better method is to cut the text into pieces of length $n$ and take the average, exactly as we have done above.

![01964102-9f53-7400-8878-6a8bcbbc33a9_4_147_919_689_877_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_4_147_919_689_877_0.jpg)

Figure 12: TTR at $n = {10000}$

Figure 12 shows the results for $n = {10000}$, on 39 corpora. We see that it separates several categories of text fairly well. The eight newspaper corpora are above all but one of the others, with the three oldest getting the highest values, followed by the two from the late 1900s, then the two from printed newspapers in 2000 and 2014, and last the web-based news texts. The social media and blog texts are a little more scattered, but all below the mean, except Twitter, which in both cases is higher. The four corpora of novels are not quite at the same level, but all higher than all of the ones in the "easy read" category. In that category, young adult literature is the highest and children's literature the lowest. Parliamentary data is all below the mean but above "easy read". Near the bottom we find, perhaps surprisingly, the Bible, along with Wikipedia, neither of which is primarily known as an easy read. Altogether, these results should tell us that this is at least a meaningful measure.

That leaves the question of choosing an $n$. Very low values might give strange effects; very high values would make it unusable for shorter texts. Other values were tested for comparison: $n = {10}$ gives little useful information, while $n = {100}$ ranks all the novels below most of social media, and beyond that we get mostly unremarkable results from just looking at the ranking. Based on these limited results, $n = {10000}$ seems like a good choice if we are working with relatively long texts, and otherwise we can settle for $n = {1000}$.
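For reference, the measures compared in this and the previous section can be collected as one-liners ($u$ = number of types, $n$ = number of tokens); this is a sketch of the formulas, not of any original implementation:

```python
import math

def ttr(u, n):      return u / n
def herdan(u, n):   return math.log(u) / math.log(n)                    # Herdan (1964)
def guiraud(u, n):  return u / math.sqrt(n)                             # Guiraud (1954)
def dugast(u, n):   return math.log(u) / math.log(math.log(n))          # Dugast (1979)
def brunet(u, n, a=0.172): return n ** (u ** -a)                        # Brunet (1978)
def maas(u, n):     return (math.log(n) - math.log(u)) / math.log(n) ** 2   # Maas (1972)
def ovix(u, n):                                                          # Hultman & Westman (1977)
    return math.log(n) / math.log(2 - math.log(u) / math.log(n))
def adjusted_guiraud(u, n): return u / n ** 0.75                        # Section 4
```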
## 5 Spectrum comparison

Instead of considering type counts for only one $n$, what if we measure for many values of $n$ and look at the whole spectrum? This is essentially what we already did in all of Section 3, and we could see that the curves for the different corpora certainly did have different shapes - some of them even crossed each other, which implies that no single number is going to tell us the whole truth.

To compare corpora instead of methods, we need to pick one method, one way to transform $u$ based on $n$. Using plain TTR as seen in Figure 1 would make it difficult to tell the difference between shapes, and picking one of the tested methods seems like too arbitrary a choice. So for the purposes of this section, we will evade the problem. We normalise the type count (or equivalently TTR) for each $n$ by subtracting the mean and dividing by the standard deviation. That is, the values on the vertical axis are in terms of standard deviations above the mean, computed for each separate value on the horizontal axis. (For the very highest values, the mean and sd change erratically because corpora drop off. We adjust the normalisation to change gradually from the actual mean and sd to extrapolated values.)

Figures 13-22 show the spectra for each category. Some curves are shorter because of limited data. Figures 13-15 show three different types of web-based texts: one set of blog texts and two different internet forums. We can see that each category is a little different, but all the curves share some characteristics - first a short rise, then a drop, then a flatter stretch, and finally a small rise. Most of them start slightly above the mean and end below the mean.

![01964102-9f53-7400-8878-6a8bcbbc33a9_5_164_640_644_484_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_5_164_640_644_484_0.jpg)

Figure 13: Spectrum for blog texts

![01964102-9f53-7400-8878-6a8bcbbc33a9_5_163_1315_657_484_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_5_163_1315_657_484_0.jpg)

Figure 14: Spectrum for the Familjeliv forum

Figure 16 shows the "easy read" category. Despite being unrelated, the curves share the same shape, which is clearly different from the web-based corpora - a drop, then a rise, peaking around 1000 without reaching the mean, then a drop.

![01964102-9f53-7400-8878-6a8bcbbc33a9_5_822_193_652_492_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_5_822_193_652_492_0.jpg)

Figure 15: Spectrum for the Flashback forum

![01964102-9f53-7400-8878-6a8bcbbc33a9_5_823_847_647_481_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_5_823_847_647_481_0.jpg)

Figure 16: Spectrum for easy-read texts
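The per-length normalisation can be sketched as follows, ignoring for simplicity the extrapolated means and standard deviations used at the highest $n$:

```python
import numpy as np

def normalise_spectra(curves):
    """Per-length z-scores: for each sample length n, subtract the mean
    type count across corpora and divide by the standard deviation.

    `curves` maps corpus name -> {n: mean type count}; corpora may stop
    at different maximum n."""
    lengths = sorted({n for c in curves.values() for n in c})
    out = {name: {} for name in curves}
    for n in lengths:
        vals = [c[n] for c in curves.values() if n in c]
        mu, sd = np.mean(vals), np.std(vals)
        for name, c in curves.items():
            if n in c:
                out[name][n] = (c[n] - mu) / sd
    return out
```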
Figures 17-18 show news texts, with Figure 17 showing three newspapers from the early 1900s, and Figure 18 showing four more recent newspapers and one web-based news corpus. As with the blog/forum collection, we see that these two related categories have clear similarities: a slow rise up to between ten and a hundred thousand, and then a sharp drop. But they are also visibly distinct, with the older newspapers having higher values and rising near the end. Aside from some more unpredictable behaviour for $n < {1000}$, the curves in each category are remarkably similar in both shape and level.

Figures 19-20 show literary texts, with Figure 19 showing regular novels and Figure 20 showing children's fiction and young adult fiction. They are all comparatively straight and drop slightly.

< g r a p h i c s >

Figure 19: Spectrum for novels

< g r a p h i c s >

Figure 17: Spectrum for old newspapers

![01964102-9f53-7400-8878-6a8bcbbc33a9_6_164_854_647_486_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_6_164_854_647_486_0.jpg)

Figure 18: Spectrum for recent newspapers

![01964102-9f53-7400-8878-6a8bcbbc33a9_6_823_850_647_483_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_6_823_850_647_483_0.jpg)

Figure 20: Spectrum for youth novels

Children's literature is generally lower than young adult literature, and both drop faster than the curves for books aimed at adults.

Figure 21 shows religious texts. We see two translations of the Bible, with very similar curves - both dropping, rising, then levelling out, but unlike the easy-read category they level out at about the same level where they started. Also included is a book of church hymns, which happens to level out at a similar level, but starts with a large rise.

Finally, in Figure 22, we see three uncategorised corpora - one from a 1700s songwriter, one from a popular science magazine, and one from Wikipedia. As expected, they show very different shapes and levels, and are clearly distinct from each other as well as from all the other curves.

![01964102-9f53-7400-8878-6a8bcbbc33a9_7_162_190_656_495_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_7_162_190_656_495_0.jpg)

Figure 21: Spectrum for religious texts

![01964102-9f53-7400-8878-6a8bcbbc33a9_7_164_858_655_485_0.jpg](images/01964102-9f53-7400-8878-6a8bcbbc33a9_7_164_858_655_485_0.jpg)

Figure 22: Spectrum for some other texts

## 6 Applicability

Is it reasonable to apply measures like these to an entire corpus instead of just separate texts? First, "separate texts" is not necessarily well defined. Is a newspaper one text, or each article? Books in a series? Multiple entries posted on the same web page? Second, for the lower values of $n$, running the entire corpus at once should not make a big difference. For example, if $n = {100}$ and the typical length of a text is 10000, only about 1% of samples would contain two texts, and the rest only one (each text contributes a hundred 100-word chunks, and only the chunk straddling a text boundary mixes two texts). For the higher values of $n$, using only separate texts would leave us with no data at all - it would be difficult to find singular coherent texts spanning hundreds of millions of words. This means that allowing corpora of multiple authors and topics is our only option if we want results for large $n$.

But we can also look at the results. Are the differences between the curves largely caused by differences in text length?
If that were the case, we would expect that when a curve reaches the "critical $n$" where we go from a single text to multiple texts, the vocabulary richness should increase rapidly. The curve we would expect to see is one that starts out mostly flat (because hardly any texts are that short), then slowly decreases (as others reach their critical $n$ and bring up the mean), then rapidly jumps up as it reaches its own critical $n$, and then slowly decreases again. This is not a pattern that we see anywhere, so we can conclude that text length is not the driving factor of the curve shapes.

## 7 Conclusion

It is clear that the task of finding a length-independent measure of vocabulary richness is difficult at best. We have seen that many traditionally used measures are not satisfactory, and we have offered some suggestions as to how they can be improved. Perhaps the most obvious approach is to use average TTR over a fixed sample length, with 10000 being a good sample length when possible.

The figures show that the curves have very different shapes, and often cross. This means that the ranking of corpora changes depending on the length of text we are looking at, so a perfect solution is not possible, or at least cannot be expressed as a single number.

Is this spectrum method useful for genre classification? It is perhaps rare that we need to analyse entire hundred-million-word corpora to see if they are made up of novels or newspapers, but we do see that there are some differences even for much shorter lengths. We have also gained insight into what makes it difficult to find a good measure of vocabulary richness. But most importantly, we have seen that there are notable and interesting differences between genres, and raised for future research the question of why.

## References

Etienne Brunet. 1978. Le vocabulaire de Jean Giraudoux: structure et évolution. Slatkine, Genève.

Daniel Dugast. 1979. Vocabulaire et stylistique, volume 8. Slatkine, Genève.

Gustav Herdan. 1964. Quantitative Linguistics. Butterworth, London.

Tor G. Hultman and Margareta Westman. 1977. Gymnasistsvenska. Liber Läromedel, Lund.

Heinz-Dieter Maas. 1972. Über den Zusammenhang zwischen Wortschatzumfang und Länge eines Textes. Zeitschrift für Literaturwissenschaft und Linguistik, 2(8):73.

Fiona J. Tweedie and R. Harald Baayen. 1998. How variable may a constant be? Measures of lexical richness in perspective. Computers and the Humanities, 32:323-352.
+ +841 + +843 + +846 + +847 + +848 + +849 + +850 + +851 + +853 + +858 + +859 + +860 + +861 + +862 + +863 \ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..4cd498ae13d063a5dc8bee219d9a3ba2905fae03 --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/_bbk5bLa9K/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,799 @@ +000 054 + +§ LENGTH DEPENDENCE OF VOCABULARY RICHNESS + +001 055 + +002 056 + +003 Anonymous Author + +004 Affiliation / Address line 1 + +005 Affiliation / Address line 2 006 Affiliation / Address line 3 + +email@domain + +Anonymouser Author + +Affiliation / Address line 1 + +Affiliation / Address line 2 + +Affiliation / Address line 3 + +email@domain + +Anonymousest Author 057 + +Affiliation / Address line 1 058 + +Affiliation / Address line 2 059 060 Affiliation / Address line 3 061 email@domain 062 + +063 + +§ ABSTRACT + +013 The relation between the length of a text and the number of unique words is investigated using several Swedish language + +016 corpora. We consider a number of existing measures of vocabulary richness, show + +018 that they are not length-independent, and try to improve on some of them based on statistical evidence. We also look at the spectrum of values over text lengths, and find that genres have characteristic shapes. + +023 + +§ 1 INTRODUCTION + +Measures of lexical richness have several uses, including author identification, other forms of text classification, and estimating how difficult a text is. One of the simplest and most obvious measures of lexical richness is to compare the size of the vocabulary (that is, how many different words) to the size of the text (how many words in total). This can be done in several ways, most + +033 straightforwardly as the type-token ratio (henceforth TTR), $u/n$ , where $u$ is the number of unique words (types) and $n$ is the total number of words (tokens). Thus, for the sentence "this example is this example", there are three types and five to- + +038 kens, so TTR is $u/n = 3/5 = {0.6}$ . + +The obvious problem with TTR is that it changes with the length of the text. As we write a text, the more words we have already written, the more likely it is that the next word will be one that has already been used, so TTR goes down as the text grows longer. Many attempts have been made to transform this measure into something independent of the length of the text, but many of those attempts were made in an age before "big data", or even before computers, and were based on a priori reasoning rather than statistical analysis (Tweedie and Baayen, 1998). + +We will start by looking at some of these mea- + +053 sures, and test them on a set of corpora from + +Spräkbanken to see how they hold up for a wide 065 range of different $n$ . After comparing some of the + +previous methods, we will briefly look into using 067 the empirical data to come up with a better suggestion. The results give rise to another question: + +What if instead of aiming for a length-independent 070 measure, we consider how the values change with + +the length? Can that actually tell us new and inter- 072 esting things? 
+ +We find that if we analyse the type count for 075 different sample lengths, we see clear and con- + +sistent differences between different types of text. 077 This may be useful for genre classification, or for a more detailed description of the complexity of + +the text. 080 + +Although these measures are usually applied to + +specific texts, we here apply them to entire cor- 082 + +pora. We will discuss the effects of this after see- 083 + +ing the results. 084 085 + +086 + +§ 2 DATA + +087 + +088 + +Spräkbanken (the Swedish Language Bank) at the 089 + +University of Gothenburg (spraakbanken.gu.se) 090 has a large collection of text corpora, mainly in + +Swedish but including several other languages. In 092 this study, we use Swedish texts, focusing on large and homogeneous corpora. + +We extract the type count $u$ for several differ- + +ent lengths $n$ . For each $n$ , we divide the corpus 097 in chunks of length $n$ , dropping any overflow at the end, and take the mean value of $u$ for each of these chunks. (In some cases we remove the last value for being an outlier; presumably this is because it is the only value where a large part of the data is dropped due to overflow.) We use a pseudo-logarithmic scale for ease of reading, extracting values for $n = {10},{20},{50},{100},{200},{500},{1000}\ldots$ up to the maximum possible for each corpus; the + +largest go up to 500 million tokens. 107 + +§ 3 TESTING EXISTING MEASURES + +109 + +First of all, we can test and verify that TTR does go down. Figure 1 shows TTR for 31 corpora. + + < g r a p h i c s > + +Figure 1: Type-token ratio + +It seems likely that, as we compare different-size corpora, effects of size changes might be best described in terms of multiplicative changes rather than additive, so we might try looking at the logarithms of $n$ and $u$ . We see in Figure 2 that the result looks fairly close to a straight line. + + < g r a p h i c s > + +Figure 2: Type count + +151 + +The first obvious method, then, is to assume that this is indeed a straight line, and use the slope of that line as our presumed length-independent measure of richness, that is, $\log u/\log n$ . This was proposed by Herdan (1964). We see in Figure 3 + +161 that the measure is decreasing quite steadily for + +all the texts. The six corpora used here are chosen 162 + +partly for being large, and partly for having large 163 + +differences in type count; many other corpora are 164 + +not nearly as well separated. 165 + +166 + +167 + + < g r a p h i c s > + +Figure 3: Herdan's measure + +168 + +169 + +170 + +173 + +175 + +176 + +178 + +180 + +183 + +Let us pause for a moment and consider what 185 + +this figure illustrates. The fact that the measure de- 186 + +creases is not in itself a problem; we may be aim- 187 + +ing for a near-constant, but we should not expect 188 it to be completely perfect. The amount of varia- + +tion is also not relevant; we could change that by 190 adding or multiplying by a constant. Regardless of how large the variation is, we would also change + +the axes of the graph, so a glance at the variation of 193 a single curve in the graph does not tell us whether + +the measure is near-constant. 195 + +What actually matters is comparing the curves. If the measure is to reliably compare different texts, regardless of the (sample) size for each text, what we need is to have the lines separated inso- + +far as possible. 
If the lowest point of curve $A$ is 200 higher than the highest point of curve $\mathrm{B}$ , then we have successfully determined that $\mathrm{A}$ has a higher richness. We should also keep in mind that the first + +few points of the curve are not as important - we 205 are probably not very interested in measuring richness for very short texts, so although the graphs go all the way from 10, we can mostly ignore values below 1000 or so. We would be content if the measure can separate the lines from that point on. + +As we see in Figure 3, this is not quite the case here. This measure works considerably better than TTR, but the curves are still close enough that their ranges overlap. We will compare with a few other + +measures. 215 + +216 Guiraud (in 1954, as cited by Hultman and + +217 Westman (1977)) proposed the measure $u/\sqrt{n}$ , + +218 shown in Figure 4. This does not separate the curves particularly well, and does not seem to have any advantage over the previous method. + +221 + +222 + +223 + + < g r a p h i c s > + +Figure 4: Guiraud's measure + +227 + +229 + +230 + +231 + +232 + +233 + +234 + +237 + +239 + +240 Dugast (1979) built on Herdan by suggesting + +241 $\log u/\log \log n$ , seen in Figure 5. We find no ad- + +242 vantage with this method, and only added conceptual complexity with the double logarithm. + +244 + + < g r a p h i c s > + +Figure 5: Dugast's measure + +248 + +249 + +254 + +256 + +257 + +258 + +259 + +260 + +261 + +262 + +Brunet (1978) proposed ${n}^{ \land }\left( {u}^{-a}\right)$ , where usu- + +264 ally $a = {0.172}$ . This is shown in Figure 6. This too is a fairly conceptually complicated method + +266 which shows no sign of improving the results. + +267 Maas (1972) found another approach, with + +268 $\left( {\log n - \log u}\right) /{\left( \log n\right) }^{2}$ , see Figure 7. This seems + +269 marginally more effective at separating the curves. + + < g r a p h i c s > + +Figure 6: Brunet's measure + +270 + +271 + +272 + +273 + +274 + +275 + +276 + +277 + +278 + +279 + +280 + +281 + +282 + +283 + +284 + +285 + +286 + +287 + + < g r a p h i c s > + +Figure 7: Maas's measure + +288 + +289 + +290 + +291 + +292 + +293 + +294 + +295 + +296 + +297 + +298 + +299 + +300 + +301 + +302 + +303 + +304 + +305 + +Hultman and Westman (1977) defined the OVIX 306 + +measure as 307 + +$$ +\frac{\log n}{\log \left( {2 - \frac{\log u}{\log n}}\right) } +$$ + +308 309 310 311 + +which is seen in Figure 8. This is a measure com- 312 + +monly used in Sweden, including by Spräkbanken. 313 As we see, this also does a passable job, but there is a clear rising trend for most curves. This is confirmed by further testing on other corpora. + +§ 4 IMPROVING MEASURES + +318 + +319 + +By analysing the way these measures depend on 320 + +$n$ , we may be able to adjust and improve them. 321 + +As noted, the fact that the curve of $\log u$ against 322 + +$\log n$ is close to a line suggests that $u/n$ may be 323 + +324 + + < g r a p h i c s > + +Figure 8: Ovix + +325 + +329 + +330 + +335 + +337 + +340 + +342 a constant, as per Herdan. But that assumes that the line passes through(0,0); if the line passes though(0, m)for some $m$ , we should expect that $\left( {u - m}\right) /n$ is constant. We find that for a subset of the corpora, the best-fitting line gives $m = {0.4}$ , and we see in Figure 9 that $\left( {u - {0.4}}\right) /n$ does look a lot flatter. As before, we pay less attention to the values where $n < {1000}$ . 
+ + < g r a p h i c s > + +Figure 9: Herdan with constant term + +362 + +365 + +366 + +367 + +368 + +On the other hand, we know that a text with one word certainly also has one unique word, so log- + +372 ically the curve of $\log u$ against $\log n$ must pass though(0,0). Empiricism is all good and well, but if we want results that hold up for other data, perhaps we are better off not violating basic logic. What if instead of a line, we fit the points to a + +377 polynomial curve with zero constant term? Trying + +second, third and fourth order polynomials sug- 378 + +gests that third is a good compromise. We find 379 + +the best fit for six corpora, take the average for 380 + +the quadratic and cubic terms, and get the adjusted 381 + +measure 382 + +383 + +$$ +\log u/\log n + {0.044}{\left( \log n\right) }^{2} - {0.0024}{\left( \log n\right) }^{3} +$$ + +384 + +385 + +You can see in Figure 10 that this separates the 386 curves considerably better than the pure Herdan + +measure. From looking at the graph, this is proba- 388 + +bly the best option we have here, but we should 389 + +note that the coefficients vary quite a bit be- 390 + +tween corpora (standard deviations are 0.015 and 391 + +0.0017), so this is not universal enough to adopt as 392 + +some sort of standard measure. 393 + +394 + + < g r a p h i c s > + +Figure 10: Herdan with cubic fit + +395 + +396 + +397 + +398 + +399 + +400 + +401 + +403 + +404 + +405 + +406 + +407 + +408 + +409 + +410 + +411 + +412 + +413 + + < g r a p h i c s > + +Figure 11: Adjusted Guiraud + +414 + +415 + +416 + +417 + +418 + +419 + +420 + +421 + +422 + +423 + +424 + +425 + +426 + +427 + +428 + +429 + +430 + +We can also consider the Guiraud approach, and 431 try to adjust it. We notice that while TTR (where we divide by $n$ ) goes steadily down, Guiraud (where we divide by ${n}^{0.5}$ ) goes up. Perhaps we can find a middle ground? Figure 11 shows the results for $u/{n}^{0.75}$ , which looks overall much flatter and better separating the curves. This may not be a better result than the previous one, but it does have the advantage of not depending on experimentally determined coefficients. + +Is there another option, using only the length and the type count? Yes, there is an option which is in principle completely independent of text length: Measure the type count (or equivalently TTR) for a fixed length. One option would be to measure only the first $n$ words of a text, but that could mean that a small part of the text has a large impact, so probably a better method is to cut the text into pieces of length $n$ and take the average, exactly as we have done above. + + < g r a p h i c s > + +Figure 12: TTR at $n = {10000}$ + +Figure 12 shows the results for $n = {10000}$ , on 39 corpora. We see that it fairly well separates several categories of text. The eight newspaper corpora are above all but one other, with the three oldest getting the highest value, followed by the two from the late 1900s, then the two from printed + +newspapers in 2000 and 2014, and last the web- 486 + +based news texts. The social media and blog texts 487 are a little more scattered, but all below the mean, except Twitter, which in both cases is higher. The four corpora of novels are not quite the same level, but all higher than all of the ones in the "easy read" + +category. In that category, young adult literature 492 is the highest and children's literature the lowest. Parliamentary data is all below the mean but above "easy read". 
Near the bottom we find, perhaps surprisingly, the Bible, along with Wikipedia, neither of which are primarily known to be easy reads. Altogether, these results should tell us that this is at least a meaningful measure. + +That leaves the question of choosing an $n$ . Very + +low values might give strange effects, very high 502 values would make it unusable for shorter texts. Other values were tested for comparison: $n = {10}$ gives little useful information, while $n = {100}$ ranks all the novels below most of social media, and beyond that we get mostly unremarkable results from just looking at the ranking. Based on these limited results, $n = {10000}$ seems like a good choice, if we are working with relatively long texts, and otherwise we can settle for $n = {1000}$ . + +§ 5 SPECTRUM COMPARISON + +Instead of considering type counts for only one $n$ , what if we measure for many values of $n$ , and look at the whole spectrum? This is essentially what we already did in all of section 3, and we could see that the curves for the different corpora certainly did have different shapes - some of them even crossed each other, which implies that any one number is not going to tell us the whole truth. + +To compare corpora instead of methods, we need to pick one method, one way to transform $u$ based on $n$ . Using plain TTR as seen in Figure 1 would make it difficult to tell the difference between shapes, and picking one of the tested methods seems like too arbitrary a choice. So for the purposes of this section, we will evade the problem. We normalise the type count (or equivalently TTR) for each $n$ by subtracting the mean and dividing by the standard deviation. That is, the values on the vertical axis are in terms of standard deviations above the mean, counted for each separate value on the horizontal axis. (For the very highest values, the mean and sd values change erratically because of corpora dropping off. We adjust + +the normalisation to gradually change from actual 539 + +540 mean and sd to extrapolated values.) + +541 Figures 13-22 show the spectra for each category. Some curves are shorter because of limited data. Figures 13-15 show three different types of web-based texts, one set of blog texts and two different internet forums. We can see that each category is a little different, but all the curves share some characteristics - first a short rise, then a drop, then flatter, and finally a small rise. Most of them start slightly above the mean, and end below the mean. + + < g r a p h i c s > + +Figure 13: Spectrum for blog texts + +566 + +568 + + < g r a p h i c s > + +Figure 14: Spectrum for the Familjeliv forum + +578 + +583 + +Figure 16 shows the "easy read" category. Despite being unrelated, the curves share the same shape, which is clearly different from the web-based corpora - a drop, then a rise, peaking around + +593 1000 without reaching the mean, then a drop. + + < g r a p h i c s > + +Figure 15: Spectrum for the Flashback forum + +594 + +595 + +596 + +597 + +598 + +599 + +600 + +602 + +604 + +605 + +607 + +608 + +609 + +610 + +612 + + < g r a p h i c s > + +Figure 16: Spectrum for easy-read texts + +613 + +614 + +615 + +616 + +617 + +618 + +619 + +620 + +622 + +623 + +624 + +625 + +626 + +627 + +628 + +629 + +Figures 17-18 show news texts, with Figure 17 630 showing three newspapers from the early 1900s, + +and Figure 18 showing four more recent newspa- 632 pers and one web-based news corpus. 
As with the blog/forum collection, we see that these two related categories have clear similarities: a slow rise + +up to between ten and a hundred thousand, and 637 then a sharp. But they are also visibly distinct, with the older newspapers having higher values and rising near the end. Aside from some more unpredictable behaviour for $n < {1000}$ , the curves in + +each category are remarkably similar in both shape 642 and level. + +Figures 19-20 show literary texts, with Figure 19 showing regular novels and Figure 20 showing + +children's fiction and young adult fiction. They are 646 + +all comparatively straight and dropping slightly. 647 + +648 702 + +649 703 + +0.6 , + +0.4 + +0.2 + +romi + +0 romg + +Sd above mean romi -0.2 -0.4 + +-0.6 + +-0.8 + +-1 + +-1.2 + +100 1000 1E4 1E5 1E6 1E7 1.68 + +Sample length + +Figure 19: Spectrum for novels + +2.5 + +lalpilen1920 + +Sd above mean 1.5 kalmar191 ostgota... + +0.5 + +-0.5 + +100 1000 1E4 1.E5 1E6 1E7 1E + +Sample length + +Figure 17: Spectrum for old newspapers + +704 + +705 + +706 + +707 + +654 708 + +709 + +659 713 + +715 + +716 + +664 718 + +666 720 + + < g r a p h i c s > + +Figure 18: Spectrum for recent newspapers + + < g r a p h i c s > + +Figure 20: Spectrum for youth novels + +723 + +725 + +726 + +728 + +730 + +733 + +681 735 Children's literature is generally lower than young + +§ 6 APPLICABILITY + +686 adult literature, and they both drop faster than the Is it reasonable to apply measures like these on an 740 687 curves for books aimed at adults. entire corpus instead of just separate texts? First, 688 Figure 21 shows religious texts. We see two "separate texts" is not necessarily well defined. Is 689 translations of the Bible, with very similar curves a newspaper one text, or each article? Books in a 690 - both dropping, rising, levelling out, but unlike series? Multiple entries posted on the same web 745 691 the easy read category they level out at about the page? Second, for the lower values of $n$ , running same level where they started. Also included is a the entire corpus at once should not make a big book of church hymns, which happens to level out difference. For example, if $n = {100}$ and the typi-at a similar level, but starts with a large rise. cal length of a text is 10000, that would mean that + +696 Finally, in Figure 22, we see three uncate- only about $1\%$ of samples contain two texts, and 750 gorised corpora - one from a 1700 s songwriter, the rest only one. For the higher values of $n$ , using one from a popular science magazine, and one only separate texts would leave us with no data at from Wikipedia. As expected, they show very dif- all - it would be difficult to find singular coherent ferent shapes and levels, and are clearly distinct texts spanning hundreds of millions of words. This + +701 from each other as well as all the other curves. means that allowing corpora of multiple authors 755 + +757 + + < g r a p h i c s > + +Figure 21: Spectrum for religious texts + +762 and topics is our only option if we want results for large $n$ . + + < g r a p h i c s > + +Figure 22: Spectrum for some other texts + +But we can also look at the results. Are the differences between the curves largely caused by differences in text length? If that was the case, we would expect that when a curve reaches the "critical $n$ " where we go from a single text to multiple texts, the vocabulary richness should increase rapidly. 
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8920ff8d6f1f7c18d712043d96246433f6905a3
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,733 @@

# Good Reads and Easy Novels: Readability and Literary Quality in a Corpus of US-published Fiction

Anonymous Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymouser Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

Anonymousest Author

Affiliation / Address line 1

Affiliation / Address line 2

Affiliation / Address line 3

email@domain

## Abstract

In this paper, we explore the extent to which readability contributes to the perception of literary quality as defined by two categories of variables: expert-based (e.g., Pulitzer Prize, National Book Award) and crowd-based (e.g., GoodReads, WorldCat). Based on a large corpus of modern and contemporary fiction in English, we examine the correlation of a text's readability with its perceived literary quality, also assessing readability measures against simpler stylometric features. Our results show that readability generally correlates with popularity as measured through open platforms such as GoodReads and WorldCat, but has an inverse relation with three prestigious literary awards.
This points to a distinction between crowd- and expert-based judgments of literary style, as well as to a distinction between fame and appreciation in the reception of a book.

## 1 Introduction and Related Works

Is it overall better for a novel to strive for an easy prose, or is there a link between difficulty and literary quality? The concept of readability has been studied for decades and is defined as the ease with which a text can be read and understood (Dale and Chall, 1949). Several works have attempted to define an easy way to compute readability in order to make, for example, didactic books more accessible, reduce technical jargon in documents produced for the general public, and adjust text selections according to the intended audience (Dubay, 2004). The result has been a series of popular and amply tested measures, each with a slight difference in their model of readability. Dale and Chall (1949), for example, referred to readability as the combination of elements in a text that impact important aspects of a reader's experience - including whether the reader can understand the text, finds it interesting, and can read with optimal speed (Dale and Chall, 1949). Despite their shortcomings (Redish, 2000), readability measures have been broadly applied to a large number of different domains. Measures of readability vary according to what aspect of a text they take into account, but they typically combine features such as sentence length, word length, and the presence of complex words. While the actual ease of a text depends on reader characteristics (background, situation, ability), it is widely accepted that simple textual features such as sentence length, syllables per word and lexical diversity impact the reading experience (Dubay, 2004).

The connection of readability to the quality of a text has often been implied when it comes to non-fiction, and early studies into readability attest to the educational and social importance of developing such measures to improve technical or expository documents (Chall, 1947), but its role in the quality of literary fiction is much more complex. An easy-to-read novel can be enjoyable to read, but may also appear poor or unoriginal. In literary studies, the idea that readability might be a precondition for literary success is debated, and literary texts have been assessed variously by readability measures and similar metrics. Sherman (1893) was one of the first scholars to propose certain values of average sentence length and reading ease as properties of "better" literary style. Readability naturally varies across genre, but it is a widespread conception for readers and publishers alike that bestsellers (as defined by top book sales) are easier to read (Martin, 1996). More recently, readability has gained traction in areas of (commercial) creative writing and publishing, especially where its measures are implemented in text-editing tools such as the Hemingway or Marlowe editors${}^{1}$.
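Most measures of this family are simple linear formulas over such surface features. As an illustration, here is a minimal sketch of the classic Flesch reading-ease score, with a naive vowel-group syllable counter (a rough approximation, not the counter used by any particular tool or by this paper):

```python
import re

def count_syllables(word):
    # Naive approximation: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words); higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * len(words) / sentences - 84.6 * syllables / len(words)

print(flesch_reading_ease("The cat sat on the mat. It was a sunny day."))
```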
| Spearman correlation | Word count | Sentence length | MSTTR-100 | Compressibility | Flesch grade | Flesch ease | SMOG | ARI | New Dale-Chall |
|---|---|---|---|---|---|---|---|---|---|
| Flesch grade | 0.0072 | 0.76 | 0.39 | -0.29 | 1 | -0.95 | 0.86 | 0.93 | 0.75 |
| Flesch ease | -0.028 | -0.65 | -0.42 | 0.34 | -0.95 | 1 | -0.89 | -0.86 | -0.72 |
| SMOG | 0.018 | 0.63 | 0.44 | -0.39 | 0.86 | -0.89 | 1 | 0.88 | 0.77 |
| ARI | 0.034 | 0.77 | 0.43 | -0.32 | 0.93 | -0.86 | 0.88 | 1 | 0.77 |
| New Dale-Chall | 0.39 | 0.55 | 0.4 | -0.5 | 0.75 | -0.72 | 0.77 | 0.77 | 1 |
Figure 1: Correlations between stylometrics and flavours of readability (Spearman). All correlations between 0.09 and 0.99 are statistically significant.

These applications tend to favour lower readability scores - that is, texts easier to read. Yet, on the large scale, few studies have included readability as a measure that could help predict literary quality. Studying a small corpus of bestsellers and more literary, canonical works, Martin (1996) found no significant difference in readability, using a modified Flesch reading score, while Garthwaite (2014) found differences in readability between bestsellers and commercially endorsed book-list titles. Relying on multiple measures of readability and one measure of literary quality (i.e., GoodReads' average ratings), Maharjan et al. (2017) found that readability was actually a weak measure for estimating popularity in comparison to, for example, character $n$-grams. Still, many studies of literary success, popularity, or perceived literary quality have sought to approximate text complexity and have studied textual properties upon which formulae of readability are directly or indirectly based, such as sentence length, vocabulary richness, or text compressibility (Brottrager et al., 2022; van Cranenburgh and Bod, 2017; Crosbie et al., 2013).

The question of the role of readability in literary quality is complicated by the practical and conceptual problem of defining literary quality itself, and consequently of quantifying it for large-scale studies. Studies that seek to predict perceived literary quality from textual features often rely on the provisional proxy of one single gold standard, such as book ratings from large user platforms like GoodReads (Maharjan et al., 2018), personally or institutionally compiled canons (Mohseni et al., 2022) or sales numbers (Wang et al., 2019). However, it has been shown that readers may have different, distinct perceptions of quality that are not necessarily based on the same criteria or prompted by the same textual features (Koolen et al., 2020).

In this paper, we explore to what extent readability might contribute to the perception of literary quality - defined through several alternative measures - in a large fiction corpus of modern and contemporary novels in English, taking into account, instead of one single gold standard, different contextual perspectives on literary quality, so as to cover both crowd-based and "expert"-based standards of judgment.

## 2 Data and Methods

The essence of our approach consists in examining whether readability, as measured through five different algorithms, and literary quality, as approximated through six different resources, show any correlation on a large corpus of English-language fiction. We use standard correlation measures (Pearson and Spearman product-moment correlation coefficients, ${r}_{p}$ and ${r}_{s}$ respectively). For inference on the correlation measures, simple Student's t-tests are used. For robustness checks, correlation coefficients were also modelled using a Bayesian ridge model of the standardized variables, although these results are not reported here due to limited space.${}^{2}$
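For concreteness, the core of this setup can be sketched in a few lines of Python. This is a minimal illustration, not the actual analysis code referenced in footnote 2; the file and column names are hypothetical placeholders.

```python
# Minimal sketch of the correlation setup described above; the CSV file
# and column names are hypothetical placeholders, not the actual data.
import pandas as pd
from scipy import stats

df = pd.read_csv("chicago_corpus_features.csv")  # hypothetical feature table

x, y = df["flesch_grade"], df["library_holdings"]
r_p, p_p = stats.pearsonr(x, y)   # parametric: assumes normality/linearity
r_s, p_s = stats.spearmanr(x, y)  # rank-based: assumes monotonicity only
print(f"Pearson  r = {r_p:.2f} (p = {p_p:.3g})")
print(f"Spearman r = {r_s:.2f} (p = {p_s:.3g})")
```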
### 2.1 Corpus

We use a corpus of modern and contemporary fiction in English, the so-called Chicago Corpus.${}^{3}$ The Chicago Corpus is a collection of over 9000 novels from 1880 to 2000, representing works of fiction that are widespread in libraries, that is, the works of fiction that have a large number of library holdings as listed on WorldCat, a large-scale, international online library catalogue.${}^{4}$ The number of holdings was used as a first filtering measure to include or exclude works in the dataset, yet there are still large differences in how many libraries hold each title, so we can use it as a metric to score different titles within the dataset as well. The corpus is unique, to our knowledge, for its diversity and extraordinary representation of famous popular and genre fiction, as well as seminal works from the whole period: key works of modernism and postmodernism as well as Nobel laureates and winners of major literary awards. Still, it should be noted that the Chicago corpus reflects a clear cultural and geographical tilt, with a strong over-representation of Anglophone authors, and features only works either written in or translated into English. This tilt should be taken into account especially since we correlate textual features in the corpus to readability measures that were developed - and are particularly successful - in the English-language context (Antunes and Lopes, 2019).

![01964131-ae34-7e34-8d5f-80e5e6e06a28_2_188_167_1282_564_0.jpg](images/01964131-ae34-7e34-8d5f-80e5e6e06a28_2_188_167_1282_564_0.jpg)

(b) Distributions of quality measures. Rating count is visualised with a cutoff at 5000 for legibility.

Figure 2: Distributions of measures.

---

${}^{1}$ https://hemingwayapp.com/help.html and https://authors.ai/marlowe/

${}^{2}$ The code will be publicly available upon acceptance.

${}^{3}$ While we cannot directly provide access to the corpus, it is possible to contact the authors for requests.

${}^{4}$ https://www.worldcat.org/about

---
| | N. Titles | N. Authors |
|---|---|---|
| Whole corpus | 9089 | 7000 |
| Pulitzer | 53 | 46 |
| NBA | 104 | 79 |
| Hugo | 96 | 47 |
Table 1: Overall titles and authors in the corpus and number of long-listed titles for each award.

### 2.2 Measures of quality

We use six different measures of literary quality of two main types, heuristically setting up a qualitative distinction between more crowd-based and more expert-based measures. Expert-based measures may be supposed more institutionally prescribed, where titles are distinguished by appointing committees (as with literary prizes). Here, we chose to look at three prominent literary prizes in Anglophone literary culture: the Pulitzer Prize, the National Book Award, and the Hugo Awards, considering titles that were both long- and short-listed for these prizes. The selection of awards allows us to consider a mainstream vs. genre-literature divide in our expert measures, since the first two prizes are assigned mainly to works of literary fiction, while the latter is an award given to works of genre fiction (science fiction and fantasy).

Crowd-based measures may be considered more democratic in the sense of being user-created, for example by users' ratings on large-scale reading community sites such as GoodReads, or by the effect of popular demand on library acquisitions. We use three standards here: the average ratings of titles on GoodReads (from 0 to 5 stars), the rating count of titles on GoodReads (the number of ratings given to a given title), and the number of libraries that hold a title according to WorldCat. Goodreads ratings and/or rating counts are often favoured in studies of literary quality and reception, because they seem to proffer more democratic literary evaluations "in the wild", considering the large diversity and geographical spread of its nearly 90 million users (Nakamura, 2013). In slight contrast to Goodreads' ratings, we consider library holdings a conceptually hybrid measure, standing between completely free reader-based votes and expert-driven choices, as libraries respond to user demand from within an institutional structure.

![01964131-ae34-7e34-8d5f-80e5e6e06a28_3_186_167_1282_885_0.jpg](images/01964131-ae34-7e34-8d5f-80e5e6e06a28_3_186_167_1282_885_0.jpg)

Figure 3: Quality standards and flavours of readability.

### 2.3 Measures of readability

For assessing the complexity and/or difficulty of literary texts, we apply various measures of readability. Since the 1920s, and especially with the success of the Flesch and Dale-Chall formulas in the 1950s, combinations of sentence length and words and/or syllables have been used to assess the difficulty of a text as proxies of word and sentence complexity (Dale and Chall, 1948). According to Dubay (2004), there were more than 200 different versions of readability formulas in 1980, while new ones are still introduced and old ones revised.
Still, measures from what Dubay calls the "classic" readability studies continue to be the most widely used measures and to prove themselves effective in assessing text difficulty (Dubay, 2004; Stajner et al., 2012) - despite their relative simplicity (being counts of two or three aspects of texts).

These measures have been applied to a wide range of written productions, from technical and journalistic texts to fiction. Flesch, for example, found that fiction tends to score a Flesch Reading Ease score in the range $70 < \text{Score} < 90$, in contrast to scientific texts, which often score below 30 (Flesch, 1948). In the present study we used five different "classic" readability algorithms to measure the prose of each book, chosen for their popularity and interpretability.${}^{5}$

- The Flesch Reading Ease is a measure of readability based on the average sentence length (ASL) and the average number of syllables per word (ASW). It is calculated as follows:

$$
\text{Score} = 206.835 - (1.015 \times \mathrm{ASL}) - (84.6 \times \mathrm{ASW})
$$

- The Flesch-Kincaid Grade Level is a revised version of the Flesch Reading Ease score. Like the former, it is based on the average sentence length (ASL) and the number of syllables per word (ASW). It is calculated as follows:

$$
\mathrm{GL} = (0.4 \times \mathrm{ASL}) + (12 \times \mathrm{ASW}) - 15
$$

- The SMOG Readability Formula is a readability score introduced by McLaughlin (1969). It measures readability based on the average sentence length and the number of words with more than 3 syllables (the polysyllable count), applying the formula:

$$
\text{SMOG grading} = 3 + \sqrt{\text{polysyllable count}}
$$

- The Automated Readability Index is a readability score based on the average sentence length and the number of characters per word (word length). It is calculated as follows:

$$
\mathrm{ARI} = 4.71\,\frac{\text{characters}}{\text{words}} + 0.5\,\frac{\text{words}}{\text{sentences}} - 21.43
$$

- The New Dale-Chall Readability Formula is a 1995 revision of the Dale-Chall readability score (Chall and Dale, 1995). It is based on the average sentence length (ASL) and the percentage of "difficult words" (PDW), defined as words that do not appear on a list of words which 80 percent of fourth-graders would know (Dale and Chall, 1948), contained in the Dale-Chall word list.${}^{6}$ It is calculated as follows:

$$
\text{Raw Score} = 0.1579 \times \mathrm{PDW} + 0.0496 \times \mathrm{ASL}
$$

$$
\text{If } \mathrm{PDW} > 5\%\text{: Adjusted Score} = \text{Raw Score} + 3.6365
$$

---

${}^{5}$ All readability scores were extracted using the textstat package: https://pypi.org/project/textstat/

---

All readability scores are represented as a US grade level, where a higher grade means a more difficult text, except for the Flesch Reading Ease. The Flesch Reading Ease indicates a score between 0 (low readability) and 100 (high readability): a higher number means a more readable text. For this reason, in most of our experiments the Flesch Reading Ease looks reversed with respect to the other measures (and is negatively correlated with them).
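For concreteness, the sketch below computes all five scores with the textstat package mentioned in footnote 5, alongside a direct transcription of the Flesch Reading Ease formula; the input path is a hypothetical placeholder.

```python
# Sketch: the five "classic" readability scores via the textstat package
# (footnote 5), plus a direct transcription of the Flesch Reading Ease
# formula given above. The input path is a hypothetical placeholder.
import textstat

def flesch_reading_ease(asl: float, asw: float) -> float:
    # Score = 206.835 - (1.015 x ASL) - (84.6 x ASW)
    return 206.835 - (1.015 * asl) - (84.6 * asw)

with open("novel.txt", encoding="utf-8") as f:  # hypothetical input file
    text = f.read()

scores = {
    "flesch_ease": textstat.flesch_reading_ease(text),
    "flesch_grade": textstat.flesch_kincaid_grade(text),
    "smog": textstat.smog_index(text),
    "ari": textstat.automated_readability_index(text),
    "new_dale_chall": textstat.dale_chall_readability_score(text),
}
print(scores)
print(flesch_reading_ease(asl=15.0, asw=1.4))  # ~73.2, i.e. "fairly easy"
```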
## 3 Results

Pearson's and Spearman's correlations between these five readability metrics and commonly used stylometric features show - as a sanity check - that readability measures capture aspects of novels' overall style. All measures are similarly correlated to sentence length (naturally, being a base for all measures) but also to lexical diversity and compressibility, which measure, respectively, complexity at the word and sequence level. Moreover, the correlations with our "quality scores" show that readability is linked with the ones closer to popularity than to appreciation.
[Heatmap of Spearman correlations between the five readability flavours and the quality standards (library holdings, average GoodReads rating, rating count); the underlying values are reported in Table 2.]
Figure 4: Correlations between quality standards and flavours of readability. All correlations are statistically significant.

Pearson's r, specifically in its significance testing, relies on the assumption of normally distributed data and it assumes that the two variables have a linear relationship, while Spearman's r correlation coefficient is non-parametric, meaning that, while it still assumes a monotonic relation between the two variables, it does not make strong assumptions about the shape of the data. For this reason, Spearman is probably the best overall measure for this study, as we have no reason to assume that all our measures are normally distributed (and some are evidently not, as can be seen in Figure 2). For these reasons, we will mainly credit the correlations observed through Spearman's r, although we report both in Table 2.

### 3.1 Readability and stylometrics

As readability measures are supposed to be measures of style, we compute their correlation with three core stylistic features - sentence length, lexical diversity${}^{7}$ and textual compressibility${}^{8}$ - that have been found linked to perceived literary quality in previous studies (van Cranenburgh and Bod, 2017; Crosbie et al., 2013; Maharjan et al., 2017; Wang et al., 2019). As can be seen in Figure 1, all readability measures have evident correlations with these three metrics, even though they don't necessarily compute them directly - for example, no readability measure computes text compressibility. However, while compressibility is not obviously related to readability, compressibility is a measure of redundancy or formulaicity: it appears that easier texts also have a tendency to be more sequentially repetitive. One readability measure, the new Dale-Chall, correlates with the simple length (word count) of the novels. This is a surprising effect, since, like the other measures, the new Dale-Chall is not length-dependent. As it is the only measure looking at the texts' lexicon through an index of difficult words, it seems to be picking up on a tendency for longer books to have a slightly more complex vocabulary.

---

${}^{6}$ See: https://countwordsworth.com/download/DaleChallEasyWordList.txt

${}^{7}$ We operationalized lexical diversity as the type-token ratio (TTR) of a text, using a common method insensitive to text length: the Mean Segmental Type-Token Ratio (MSTTR). MSTTR-100 represents the average TTR of local averages in 100-word segments of each text.

${}^{8}$ Following van Cranenburgh and Bod (2017), for text compressibility we calculated the compression ratio (original bit-size/compressed bit-size) using bzip2, a standard file-compressor.

---
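For reference, the two features defined in footnotes 7 and 8 can be sketched as follows; this is a minimal illustration, and the whitespace tokenization is a simplification of ours.

```python
# Sketches of MSTTR-100 (footnote 7) and the bzip2 compression ratio
# (footnote 8); whitespace tokenization is a simplification of ours.
import bz2

def msttr(text: str, window: int = 100) -> float:
    # Mean Segmental TTR: average type-token ratio over consecutive,
    # non-overlapping 100-word segments (trailing remainder dropped;
    # assumes the text has at least 100 words).
    words = text.lower().split()
    segments = [words[i:i + window]
                for i in range(0, len(words) - window + 1, window)]
    return sum(len(set(seg)) / window for seg in segments) / len(segments)

def compression_ratio(text: str) -> float:
    # Original size / compressed size: higher values indicate more
    # redundant, sequentially repetitive prose.
    raw = text.encode("utf-8")
    return len(raw) / len(bz2.compress(raw))
```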
### 3.2 Relation with quality - GoodReads and libraries

As discussed before, we correlate readability with three possible proxies of the perceived quality of novels: GoodReads' average ratings, GoodReads' rating count, and the number of libraries holding a given title according to WorldCat.${}^{9}$ We could consider GoodReads' rating count to be a measure closer to the concept of popularity or fame, while GoodReads' average rating tells us about the appreciation of the title independently from how many readers it had. As can be seen in Figure 4, all of our readability measures show a degree of correlation with the number of library holdings and the GoodReads rating count: more readable books tend to have more ratings and tend to be held by more libraries.

The average rating of titles on GoodReads, on the other hand, shows a significant correlation with only one of the measures, the Dale-Chall readability score, while it appears to have no link with the other four. Interestingly, the Dale-Chall score is the only measure that uses a precompiled list of words to estimate the number of difficult words in a text, instead of relying entirely on the features of the text at hand. While this could make it a more fragile measure (due to linguistic change and differences between genres), it appears to actually give it an increased modelling power for the tastes of GoodReads' average readers. It is worth mentioning that GoodReads' average ratings do not correlate, in our corpus, with the books' publication date - so a direct effect of language evolution on the measure's index can be excluded. Simplifying a bit, this points to the idea that the ease of vocabulary might relate to the average appreciation of a book as well as its fame, so that texts with a simpler lexicon, together with shorter sentences or words, are both more read and better liked.

---

${}^{9}$ Naturally this selection remains arbitrary. Expanding to other measures of perceived quality is an ongoing process.

---

![01964131-ae34-7e34-8d5f-80e5e6e06a28_5_834_213_647_436_0.jpg](images/01964131-ae34-7e34-8d5f-80e5e6e06a28_5_834_213_647_436_0.jpg)

Figure 5: The likelihood of being acquired by less than 100 libraries increases quite steadily with the difficulty of reading (Spearman's rho 0.84), as the probability of appearing in more than 500 declines. Readability is here measured as the Flesch-Kincaid Grade Level.

![01964131-ae34-7e34-8d5f-80e5e6e06a28_5_834_990_630_435_0.jpg](images/01964131-ae34-7e34-8d5f-80e5e6e06a28_5_834_990_630_435_0.jpg)

Figure 6: The probability of being rated by less than 100 users on Goodreads strongly correlates with the difficulty of the texts as measured, in this case, by the Flesch-Kincaid Grade Level.

![01964131-ae34-7e34-8d5f-80e5e6e06a28_6_196_170_1271_247_0.jpg](images/01964131-ae34-7e34-8d5f-80e5e6e06a28_6_196_170_1271_247_0.jpg)

Figure 7: Flavours of readability and awards: overall distributions.

![01964131-ae34-7e34-8d5f-80e5e6e06a28_6_189_503_1278_246_0.jpg](images/01964131-ae34-7e34-8d5f-80e5e6e06a28_6_189_503_1278_246_0.jpg)

Figure 8: Flavours of readability and awards: mean value and standard error.

In Figure 3 we show the relation of each readability measure with library holdings, average Goodreads ratings and the number of Goodreads ratings. As can be seen, we should interpret the results with some caution, as the relation might not be linear: it could be that the best interpretation of the relation between, for example, readability and library holdings is modelled with a curve rather than a straight line. Yet, it appears quite evident at a glance that the probability of being held by a large number of libraries, and of being rated by a large number of Goodreads users, decreases dramatically when the difficulty of the text increases beyond a certain level.
As we show in Figure 5, the probability of being acquired by less than 100 libraries grows quite clearly with the text's difficulty, and the probability of being acquired by more than 500 decreases accordingly, with an interesting peak at a medium-low point of difficulty. The effect is even more evident when considering the probability of having less than 100 ratings on GoodReads, as appears in Figure 6. Appearing in 90 libraries is still quite an impressive measure of success, but the majority of the titles in the Chicago corpus go beyond that threshold, as well as beyond the threshold of 100 user ratings on GoodReads, so the difference in probabilities seems to point to a relative decline in popularity or fame as the surface complexity of the texts increases.
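The threshold analysis behind Figures 5 and 6 can be sketched as follows; this is a minimal reconstruction with hypothetical file and column names, not the exact plotting code.

```python
# Sketch of the threshold analysis behind Figures 5 and 6: bin novels by
# Flesch-Kincaid grade and estimate, per bin, the probability of having
# fewer than 100 library holdings. All names are hypothetical.
import pandas as pd

df = pd.read_csv("chicago_corpus_features.csv")  # hypothetical feature table

df["grade_bin"] = df["flesch_grade"].round()
p_under_100 = (df["library_holdings"] < 100).groupby(df["grade_bin"]).mean()
print(p_under_100)  # P(<100 holdings) at each grade level
```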
| | Libs. | Rat. n. |
|---|---|---|
| Flesch grade | -0.16 (-0.1) | -0.06 (-0.06) |
| Flesch ease | 0.13 (0.07) | 0.08 (0.09) |
| SMOG | -0.15 (-0.1) | -0.11 (-0.11) |
| ARI | -0.15 (-0.01) | -0.06 (-0.06) |
| New Dale-Chall | -0.25 (-0.2) | -0.22 (-0.2) |
| Flesch grade | 0.84 | 0.83 |
| Flesch ease | -0.4 | -0.48 |
| SMOG | 0.76 | 0.81 |
| ARI | 0.73 | 0.71 |
| New Dale-Chall | 0.78 | 0.82 |
Table 2: On the upper part of the table, Spearman's $r$ (Pearson's in parentheses) for each readability flavour and quality measure. On the lower, Spearman's $r$ with the probability of being in less than 100 libraries or having less than 100 ratings.

### 3.3 Relation with quality - literary awards

The second type of quality check we selected is a categorical one: whether or not a title was long-listed for one of three prestigious awards - the Pulitzer Prize, the National Book Award and the Hugo Award.

As we show in Figures 7 and 8, as well as in Table 3, the difference between long-listed and non-long-listed books in terms of readability is small but significant for almost all measures, with long-listed books being systematically harder to read than their non-listed counterparts - again with the exception of the new Dale-Chall measure. Using this kind of quality proxy, we do not observe a value of reading ease but possibly its "dark side", such as perceived simplification or a reduced expressive power of novels.

It may not be surprising that these different standards exhibit different preferences and perspectives on quality. Literary awards are notoriously elitist, even, perhaps, in a way that is wanted by their readership: the committee of the Booker Prize was accused of populism in 2011 when announcing "readability" as a new criterion for the award (Clark, 2011).

| | T-test | p-value |
|---|---|---|
| Flesch grade | 3.78 | 0.0001 |
| Flesch ease | -4.66 | 0.000005 |
| SMOG | 3.69 | 0.0002 |
| ARI | 3.6 | 0.0003 |
| New Dale-Chall | 1.8 | 0.07 |

Table 3: T-test and p-value for the difference between long-listed and non-listed titles for each readability measure. The only measure that does not fall under the formal threshold of statistical significance is the new Dale-Chall.
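The comparison in Table 3 amounts to a two-sample Student's t-test per readability measure; a minimal sketch, again with hypothetical column names (the `longlisted` flag is assumed to be boolean):

```python
# Sketch of the Table 3 comparison: Student's t-test between long-listed
# and non-listed titles for each readability measure. The "longlisted"
# column is assumed to be boolean; all names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("chicago_corpus_features.csv")  # hypothetical feature table

for measure in ["flesch_grade", "flesch_ease", "smog", "ari", "new_dale_chall"]:
    listed = df.loc[df["longlisted"], measure]
    rest = df.loc[~df["longlisted"], measure]
    t, p = stats.ttest_ind(listed, rest)  # Student's t-test, equal variances
    print(f"{measure}: t = {t:.2f}, p = {p:.2g}")
```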
## 4 Conclusions and Future Works

Readability measures proved significantly consistent, both with each other and with other relevant stylometric features, when applied to modern and contemporary fiction. Their relation with different proxies of literary quality is intriguing: more popular works, in terms of number of ratings on GoodReads and in terms of libraries willing to hold a copy of the book, appear to have a correlation with readability, while the appreciation of readers alone (independently from their number) seems to hold almost no link with it, and long-listed titles have an inverse relation with readability, tending to prefer slightly more difficult prose on the readability metrics' scale.
+ +Further research points towards extending the set of correlations to more proxies of quality as well as more sophisticated stylometric measures to see whether interactions can provide a clearer picture of what we perceive as literary quality. Other further work could be to check the correlations of our measures with publication date: readability + +might depend on time, either in the sense of the 858 evolution of the average novelistic style, overall language change, or even cultural selection, which would make the passage of time a particular form of "quality test" of its own accord. + +863 + +## References + +865 + +866 Hélder Antunes and Carla Teixeira Lopes. 2019. An- alyzing the Adequacy of Readability Indicators to a 867 Non-English Language. In Fabio Crestani, Martin Braschler, Jacques Savoy, Andreas Rauber, Henning Müller, David E. Losada, Gundula Heinatz Bürki, 870 Linda Cappellato, and Nicola Ferro, editors, ${Ex}$ - perimental IR Meets Multilinguality, Multimodality, and Interaction, volume 11696, pages 149-155. 872 Springer International Publishing, Cham. + +Judith Brottrager, Annina Stahl, Arda Arslan, Ulrik 875 Brandes, and Thomas Weitin. 2022. Modeling and predicting literary reception. Journal of Computa- 877 tional Literary Studies, 1(1):1-27. + +Jeanne S. Chall. 1947. This business of readability. Educational Research Bulletin, 26(1):1-13. + +880 Jeanne S. Chall and Edgar Dale. 1995. Readability Revisited: The New Dale-Chall Readability Formula. + +882 Brookline Books. + +Alex Clark. 2011. Man Booker prize: This year's judges are betraying authors and their readers. The Observer. + +887 Andreas van Cranenburgh and Rens Bod. 2017. A data-oriented model of literary language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1228-1238, Valencia, Spain. Association for Computational Linguistics. + +Tess Crosbie, Tim French, and Marc Conrad. 2013. Towards a model for replicating aesthetic literary appreciation. In Proceedings of the Fifth Workshop on Semantic Web Information Management, SWIM + +897 '13, New York, NY, USA. Association for Computing Machinery. + +Edgar Dale and Jeanne S. Chall. 1948. A formula for predicting readability. Educational Research Bulletin, 27(1):11-28. + +902 + +Edgar Dale and Jeanne S. Chall. 1949. The concept of readability. Elementary English, 26(1):19-26. + +William Dubay. 2004. The Principles of Readability. Impact Information. + +907 + +Rudolph Flesch. 1948. A new readability yardstick. Journal of Applied Psychology, 32:221-233. + +Craig L. Garthwaite. 2014. Demand spillovers, combative advertising, and celebrity endorsements. American Economic Journal: Applied Economics, 6(2):76-104. + +Corina Koolen, Karina van Dalen-Oskam, Andreas van Cranenburgh, and Erica Nagelhout. 2020. Literary quality in the eye of the Dutch reader: The national + +917 reader survey. Poetics, 79:1-13. + +Suraj Maharjan, John Arevalo, Manuel Montes, 918 + +Fabio A. González, and Thamar Solorio. 2017. A 919 + +multi-task approach to predict likability of books. In 920 + +Proceedings of the 15th Conference of the European 921 + +Chapter of the Association for Computational Lin- 922 guistics: Volume 1, Long Papers, pages 1217-1227, + +Valencia, Spain. Association for Computational Lin- 923 + +guistics. 924 + +Suraj Maharjan, Sudipta Kar, Manuel Montes, Fabio A. González, and Thamar Solorio. 2018. Letting emotions flow: Success prediction by modeling the flow of emotions in books. 
In Proceedings of the 2018 + +Conference of the North American Chapter of the 929 Association for Computational Linguistics: Human + +Language Technologies: Volume 2, Short Papers, 931 pages 259-265, New Orleans, Louisiana. Association for Computational Linguistics. + +Claude Martin. 1996. Production, content, and uses of 934 bestselling books in quebec. Canadian Journal of + +Communication, 21(4). 936 + +Harry G. McLaughlin. 1969. Smog grading: A new 937 + +readability formula. Journal of Reading, 12(1):639- 938 + +646. 939 + +Mahdi Mohseni, Christoph Redies, and Volker Gast. + +2022. Approximate entropy in canonical and non- 941 canonical fiction. Entropy, 24(2):278. + +Lisa Nakamura. 2013. "Words with friends": So- + +cially networked reading on Goodreads. PMLA, 944 128(1):238-243. + +946 + +Janice Redish. 2000. Readability formulas have even more limitations than Klare discusses. ACM J. Com-put. Doc., 24(3):132-137. + +949 + +Lucius A. Sherman. 1893. Analytics of Literature: $A$ + +Manual for the Objective Study of English Prose and 951 Poetry. Athenaeum Press. Ginn. + +Sanja Stajner, Richard Evans, Constantin Orasan, and Ruslan Mitkov. 2012. What can readability measures really tell us about text complexity? In Pro- + +ceedings of Workshop on natural language process- 956 ing for improving textual accessibility, pages 14- 22, Istanbul, Turkey. Association for Computational Linguistics. + +959 + +Xindi Wang, Burcu Yucesoy, Onur Varol, Tina Eliassi-Rad, and Albert-László Barabási. 2019. Success + +in books: Predicting book sales before publication. 961 EPJ Data Science, 8(1):31. + +963 + +964 + +965 966 + +967 968 969 970 971 \ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..69d9fed809336cb2128b4996e105b9553c67e6cb --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/rrsAzPAGhs/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,724 @@ +000 054 + +§ GOOD READS AND EASY NOVELS READABILITY AND LITERARY QUALITY IN A CORPUS OF US-PUBLISHED FICTION + +001 055 + +002 056 + +003 057 + +Anonymous Author + +Affiliation / Address line 1 + +006 Affiliation / Address line 2 + +Affiliation / Address line 3 + +email@domain + +Anonymouser Author + +Affiliation / Address line 1 + +Affiliation / Address line 2 + +Affiliation / Address line 3 + +email@domain + +Anonymousest Author 058 + +Affiliation / Address line 1 059 + +Affiliation / Address line 2 060 + +Affiliation / Address line 3 + +email@domain 062 + +§ ABSTRACT + +013 In this paper, we explore the extent to which readability contributes to the perception of literary quality as de- + +016 fined by two categories of variables: expert-based (e.g., Pulitzer Prize, Na- + +018 tional Book Award) and crowd-based (e.g., GoodReads, WorldCat). Based on a large corpus of modern and contemporary + +021 fiction in English, we examine the correlation of a text's readability with its per- + +023 ceived literary quality, also assessing readability measures against simpler stylomet- + +026 ric features. Our results show that read- ability generally correlates with popularity + +028 as measured through open platforms such as GoodReads and WorldCat but has an inverse relation with three prestigious liter- + +031 ary awards. 
This points to a distinction between crowd- and expert-based judgments + +033 of literary style, as well as to a discrimination between fame and appreciation in the reception of a book. + +036 + +§ 1 INTRODUCTION AND RELATED WORKS + +038 Is it overall better for a novel to strive for an easy prose, or is there a link between difficulty and literary quality? The concept of readability has been studied for decades and is defined as the ease with which a text can be read and understood (Dale and Chall, 1949). Several works have attempted to define an easy way to compute readability in order to make, for example, didactic books more accessible, reduce technical jargon in documents produced for the general public, and adjust text selections according to the intended audience (Dubay, 2004). The result has been a series of popular and amply tested measures, each with a slight difference in their model of readability. Dale and Chall + +053 (1949), for example, referred to readability as the + +combination of elements in a text that impact im- 065 portant aspects of a reader's experience - including + +whether the reader can understand the text, finds 067 it interesting, and can read with optimal speed (Dale and Chall, 1949). Despite their shortcom- + +ings (Redish, 2000), readability measures have 070 been broadly applied to a large number of different + +domains. Measures of readability vary according 072 to what aspect of a text they take into account, but + +they typically combine features such as sentence 075 length, word length, and the presence of complex + +words. While the actual ease of a text depends on 077 reader characteristics (background, situation, ability) it is widely accepted that simple textual fea- + +tures such as sentence length, syllables per word 080 and lexical diversity impact the reading experience + +(Dubay, 2004). 082 + +The connection of readability to the quality of a text has often been often implied when it comes to + +non-fiction, and early studies into readability attest 085 to the educational and social importance of devel- + +oping such measures to improve technical or ex- 087 pository documents (Chall, 1947), but its role in the quality of literary fiction is much more com- + +plex. An easy-to-read novel can be enjoyable 090 to read, but may also apppear poor or unorigi- + +nal. In literary studies, the idea that readability 092 might be a precondition for literary success is debated, and literary texts have been assessed variously by readability measures and similar met- + +rics. Sherman (1893) was one of the first schol- 097 ars to propose certain values of average sentence-length and reading ease as properties of "better" literary style. Readability naturally varies across genre, but it is a widespread conception for readers and publishers alike that bestsellers (as defined by top book-sales) are easier to read (Martin, 1996). 
More recently, readability has gained traction in areas of (commercial) creative writing and publishing, especially where its measures are imple- + +mented in text-editing tools such as the Heming- 107 + +108 162 + +max width= + +10|c|Spearman Correlation Scores + +1-10 +READABILITY_FLESCH_GRADE - 0.0072 0.76 0.39 -0.29 1 -0.95 0.86 0.93 1.00 0.75 0.75 + +1-10 +READABILITY_FLESCH_EASE - -0.028 -0.65 -0.42 0.34 -0.95 1 -0.89 -0.86 0.50 -0.72 -0.25 + +1-10 +PERDABRITY_SMOO - 0.018 0.63 0.44 -0.39 0.86 -0.89 1 0.88 0.77-0.00 + +1-10 +READABILITY_ANI - 0.034 0.77 0.43 -0.32 0.93 -0.86 0.88 1 -0.25 0.77 -0.50 + +1-10 +READABILITY_DALE_CHALL_NEW -0.39 0.55 0.4 -0.5 0.75 -0.72 0.77 0.77 1-0.75 + +1-10 +X wompcount SENTENCE_LENOTHE MSTTR-100 extel-ter READABILITY FLESCH GITADE READABILITY FLESCH EASE X PEADABILITY SMOG READABILITY API READABILITY DALE_CHALL_NEW + +1-10 + +Figure 1: Correlations between stylometrics and flavours of readability (Spearman). All correlations between 0.09 and 0.99 are statistically significant. + +164 + +165 + +166 + +167 + +168 + +109 163 way or Marlowe editors ${}^{T}$ . These applications tend to favour lower readability scores - which is, texts easier to read. Yet, on the large scale, few studies have included readability as a measure that could help predicting literary quality. Studying a small corpus of bestsellers and more literary, canonical works, Martin (1996) found no significant difference in readability, using a modified Flesch reading score, while Garthwaite (2014) found differences in readability between bestsellers and commercially endorsed book-list titles. Relying on multiple measures of readability and one measure of literary quality (i.e., GoodReads' average ratings), Maharjan et al. (2017) found that readability was actually a weak measure for estimating popularity in comparison to, for example, character $\mathrm{n}$ - grams. Still, many studies of literary success, popularity, or perceived literary quality have sought to approximate text complexity and have studied textual properties upon which formulae of readability are directly or indirectly based, such as sentence-length, vocabulary richness, or text compressibility (Brottrager et al., 2022; van Cranenburgh and Bod, 2017; Crosbie et al., 2013). + +The question of the role of readability in literary quality is complicated by the practical and conceptual problem of defining literary quality itself, and consequently of quantifying it for large scale studies. Studies that seek to predict perceived literary quality from textual features often rely on the provisional proxy of one single gold standard, such as book-ratings from large user-platforms like GoodReads (Maharjan et al., 2018), personally or institutionally compiled canons (Mohseni et al., 2022) or sales-numbers (Wang et al., 2019). However, it has been shown that readers may have different, distinct perceptions of quality that are not necessarily based on the same criteria or prompted by the same textual features (Koolen et al., 2020). + +In this paper, we explore to what extent readability might contribute to the perception of literary quality - defined through several alternative measures - in a large fiction corpus of modern and contemporary novels in English, taking into account, instead of one golden standard, different contextual perspectives on literary quality, so as to cover both crowd-based and "expert"-based stan- + +dards of judgment. 
185 + +§ 2 DATA AND METHODS + +The essence of our approach consists in examining whether readability, as measured through five different algorithms, and literary quality, as approximated through six different resources, show any correlation on a large corpus of English-language fiction. We use standard correlation measures (Pearson and Spearman product-moment correlation coefficients ${r}_{p}$ and ${r}_{s}$ , respectively). For inference on the correlation measures, simple Student's t-tests are used. For robustness checks, correlation coefficients were also modelled using a Bayesian ridge model of standardized the variables - although not reported due to limited space. ${}^{2}$ + +§ 2.1 CORPUS + +We use a corpus of modern and contemporary fiction in English, the so-called Chicago Corpus. [3] The Chicago Corpus is a collection of over 9000 novels from 1880 to 2000, representing works of fiction that are widespread in libraries, that is, the works of fiction that have a large number of library holdings as listed on WorldCat, a large-scale, international online library catalogue 4 . The num- + +215 + +${}^{2}$ The code will be publicly available upon acceptance. + +${}^{3}$ While we cannot directly provide access to the corpus, it is possible to contact the authors for requests. + +${}^{4}$ https://www.worldcat.org/about + +${}^{1}$ https://hemingwayapp.com/help.html https://authors.ai/marlowe/ + +216 270 + + < g r a p h i c s > + +(b) Distributions of quality measures. Rating count is visualised with cutoff at 5000 for legibility. + +Figure 2: Distributions of measures + +272 + +273 + +274 + +275 + +277 + +278 + +279 + +280 + +281 + +282 + +283 + +284 + +285 + +217 271 + +222 276 + +286 + +287 ber of holdings was used as a first filtering measure to include or exclude works in the dataset, yet there are still large differences in how many libraries hold each title, so we can use it as a met- + +239 ric to score different titles within the dataset as well. The corpus is unique, to our knowledge, for its diversity and extraordinary representation of famous popular- and genre-fiction, as well as + +244 seminal works from the whole period: key works of modernism and postmodernism as well as Nobel laureates and winners of major literary award. + +247 Still, it should be noted that the Chicago corpus re- + +248 flects a clear cultural and geographical tilt, with a + +249 strong over-representation of Anglophone authors, and features only works either written in or translated into English. This tilt should be taken into + +252 account especially since we correlate textual features in the corpus to readability measures that + +254 were developed - and are particularly successful - in the English language context (Antunes and Lopes, 2019). + +257 + +258 + +259 + +max width= + +X N. Titles N. Authors + +1-3 +Whole corpus 9089 7000 + +1-3 +Pulitzer 53 46 + +1-3 +NBA 104 79 + +1-3 +Hugo 96 47 + +1-3 + +Table 1: Overall titles and authors in the corpus and number of long-listed titles for each award. + +260 + +261 + +264 + +265 + +266 + +267 + +268 + +269 + +288 + +§ 2.2 MEASURES OF QUALITY + +289 + +We use six different measures of literary quality 291 of two main types, heuristically setting up a qual- + +itative distinction between more crowd-based and 293 more expert-based measures. Expert-based measures may be supposed more institutionally pre- + +scribed, where titles are distinguished by appoint- 296 ing committees (as with literary prizes). 
Here, we + +chose to look at three prominent literary prizes in 298 Anglophone literary culture: The Pulitzer Prize, the National Book Award, and the Hugo Awards, + +considering titles that were both long- and short- 301 listed for these prizes. The selection of awards + +allows us to consider a main-stream vs. genre- 303 literature divide in our expert measures, since the first two prizes are assigned mainly to works of + +literary fiction, while the latter is an award given 306 to works of genre fiction (science fiction and fan- + +tasy). 308 + +Crowd-based measures may be considered 309 310 more democratic in the sense of being user-created, for example by users' ratings on + +large scale reading community sites such as 313 GoodReads, or by the effect of popular demand on library acquisitions. We use three standards here: the average ratings of titles on GoodReads (from 0 to 5 stars), the average rating count of titles on + +GoodReads (number of ratings given to a given ti- 318 tle), and the number of libraries that hold a title according to Worldcat. Goodreads ratings and/or rating counts are often favoured in studies of literary + +quality and reception, because they seem to proffer 322 + +more democratic literary evaluations "in the wild", 323 + +324 378 + + < g r a p h i c s > + +Figure 3: Quality standards and flavours of readability + +397 + +398 + +400 + +325 379 + +326 380 + +327 381 + +328 382 + +329 383 + +330 384 + +331 385 + +332 386 + +333 387 + +334 388 + +335 389 + +336 390 + +337 391 + +338 392 + +339 393 + +340 394 + +341 395 + +342 396 + +345 399 + +347 401 + +402 + +403 + +350 404 + +351 considering the large diversity and geographical 352 spread of its nearly 90 million users (Nakamura, 353 2013). In slight contrast to Goodread's ratings, 354 we consider library holdings a conceptually hy- 355 + +356 brid measure, standing between completely free + +357 reader-based votes and expert-driven choices, as + +358 libraries respond to user-demand from within an + +359 institutional structure. + +360 + +361 + +§ 2.3 MEASURES OF READABILITY + +362 For assessing the complexity and/or difficulty of 363 literary texts, we apply various measures of read- 364 ability. Since the ${1920}\mathrm{\;s}$ , and especially with the 365 success of the Flesch and Dale-Chall formulas in 366 the 1950s, combinations of sentence-length and 367 + +368 words and/or syllables have been used to assess the difficulty of a text as proxies of word and sen- 369 + +370 tence complexity (Dale and Chall, 1948). According to Dubay (2004), there were more than 200 + +372 different versions of readability formulas in 1980, while new ones are still introduced and old ones + +374 revised. Still, measures from what Dubay calls + +375 the "classic" readability studies, continue to be the most widely used measures and to prove them- + +377 selves effective in assessing text difficulty (Dubay, + +2004; Stajner et al., 2012) - despite their relative 405 406 simplicity (being counts of two or three aspects of 407 texts). 408 These measures have been applied to a wide 409 range of written productions, from technical and 410 journalistic texts to fiction. Flesch, for example, 411 found that fiction tend to score a Flesch Reading Ease score in the range 70 ; Score ; 90, in contrast + +to scientific text that often score below 30 (Flesch, 414 1948). 
In the present study we used five differ- + +ent "classic" readability algorithms to measure the 416 prose of each book, chosen for their popularity and interpretability ${}^{5}$ . + + * The Flesch Reading Ease is a measure of + +readability based on the average sentence 421 length (ASL), and the average syllables per word (word length)(ASW). It is calculated as follows: + +$$ +\text{ Score } = {206.835} - \left( {{1.015} \times \mathrm{{ASL}}}\right) +$$ + +426 + +$$ +- \left( {{84.6} \times \text{ ASW }}\right) +$$ + +428 + +§ THE FLESCH-KINCAID GRADE LEVEL IS A REVISED + +429 + +430 + +431 version of the Flesch Reading Ease score. + +${}^{5}$ All readability scores were extracted using the textstat package: https://pypi.org/project/textstat/ + +433 Like the former, it is based on the average sentence length (ASL), and the number of syllables per word (ASW). It is calculated as follows: + +$$ +\mathrm{{GL}} = \left( {{0.4} \times \mathrm{{ASL}}}\right) + \left( {{12} \times \mathrm{{ASW}}}\right) - {15} +$$ + + * The SMOG Readability Formula is a readability score introduced by McLaughlin (McLaughlin, 1969). It measures readability based on the average sentence length and number of words with more than 3 syllables (number of polysyllables), applying the formula: + +$$ +\text{ SMOG grading } = 3 + \sqrt{\text{ polysyllablecount }} +$$ + + * The Automated Readability Index is a readability score based on the average sentence length and number of characters per words (word length). It is calculated as follows: + +$$ +{4.71}\frac{\text{ characters }}{\text{ words }} + {0.5}\frac{\text{ words }}{\text{ sentences }} - {21.43} +$$ + + * The New Dale-Chall Readability Formula is a 1995 revision of the Dale-Chall readability score (Chall and Dale, 1995). It is based on the average sentence length (ASL) and the percentage of "difficult words" (PDW) which were defined as words which do not appear on a list of words which 80 percent of fourth-graders would know (Dale and Chall, 1948), contained in the Dale-Chall word-list. [6] It is calculated as follows: + +$$ +\text{ Raw Score } = {0.1579} \times \mathrm{{PDW}} + {0.0496} \times \mathrm{{ASL}} +$$ + +$$ +\text{ If PDW } > 5\% \text{ : Adjusted Score } = +$$ + +$$ +\text{ Raw Score } + {3.6365} +$$ + +All readability scores are represented as a US-grade level, where a higher grade means a more difficult text, except for the Flesch Reading Ease. The Flesch Reading Ease indicates a score between 0 (low readability) and 100 (high readability): a higher number means a more readable text. For this reason in most of our experiments the Flesch Reading Ease looks reversed with respect to the other measures (and is negatively correlated with them). + +§ 3 RESULTS + +486 + +487 + +Pearson's and Spearman's correlations between 488 + +these five readability metrics and commonly used 489 stylometric features show - as a sanity check - that readability measures capture aspects of novels' + +overall style. All measures are similarly correlated 492 to sentence-length (naturally, being a base for all measures) but also to lexical diversity and compressibility, which measure, respectively, complexity at the word- and sequence-level. More- + +over, the correlations between with our "quality 497 scores" show that readability is linked with the ones closer to popularity than to appreciation. 
+ +max width= + +X X Spearman Correlation Scores X + +1-4 +X -0.16 -0.063 0.13 + +1-4 +X 0.13 0.082 0.56 0.1 -0.25 + +1-4 +8 -0.15 -0.11 -0.12-0.06 + +1-4 +X -0.15 -0.061 -0.25 -0.12 -0.50 + +1-4 +X -0.25 -0.22 -0.22-0.25 + +1-4 +X through Avg Setting -1.66 Bating Count + +1-4 + +Figure 4: Correlations between quality standards and flavours of readability. All correlations are statistically significant. + +502 + +504 + +507 + +509 + +Pearsons' r, specifically in its significance testing, relies on the assumption of normally distributed data and it assumes that the two variables have a linear relationship, while Spearmans' $\mathrm{r}$ correlation coefficient is non-parametric, meaning that, while it still assumes a monotonic relation between the two variables, it does not make strong assumptions on the shape of the data. For this reason, Spearman is probably the best overall measure for this study, as we have no reason to assume that all our measures are normally distributed (and + +some are evidently not, as can be seen in Figure 2). 524 For these reasons, we will mainly credit the correlations observed through Spearman'r, although we report both in [2]. + +§ 3.1 READABILITY AND STYLOMETRICS + +529 + +As readability measures are supposed to be measures of style, we compute their correlation with three core stylistic features - sentence length, lexical diversity ${}^{7}$ and textual compressibility ${}^{8}$ - that + +539 have been found linked to perceived literary qual- + +${}^{7}$ We operationalized lexical diversity as the type-token ratio (TTR) of a text, using a common method insensitive to text-length: the Mean Segmental Type-Token Ratio (MSTTR). MSTTR-100 represents the average TTR of local averages in 100-word segments of each text. + +${}^{8}$ Following van Cranenburgh and Bod (2017), for text compressibility, we calculated the compression ratio (origi- + +${}^{6}$ See: https://countwordsworth.com/download /DaleChal-lEasyWordList.txt + +541 ity in previous studies (van Cranenburgh and Bod, 2017; Crosbie et al., 2013; Maharjan et al., 2017; Wang et al., 2019). As can be seen in Figure 1, all readability measures have evident correlations with these three metrics, even though they don't necessarily compute them directly - for example, no readability measure computes text compressibility. However, while compressibility is not obviously correlated to readability, compressibility is a measure of redundancy or formulaicity: it appears that easier texts also have a tendency to be more sequentially repetitive. One readability measure, the new Dale-Chall, correlates with the simple length (word count) of the novels. This is a surprising effect, since, like the other measures, the new Dale-Chall is not length-dependent. As it is the only measure looking at the texts' lexicon through an index of difficult words, it seems to be picking on a tendency for longer books to have a slightly more complex vocabulary. + +§ 3.2 RELATION WITH QUALITY - GOODREADS AND LIBRARIES + +As discussed before, we correlate readability with three possible proxies of perceived quality of novels: GoodReads' average ratings, GoodReads' rating count, and the number of libraries holding a given title according to WorldCat ${}^{9}$ . We could consider GoodReads' rating count to be a measure closer to the concept of popularity or fame, while GoodReads' average rating tells us about the appreciation of the title independently from how many readers it had. 
As can be seen in Figure 4, all of our readability measures show a degree of correlation with the number of library holdings and the GoodReads' rating count: more readable books tend to have more ratings and tend to be held by more libraries. + +The average rating of titles on GoodReads, on the other hand, shows a significant correlation + +583 with only one of the measures, the Dale-Chall readability score, while it appears to have no link with the other four. Interestingly, the Dale-Chall score is the only measure that uses a precompiled list of words to estimate the number of difficult words in a text, instead of relying entirely on the features of the text at hand. While this could make + +593 + +594 + + < g r a p h i c s > + +Figure 5: The likelihood of being acquired by less than 100 libraries increases quite steadily with difficulty of reading (Spearman's rho 0.84), as the probability of appearing in more than 500 declines. Readability is here measured as Flesch-Kincaid Grade Level. + +595 + +596 + +597 + +598 + +600 + +605 + +610 + +615 + + < g r a p h i c s > + +Figure 6: The probability of being rated by less than 100 users in Goodreads strongly correlates with the difficulty of the texts as measured, in this case, by the Flesch-Kincaid Grade Level. + +617 + +619 + +620 + +622 + +625 + +627 + +632 + +it a more fragile measure (due to linguistic change 635 + +and differences between genres) it appears to ac- 637 tually give it an increased modelling power for the tastes of GoodReads' average readers. It is worth mentioning that GoodReads' average ratings do not correlate, in our corpus, with the books' publication date - so a direct effect of language evolution on the measure's index can be excluded. Simplifying a bit, this points to the idea that the ease of vocabulary might relate to the average apprecia- + +tion of a book as well as its fame, so that texts with 646 + +a simpler lexicon, together with shorter sentences 647 + +nal bit-size/compressed bit-size) using bzip2, a standard file-compressor. + +${}^{9}$ Naturally this selection remains arbitrary. Expanding to other measures of perceived quality is an ongoing process. + +648 + + < g r a p h i c s > + +Figure 7: Flavours of readability and awards: overall distributions. + +649 + +650 + +651 + +652 + +653 + +654 + +659 or words, are both more read and better liked. + + < g r a p h i c s > + +Figure 8: Flavours of readability and awards: mean value and standard error. + +In Figure 3 we show the relation of each readability measure with library holdings, average Goodreads ratings and number of Goodreads' ratings. As can be seen, we should interpret the results with some caution, as the relation might not be linear: it could be that the best interpretation of the relation between, for example, readability and library holdings is modelled with a curve rather than a straight line. Yet, it appears quite evident at a glance that the probability of being held by a + +681 large number of libraries, and of being rated by a large number of Goodreads users, decreases dramatically when the difficulty of the text increases beyond a certain level. As we show in Figure 5, the probability of being acquired by less than 100 + +686 libraries grows quite clearly with the text's dif- + +688 ficulty, and the probability of being acquired by more than 500 decreases accordingly, with an in- 689 teresting peak at a medium-low point of difficulty. 
The effect is even more evident when considering the probability of having fewer than 100 ratings on GoodReads, as appears in Figure 6. Appearing in 90 libraries is still a quite impressive measure of success, but the majority of the titles in the Chicago corpus go beyond that threshold, as well as beyond the threshold of 100 user ratings on GoodReads, so the difference in probabilities seems to point to a relative decline in popularity or fame as the texts' surface complexity increases.

| Measure | Libs. | Rat. n. |
|---|---|---|
| Flesch grade | -0.16 (-0.1) | -0.06 (-0.06) |
| Flesch ease | 0.13 (0.07) | 0.08 (0.09) |
| SMOG | -0.15 (-0.1) | -0.11 (-0.11) |
| ARI | -0.15 (-0.01) | 0.06 (-0.06) |
| New Dale-Chall | -0.25 (-0.2) | -0.22 (-0.2) |
| Flesch grade | 0.84 | 0.83 |
| Flesch ease | -0.4 | -0.48 |
| SMOG | 0.76 | 0.81 |
| ARI | 0.73 | 0.71 |
| New Dale-Chall | 0.78 | 0.82 |

Table 2: On the upper part of the table, Spearman's r (Pearson's in parentheses) for each readability flavour and quality measure. On the lower part, Spearman's r with the probability of being in fewer than 100 libraries or having fewer than 100 ratings.

§ 3.3 RELATION WITH QUALITY - LITERARY AWARDS

The second type of quality check we selected is a categorical one: whether or not a title was long-listed for one of three prestigious awards - the Pulitzer Prize, the National Book Award and the Hugo Award.

As we show in Figures 7 and 8, as well as in Table 3, the difference between long-listed and non-long-listed books in terms of readability is small but significant for almost all measures, with long-listed books being systematically harder to read than their non-listed counterparts - again with the exception of the new Dale-Chall measure. Using this kind of quality proxy, we do not observe a value of reading ease but possibly its "dark side", such as a perceived simplification or reduced expressive power of novels.

It may not be surprising that these different standards exhibit different preferences and perspectives on quality. Literary awards are notoriously elitist, even, perhaps, in a way that is wanted by their readership: the committee of the Booker Prize was accused of populism in 2011 when announcing "readability" as a new criterion for the award (Clark, 2011).

| Measure | T-statistic | p-value |
|---|---|---|
| Flesch grade | 3.78 | 0.0001 |
| Flesch ease | -4.66 | 0.000005 |
| SMOG | 3.69 | 0.0002 |
| ARI | 3.6 | 0.0003 |
| New Dale-Chall | 1.8 | 0.07 |

Table 3: T-statistic and p-value for the difference between long-listed and non-listed titles for each readability measure. The only measure that does not fall under the formal threshold of statistical significance is the new Dale-Chall.
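A sketch of the significance test behind Table 3, under assumed inputs: the two samples below are synthetic stand-ins for the readability scores of long-listed and non-listed titles. The paper does not state which t-test variant it uses; Welch's unequal-variance version is shown here.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Synthetic Flesch-Kincaid grades; long-listed titles drawn slightly harder,
# mirroring the direction of the effect in Table 3 (not the paper's data).
longlisted = rng.normal(loc=9.0, scale=2.0, size=120)
not_listed = rng.normal(loc=8.3, scale=2.0, size=800)

t, p = ttest_ind(longlisted, not_listed, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4g}")
```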
§ 4 CONCLUSIONS AND FUTURE WORKS

Readability measures proved significantly consistent, both with each other and with other relevant stylometric features, when applied to modern and contemporary fiction. Their relation with different proxies of literary quality is intriguing: more popular works, in terms of the number of ratings on GoodReads and of the number of libraries willing to hold a copy of the book, correlate with readability, while the appreciation of readers alone (independently of their number) seems to hold almost no link with it, and long-listed titles have an inverse relation with readability, tending toward slightly more difficult prose on the readability metrics' scale. It can be argued that we are seeing the divide between high-brow and "popular" literature, but the lack of correlation with GoodReads' average rating might point to a slightly more nuanced conclusion. It is worth noting that the only measure showing a meaningful correlation with all of the crowd-based quality metrics was the new Dale-Chall measure of readability, also the only one explicitly focusing on the presence of widely understood lexicon in a text; yet it was also the only one showing no significant difference between long-listed and non-long-listed titles. The only other measure having a correlation higher than 0.1 with average GoodReads ratings was SMOG, which, while not using a list of hard words, considers "difficult words" in its own way, using the number of polysyllabic words as a central element of its computation.

If we were to draw rough conclusions from these observations, it would seem that surface-level simplicity of style, in terms of words per sentence, characters per word, and similar metrics, "helps" a text's popularity, but has nothing to do with its likelihood of being highly liked by its readers - and it even slightly hinders its chances of receiving a prestigious award. In other words, surface-level simplicity improves a text's quality only if we equate quality with popularity or fame. Similarly, looking at threshold-based probability distributions showed that increasing the difficulty of a novel's style might indeed hinder its diffusion across libraries and GoodReads users. Using a more common vocabulary might also increase readers' appreciation of the text, but only when it comes to crowd-based measures. On the other hand, the correlations of the number of ratings and library holdings with readability measures do not appear linear or monotonic, meaning that there might also be a "point of balance" between too easy and too difficult that maximizes a novel's fame. The same might be true for the likelihood of a novel being long-listed for one of the three awards we took into consideration.

Overall, readability seems to have an impact on different perceptions of literary quality, although its role and interaction with other features of the text remain to be defined.

Further research points towards extending the set of correlations to more proxies of quality, as well as to more sophisticated stylometric measures, to see whether interactions can provide a clearer picture of what we perceive as literary quality. Other further work could be to check the correlations of our measures with publication date: readability might depend on time, either in the sense of the evolution of the average novelistic style, overall language change, or even cultural selection, which would make the passage of time a particular form of "quality test" of its own accord.
+ +863 \ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/tcxy7vRVKlg/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/tcxy7vRVKlg/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..79450367e41b69af73f56554f2f7d0f419eedc48 --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/tcxy7vRVKlg/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,749 @@ +000 054 + +# Training and Evaluating Norwegian Sentence Embedding Models + +001 055 + +002 056 + +003 057 + +004 058 + +005 059 + +006 060 + +## Abstract + +We train and evaluate Norwegian sentence embedding models using the contrastive learning methodology SimCSE. We start + +016 from pre-trained Norwegian encoder models and train both unsupervised and super- + +018 vised models. The models are evaluated on a machine-translated version of semantic textual similarity datasets, as well as bi- + +021 nary classification tasks. We show that we can train good Norwegian sentence em- + +023 bedding models, that clearly outperform the pre-trained encoder models, as well as + +026 the multilingual mBERT, on the task of sentence similarity. + +028 + +## 1 Introduction + +Recently there have been a huge increase in the + +031 capabilities of natural language processing systems. The new dominant paradigm is using large + +033 language models such as BERT (Devlin et al., 2019) or GPT (Radford et al., 2018) as a starting model which one adapts to any given task one wishes to solve. There exists several different versions of BERT-type encoder models in Norwegian + +038 (Kummervold et al., 2021), (Kutuzov et al., 2021), (Pyysalo et al., 2021). It is well-known that BERT-type models that give contextual words embed-dings do not give particularly good sentence em-beddings (Reimers and Gurevych, 2019). For this reason we train and evaluate Norwegian sentence embedding models, using the pre-trained encoder models as starting points. + +We train models using the state of the art Sim-CSE methodology, similarly to the original paper (Gao et al., 2021). Like them, we train both unsupervised and supervised models. We start with a pretrained bidirectional language encoder model such as BERT or RoBERTa (Liu et al., 2019). For + +053 the unsupervised version we sample texts from the + +061 + +062 + +063 + +064 + +Norwegian Colossal Corpus (NCC) dataset (Kum- 065 mervold et al., 2022). We then pass them through + +the model using two different dropout masks and 067 predict contrastively which pairs within a batch represent the same text. For the supervised ver- + +sion, we train on a machine-translated version of 070 natural language inference (NLI) data, where we use sentences related by "entailment" as positive sentences, and sentences labeled as contradiction as hard negative sentences. We train on both the Norwegian dataset, and a combined dataset of + +both Norwegian and English NLI data, and show 077 that the latter gives better results for sentence representations in Norwegian. We evaluate our mod- + +els on a machine translated version of semantic 080 textual similarities (STS) datasets, as well as on + +the sequence classification problems in Norwe- 082 gian "Talk of Norway" and the binary classification version of the NoReC review dataset (Velldal + +et al., 2018). 085 + +Our main contributions are: + +087 + +1. We train and evaluate Norwegian unsupervised and supervised sentence embedding + +models. 
2. We demonstrate a new way to compare the various existing Norwegian language models, by measuring their performance after training them to make sentence embeddings.

3. We show that our sentence encoders sometimes get better performance than the base encoder on classification. In particular, we obtain new state-of-the-art results on the classification problem "Talk of Norway".

4. Through our experiments we illustrate the usefulness of machine-translated datasets for training and evaluating Norwegian language models. In particular, we show that supervised training on machine-translated data outperforms unsupervised training on Norwegian data.

## 2 Related work

The fundamental technique we build on is that of training large transformer models (Vaswani et al., 2017). In particular, we utilize the large encoder models Bidirectional Encoder Representations from Transformers (BERT) and Robustly Optimized BERT (RoBERTa) as pre-trained starting points.

Our work builds upon existing language models trained for Norwegian. The National Library of Norway has trained BERT models for Norwegian (Kummervold et al., 2021), which we call NB-BERT and which exist in both base and large sizes. The language technology group at the University of Oslo has trained its own Norwegian BERT, called NorBERT (Kutuzov et al., 2021), and there is a WikiBERT model trained on Norwegian Wikipedia (Pyysalo et al., 2021). We also test the multilingual version of BERT (Devlin et al., 2019), which is trained on Norwegian along with many other languages.

Our work uses existing methodology for making sentence embedding models. The first paper to train BERT specifically for better sentence representations was the Sentence-BERT paper (Reimers and Gurevych, 2019), which trained sentence embedding models using siamese networks. We build upon the newer Simple Contrastive Learning of Sentence Embeddings (SimCSE) methodology (Gao et al., 2021), which uses a contrastive training objective to create sentence embeddings from a pre-trained encoder. The idea behind both of these works is to find a training procedure that better extracts the knowledge about sentences that already exists in the pre-trained encoder model.

## 3 Data

For the unsupervised models, we sample data from the Norwegian Colossal Corpus (NCC) (Kummervold et al., 2022). This is a collection of smaller Norwegian text corpora that has been assembled into one corpus by the National Library of Norway for training language models. It is primarily a Norwegian corpus, although other languages are present in small amounts: the dataset description estimates that 87% of the documents are in Norwegian, with about 6-7% of the documents in English and the rest in other European languages (mostly other Nordic languages).

Sentence: Deltakerne mente at hvis interessenter var seriøse om å forbedre finansrapporteringsmodellen, ville en gruppe bli opprettet og finansiert spesielt for dette formålet. ("The participants believed that if stakeholders were serious about improving the financial reporting model, a group would be created and funded specifically for this purpose.")

Positive: Deltakerne forventer at seriøse interessenter vil danne en gruppe for å forbedre finansrapporteringsmodellen. ("The participants expect that serious stakeholders will form a group to improve the financial reporting model.")

Negative: A group was created to improve the financial reporting model.

Figure 1: An example of a triplet of sentences of mixed language in the Norwegian/English NLI dataset.

We sample 1 million texts from the dataset for unsupervised training.
Some are longer than one sentence, but all are truncated to a maximum of 32 tokens before training, so they are all approximately sentence length.

For supervised training we use data collected for the task of natural language inference (NLI). This task consists of taking a pair of sentences and predicting the relationship between them as "entailment", "neutral" or "contradiction". The authors of the SimCSE paper use NLI data to create triples of a sentence with one positive and one hard negative, and show that this data works well for training sentence models with contrastive learning; we follow this practice. We use a dataset that has been curated for training in Norwegian by the National Library of Norway.${}^{1}$ The original data is based on the English Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) and the Multi-Genre Natural Language Inference (MNLI) dataset (Williams et al., 2018). The Norwegian data is machine translated from the MNLI dataset and has about 128 thousand triples. There is also a combined Norwegian and English version of the dataset, made by joining the translated Norwegian MNLI data with English MNLI and SNLI data.${}^{2}$ Also included are extra combined Norwegian/English sentence triples: for each of the translated triples there is a joint Norwegian/English triple consisting of one or two sentences in each of English and Norwegian; see Figure 1 for an example. The English/Norwegian dataset contains about 531 thousand triples of sentences.

---

${}^{1}$ https://huggingface.co/datasets/NbAiLab/mnli-norwegian

${}^{2}$ The same English data that was used to train English SimCSE: https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse

---

Sentence 1: en mann skjærer opp en agurk. ("a man slices up a cucumber.")
Sentence 2: en mann skjærer en agurk. ("a man cuts a cucumber.")
Similarity: 4.2

Sentence 1: en mann spiller harpe. ("a man plays a harp.")
Sentence 2: en mann spiller et keyboard. ("a man plays a keyboard.")
Similarity: 1.5

Figure 2: Examples from the translated STS-Benchmark dataset. Similarity ratings are from 0-5.

For evaluation we also machine translate the standard English datasets for semantic textual similarity: STS12-16 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS-Benchmark (Cer et al., 2017), and SICK relatedness (Marelli et al., 2014). The task is to predict how similar a pair of sentences are to each other on a scale of 0-5. We use these datasets only for validation and testing, never for training. In Figure 2 we show two examples from the translated STS-Benchmark dataset.

The usage of translated datasets is a weakness compared to having original data in Norwegian. This project can, however, also be viewed as an exploration of what performance is possible to obtain from auto-translated English datasets: to the degree they are shown to be useful, one will have much more data to work with in Norwegian language processing. We note that for sentence similarity, a similar exploration of translated data has been done for Swedish (Isbister and Sahlgren, 2020). They conclude that they do not recommend using automatically translated STS datasets for fine-tuning, but that doing so should have limited negative consequences for comparing models. We partly follow their recommendation: we only use translated STS data for validation and evaluation, but we do perform supervised training on translated NLI data.
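As a concrete illustration of how such triples can be derived from MNLI-style data, here is a minimal sketch. The field names (`premise`, `hypothesis`, `label`) are assumptions about the dataset schema, and the two records are toy examples, not items from the actual dataset.

```python
from collections import defaultdict

# Toy MNLI-style records; the paper uses the machine-translated
# NbAiLab/mnli-norwegian dataset (field names here are assumptions).
records = [
    {"premise": "En mann spiller harpe.",
     "hypothesis": "En mann lager musikk.", "label": "entailment"},
    {"premise": "En mann spiller harpe.",
     "hypothesis": "En mann spiller fotball.", "label": "contradiction"},
]

# Group hypotheses by premise and relation; if a premise has several
# hypotheses with the same label, this sketch simply keeps the last one.
by_premise = defaultdict(dict)
for r in records:
    by_premise[r["premise"]][r["label"]] = r["hypothesis"]

# Emit (sentence, positive, hard negative) triples as in SimCSE.
triples = [
    (premise, rel["entailment"], rel["contradiction"])
    for premise, rel in by_premise.items()
    if "entailment" in rel and "contradiction" in rel
]
print(triples)
```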
## 4 Experiments

Our experiments closely follow the implementations in the SimCSE paper. We start with a pre-trained encoder model that is either a BERT or a RoBERTa.

For unsupervised training we sample one million texts from the NCC dataset. We then pass each text through the model using two different dropout masks, obtaining two different text representations ${s}_{i}$ and ${s}_{i}^{+}$ for each text. Here dropout functions as a form of continuous augmentation of the embeddings. We then contrastively predict which pairs of texts within a batch are the same, using cross-entropy loss on the cosine similarity scores. In other words, the loss for text $i$ is given by

$$
\mathrm{loss}_{i} = - \log \frac{e^{\mathrm{sim}(s_{i}, s_{i}^{+})/\tau}}{\sum_{j=1}^{b} e^{\mathrm{sim}(s_{i}, s_{j}^{+})/\tau}},
$$

where sim is cosine similarity, $b$ is the batch size, and $\tau$ is a temperature hyperparameter, which we simply set to 0.05, the outcome of the optimization done in the SimCSE paper.

For training unsupervised models, the models we start from are given by their names on Hugging Face as

- bert-base-cased [English model]
- roberta-base [English model]
- bert-base-multilingual-cased
- TurkuNLP/wikibert-base-no-cased
- ltgoslo/norbert2
- NbAiLab/nb-bert-base
- NbAiLab/nb-bert-large

The English models are included as a sanity check: since we are using automatically translated datasets to choose the best models, we want to compare their performance with some models that are expected to perform worse than the Norwegian models. For the same reason we also test on the English STS datasets.

We train the supervised models using NLI data where each sentence has one paired sentence labeled as entailment, which is regarded as a positive sample, and one sentence labeled as contradiction, which is considered a negative sample.
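A minimal PyTorch sketch of this objective, together with the hard-negative extension of equation (1) used for the supervised models below. This is an illustration of the loss, not the authors' training code; the random tensors stand in for encoder outputs.

```python
import torch
import torch.nn.functional as F

def simcse_loss(s, s_pos, s_neg=None, tau=0.05):
    """In-batch contrastive loss matching the equations in the text.

    s, s_pos: (b, d) embeddings of the same batch of texts under two dropout
    masks (unsupervised), or of sentences and their entailed positives
    (supervised). s_neg: optional (b, d) embeddings of the contradiction
    sentences, appended as extra candidates as in equation (1).
    """
    # Pairwise cosine similarities, scaled by the temperature: (b, b).
    sim = F.cosine_similarity(s.unsqueeze(1), s_pos.unsqueeze(0), dim=-1) / tau
    if s_neg is not None:
        sim_neg = F.cosine_similarity(s.unsqueeze(1), s_neg.unsqueeze(0), dim=-1) / tau
        sim = torch.cat([sim, sim_neg], dim=1)  # (b, 2b) candidate scores
    # The positive for row i sits at column i; cross-entropy gives -log softmax.
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(sim, labels)

# Toy usage: random tensors stand in for the two dropout-masked encoder passes.
b, d = 8, 768
h1, h2, h_neg = torch.randn(b, d), torch.randn(b, d), torch.randn(b, d)
print(simcse_loss(h1, h2))          # unsupervised objective
print(simcse_loss(h1, h2, h_neg))   # supervised objective with hard negatives
```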
| Model | Avg. STS |
|---|---|
| BERT | 34.29 |
| RoBERTa | 25.56 |
| mBERT | 48.34 |
| WikiBERT | 42.21 |
| NorBERT | 54.42 |
| NB-BERT-base | 50.41 |
| NB-BERT-large | 49.90 |
Table 1: Average performance of the models before training, using the average of the last layer, on the Norwegian STS datasets.

We thus obtain three different sentence representations ${s}_{i}$, ${s}_{i}^{+}$, ${s}_{i}^{-}$. As in the SimCSE paper, we train contrastively, trying to predict the positive pairs, and add the negative sentence representation ${s}_{i}^{-}$ to the loss function as follows:

$$
\mathrm{loss}_{i} = - \log \frac{e^{\mathrm{sim}(s_{i}, s_{i}^{+})/\tau}}{\sum_{j=1}^{b} \left( e^{\mathrm{sim}(s_{i}, s_{j}^{+})/\tau} + e^{\mathrm{sim}(s_{i}, s_{j}^{-})/\tau} \right)}
$$

(1)

For training supervised models we start with the following models:

- bert-base-multilingual-cased
- TurkuNLP/wikibert-base-no-cased
- ltgoslo/norbert2
- NbAiLab/nb-bert-base
- NbAiLab/nb-bert-large

We train with the same settings as in the SimCSE paper: we set a max sequence length of 32 and use the learning rates and batch sizes given in the appendix of the SimCSE paper (which vary by model type and size). Each model is trained on a single NVIDIA 3090 GPU. For some models we have to use gradient accumulation to achieve the correct batch size due to lack of memory. This changes the training dynamics a bit, since the contrastive loss depends on the entire batch, but we do not see any noticeable effect on the results. We train with the Adam optimizer with linear weight decay and put a multi-layer perceptron (MLP) on top of the model during training. We train unsupervised for one epoch and supervised for three. The best model is selected by evaluating on the dev part of the STS-Benchmark dataset. At evaluation time we test both with and without this MLP, and find that testing without the MLP generally gives slightly better results. We train three versions of each model and report average scores.

The models are also fine-tuned on two Norwegian sequence classification tasks. Talk of Norway (ToN) is a subset of the Norwegian parliament speeches dataset (Lapponi et al., 2018), selected in (Kummervold et al., 2021), where the task is to classify whether a speech was given by SV or FrP (politically left or right, respectively).${}^{3}$ NoReC is a dataset of Norwegian reviews from different domains such as movies, video games and music (Velldal et al., 2018). From this dataset one can extract a binary classification task by taking the subset of reviews that are clearly positive or negative and letting the task be to classify them as such (Øvrelid et al., 2020). We take the text representations made by the model before the MLP, add a linear classification layer on top, and fine-tune the entire model on the training dataset. For both fine-tuning datasets we do a grid search over the following hyperparameters (the same ones as in the fine-tuning examples in the appendix of the original BERT paper (Devlin et al., 2019)):

- epochs: 2, 3, 4
- learning rate: 2e-5, 3e-5, 5e-5
- batch size: 16, 32

We use the macro F1 score on the validation set to select the best model for each training run. We do three training runs and report the average of the test scores.

## 5 Results sentence similarity

We evaluate the trained models on the semantic textual similarity datasets, both on the Norwegian versions and on the original English ones.
We report Spearman's correlation for the STS datasets.

### 5.1 Evaluation in Norwegian

In Table 1 we see the average performance on the Norwegian STS datasets before training, using the average of the last layer to compare embeddings. We also tested using the average of the first and last layers (giving similar numbers) and using the "cls" token (giving worse numbers). This gives us a baseline against which to measure how much the models learn from the training.

---

${}^{3}$ https://huggingface.co/datasets/NbAiLab/norwegian_parliament

---
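Concretely, the evaluation computes the cosine similarity of each sentence pair's embeddings and reports Spearman's correlation against the gold 0-5 scores. A minimal sketch, with random embeddings standing in for model outputs:

```python
import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb1, emb2, gold):
    """Spearman correlation between cosine similarities and gold STS scores."""
    emb1, emb2 = np.asarray(emb1), np.asarray(emb2)
    cos = (emb1 * emb2).sum(1) / (
        np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1))
    return spearmanr(cos, gold).correlation * 100  # scaled to 0-100 as in the tables

# Toy call: random vectors stand in for the embeddings of five sentence pairs.
rng = np.random.default_rng(0)
print(sts_spearman(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)),
                   [4.2, 1.5, 3.0, 0.5, 2.2]))
```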
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
|---|---|---|---|---|---|---|---|---|
| BERT | 55.21 | 49.64 | 49.29 | 63.68 | 54.39 | 54.67 | 50.93 | 53.97 |
| RoBERTa | 60.30 | 59.12 | 57.15 | 68.73 | 64.33 | 64.04 | 54.39 | 61.15 |
| mBERT | 60.88 | 62.31 | 55.91 | 70.78 | 66.80 | 61.87 | 57.13 | 62.24 |
| WikiBERT | 63.38 | 70.21 | 62.63 | 74.04 | 70.90 | 70.88 | 62.52 | 67.79 |
| NorBERT | 56.41 | 65.33 | 54.32 | 68.95 | 68.00 | 62.40 | 64.54 | 62.85 |
| NB-BERT-base | 59.40 | 70.70 | 57.93 | 71.87 | 69.94 | 69.25 | 63.98 | 66.15 |
| NB-BERT-large | 70.45 | 80.80 | 72.79 | 81.53 | 78.41 | 79.35 | 69.18 | 76.07 |
+ +488 + +489 + +490 + +491 + +493 + +494 + +433 487 + +438 492 + +(a) Performance of unsupervised models on the Norwegian STS datasets. + +
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
|---|---|---|---|---|---|---|---|---|
| mBERT | 73.43 | 69.09 | 70.84 | 81.50 | 73.82 | 76.47 | 72.79 | 73.99 |
| WikiBERT | 73.29 | 64.48 | 69.24 | 80.32 | 74.51 | 75.42 | 69.94 | 72.45 |
| NorBERT | 74.30 | 70.69 | 72.09 | 82.56 | 76.91 | 79.33 | 73.74 | 75.66 |
| NB-BERT-base | 76.31 | 77.20 | 75.43 | 84.47 | 77.69 | 82.14 | 77.97 | 78.75 |
| NB-BERT-large | 77.07 | 83.65 | 80.28 | 86.24 | 81.87 | 84.37 | 78.44 | 81.70 |
+ +495 + +496 + +497 + +498 + +499 + +500 + +501 + +(b) Performance on the Norwegian STS datasets of supervised models trained on both Norwegian and English NLI data. 502 + +
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
|---|---|---|---|---|---|---|---|---|
| mBERT | 69.28 | 71.50 | 69.44 | 78.12 | 74.38 | 71.12 | 67.70 | 71.65 |
| WikiBERT | 70.14 | 71.18 | 71.79 | 77.56 | 76.20 | 74.20 | 67.32 | 72.63 |
| NorBERT | 70.79 | 74.46 | 72.44 | 80.66 | 77.73 | 76.65 | 71.56 | 74.90 |
| NB-BERT-base | 72.41 | 79.22 | 74.67 | 81.47 | 77.72 | 78.49 | 73.50 | 76.78 |
| NB-BERT-large | 74.67 | 83.65 | 79.47 | 84.15 | 81.82 | 82.25 | 74.75 | 80.11 |
+ +503 + +504 + +505 + +506 + +507 + +509 (c) Performance on the Norwegian STS datasets of supervised models trained on Norwegian NLI data. + +Table 2: Results of our models tested on the Norwegian STS datasets. 512 (giving worse numbers). Thus we have a baseline to compare how much the models have learned from the training. + +In Table 2a we see the performance of our unsupervised models on the Norwegian STS datasets. These are the results when we test without the MLP, which on average performs slightly better than using MLP also for testing. + +In Table 2b we see the results from training supervised models on the combination of Norwegian and English NLI data, while Table 2c shows the performance when training on only Norwegian NLI data. We see that training with English included improves performance over merely training in Norwegian for all models. + +We see that the supervised models perform much better than the unsupervised ones. This would usually not be surprising, but considering the supervised data is automatically translated and therefore presumably of lower quality than the unsupervised data, it is interesting to note. + +### 5.2 Evaluation in English + +In Table 3a we show the results from testing our + +485 unsupervised models on the English dataset. In + +Table 3b we show the results from testing our su- 514 pervised models trained on the combined English and Norwegian dataset on the English STS data, while Table 3c shows the results for supervised models trained only on Norwegian data. + +519 + +Since we have automatically translated the STS data, we are unsure how accurate the ground truth + +labels in Norwegian will be, since there will be 522 examples of sentences where the similarity of the + +sentences changes because of differing transla- 524 tions. However we think that this should not influence comparisons between different models very much. This is supported by the fact that the internal ranking between models for the Norwegian + +and the English dataset is the same among the Nor- 529 wegian unsupervised models. (English models unsurprisingly are higher in the rankings when tested on English) + +One of the more interesting findings in this pa- 534 per is how strong performance our models get on the English STS data. NB-BERT-base was initialized from the mBERT checkpoint which can + +partly explain this, but not all models was started 538 + +from a model pre-trained in English. The un- 539 + +540 594 + +
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
|---|---|---|---|---|---|---|---|---|
| BERT (English) | 54.76 | 70.77 | 57.39 | 69.32 | 69.19 | 61.66 | 66.29 | 64.20 |
| RoBERTa (English) | 65.26 | 77.06 | 67.09 | 76.88 | 76.71 | 75.32 | 65.60 | 71.99 |
| mBERT | 63.56 | 73.10 | 63.95 | 74.67 | 73.56 | 68.58 | 61.61 | 68.43 |
| WikiBERT | 64.68 | 77.60 | 67.04 | 76.20 | 76.30 | 74.63 | 65.34 | 71.68 |
| NorBERT | 52.96 | 62.30 | 54.99 | 67.45 | 69.83 | 63.68 | 62.40 | 61.94 |
| NB-BERT-base | 56.23 | 72.06 | 57.93 | 68.71 | 71.09 | 67.25 | 61.63 | 64.99 |
| NB-BERT-large | 72.54 | 83.68 | 76.08 | 83.03 | 81.09 | 81.32 | 68.80 | 78.08 |
+ +596 + +597 + +598 + +599 + +600 + +601 + +602 + +541 595 + +(a) Performance of unsupervised models on English STS datasets. + +
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
|---|---|---|---|---|---|---|---|---|
| mBERT | 76.88 | 79.69 | 77.58 | 84.99 | 78.52 | 81.36 | 77.30 | 79.47 |
| WikiBERT | 72.45 | 59.56 | 67.08 | 80.87 | 75.21 | 75.31 | 74.01 | 72.07 |
| NorBERT | 73.39 | 69.40 | 72.65 | 83.10 | 77.30 | 80.48 | 76.55 | 76.13 |
| NB-BERT-base | 76.93 | 78.78 | 77.76 | 85.28 | 80.29 | 82.96 | 78.49 | 80.07 |
| NB-BERT-large | 78.30 | 85.92 | 81.78 | 87.11 | 83.24 | 85.72 | 79.56 | 83.09 |
+ +603 + +604 + +605 + +606 + +607 + +608 + +609 + +(b) Performance of supervised models on English STS datasets fine-tuned on both Norwegian and English MNLI. (c) Performance of supervised models on English STS datasets fine-tuned on Norwegian MNLI. + +
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
|---|---|---|---|---|---|---|---|---|
| mBERT | 72.62 | 79.36 | 75.84 | 81.87 | 79.70 | 77.48 | 70.18 | 76.72 |
| WikiBERT | 65.47 | 65.30 | 67.40 | 76.86 | 73.12 | 68.91 | 60.59 | 68.24 |
| NorBERT | 66.90 | 68.62 | 69.63 | 79.35 | 76.23 | 73.38 | 69.66 | 71.97 |
| NB-BERT-base | 71.57 | 80.30 | 76.30 | 81.55 | 79.23 | 78.09 | 71.12 | 76.88 |
| NB-BERT-large | 76.42 | 85.58 | 81.23 | 85.49 | 83.21 | 83.15 | 75.04 | 81.45 |
(c) Performance of supervised models on the English STS datasets, fine-tuned on Norwegian MNLI.

Table 3: Results of our models tested on the English STS datasets.

The unsupervised NB-BERT-large achieves a score of 78.08 on English STS. For comparison, the best unsupervised model in the original SimCSE paper, SimCSE-RoBERTa-large, achieved a score of 78.90. We thus have a model pre-trained on a Norwegian corpus (containing some English), further trained unsupervised in Norwegian, that scores less than 1% below the best English model trained in English. This model is also better than the best unsupervised English model in the original Sentence-BERT paper. The supervised NB-BERT trained only on Norwegian NLI achieved a score of 81.45, while the version trained on Norwegian and English NLI achieved a score of 83.09. For comparison, the original English supervised SimCSE-BERT-base got a score of 81.57 and SimCSE-RoBERTa-large 83.76. We thus achieve comparable performance between a supervised Norwegian large BERT and a supervised English base BERT when testing in English. Our best supervised model is less than 1% away from the best English SimCSE model, although this is less surprising than for the unsupervised models, since in this case we also fine-tune our model on English NLI. We also note that our best supervised model trained on only Norwegian is better than the best supervised English model in the Sentence-BERT paper. It thus seems the models learn a lot that is useful for English sentence similarity, even though the pre-training is mostly in Norwegian. The strong performance of the NB-BERT models in English was already noted in (Kummervold et al., 2021).

To better understand the above findings, we tested the English supervised SimCSE-RoBERTa-large on Norwegian STS, where it achieved an average score of only 54.23. Thus a very good English model scores badly in Norwegian, while a very good Norwegian model scores well in English. This might indicate that the reason the Norwegian models all perform so well in English is that there is enough English in the Norwegian training data (probably including many snippets within the Norwegian parts) that the models learn quite a lot of English.

| Model | Macro F1 |
|---|---|
| BERT | 76.7 |
| RoBERTa | 79.8 |
| mBERT | |
| WikiBERT | |
| NorBERT | |
| NB-BERT-base | 82.7 |
| NB-BERT-large | 89.7 |

(a) Performance of unsupervised models when fine-tuned on the Talk of Norway dataset.

| Model | Macro F1 |
|---|---|
| mBERT | 79.3 |
| WikiBERT | 82.6 |
| NorBERT | 85.7 |
| NB-BERT-base | 83.4 |
| NB-BERT-large | 89.3 |

(b) Performance of supervised models trained on Norwegian NLI when fine-tuned on the Talk of Norway dataset.

| Model | Macro F1 |
|---|---|
| mBERT | 79.2 |
| WikiBERT | 81.1 |
| NorBERT | 84.9 |
| NB-BERT-base | 83.3 |
| NB-BERT-large | 89.3 |

(c) Performance of supervised models trained on Norwegian and English NLI when fine-tuned on the Talk of Norway dataset.

Table 4: Performance of our models on the ToN dataset.

## 6 Results classification

We report the macro F1 score for the binary classification tasks.

### 6.1 ToN binary classification

In Table 4a we see the performance of the unsupervised models when fine-tuned on the Talk of Norway dataset. In Table 4b we see the performance of the supervised models trained on Norwegian NLI and then fine-tuned on the ToN dataset, while Table 4c shows the performance when training on both Norwegian and English NLI.
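The classification numbers in Tables 4 and 5 come from the fine-tuning protocol described in Section 4 (grid search plus macro-F1 selection). A schematic sketch of that loop, where `train_and_eval` is a hypothetical stand-in for fine-tuning the encoder with a linear classification head:

```python
from itertools import product
from sklearn.metrics import f1_score

def train_and_eval(epochs, lr, batch_size):
    # Placeholder: in the real setup this fine-tunes the sentence encoder with
    # a linear head and returns validation labels and predictions.
    return [0, 1, 1, 0], [0, 1, 0, 0]

best = None
# The grid matches the hyperparameters listed in Section 4.
for epochs, lr, bs in product([2, 3, 4], [2e-5, 3e-5, 5e-5], [16, 32]):
    y_true, y_pred = train_and_eval(epochs, lr, bs)
    score = f1_score(y_true, y_pred, average="macro")  # macro F1 on validation
    if best is None or score > best[0]:
        best = (score, dict(epochs=epochs, lr=lr, batch_size=bs))
print(best)
```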
We see that training the models to give better sentence embeddings yields some performance gains on this task, compared to fine-tuning the base model: in (Kummervold et al., 2021) it is reported that NB-BERT achieves a score of 81.8, while NorBERT scores 78.2 and mBERT 78.4 on this task. All our numbers are slightly higher.

We see that for this classification task, including English NLI data when training the sentence models did not help: the numbers are very similar with and without it.

| Model | Macro F1 |
|---|---|
| BERT | 63.1 |
| RoBERTa | 64.4 |
| mBERT | 70.3 |
| WikiBERT | 77.0 |
| NorBERT | 82.0 |
| NB-BERT-base | 84.3 |
| NB-BERT-large | 87.6 |
(a) Performance of unsupervised models fine-tuned on the NoReC binary classification dataset.

| Model | Macro F1 |
|---|---|
| mBERT | 72.2 |
| WikiBERT | 77.9 |
| NorBERT | 82.4 |
| NB-BERT-base | 85.9 |
| NB-BERT-large | 87.0 |

(b) Performance of supervised models trained on only Norwegian NLI when fine-tuned on the NoReC binary classification dataset.

| Model | Macro F1 |
|---|---|
| mBERT | 74.4 |
| WikiBERT | 77.6 |
| NorBERT | 81.0 |
| NB-BERT-base | 84.9 |
| NB-BERT-large | 87.3 |

(c) Performance of supervised models trained on Norwegian and English NLI when fine-tuned on the NoReC binary classification dataset.

Table 5: Performance of our models on the NoReC binary classification dataset.

### 6.2 NoReC binary classification

In Table 5a we see the performance of the unsupervised models on the NoReC binary classification task. In Table 5b we see the results of supervised models trained on Norwegian NLI, while in Table 5c we see the results of supervised models trained on Norwegian and English NLI.

For this task it is less clear that we get gains from training sentence embedding models. The highest previously reported number for this task is for NB-BERT-base, reported as 86.4 in (Kummervold et al., 2021) and 83.9 in (Kutuzov et al., 2021). Our best score for NB-BERT-base is 85.9, which does not beat this. Our best model, NB-BERT-large, also does not achieve a score higher than about 87%, which is only slightly better than the smaller models. We do not know why we get improvements on ToN classification and not here. The mBERT model does improve with training, but that is not so surprising, since it is not already as strong in Norwegian as most of the other models.

## 7 Discussion

We believe that our models perform well on the semantic sentence similarity task, even if we have no strict point of comparison, since this is the first evaluation of Norwegian sentence embedding models on the STS data. The Norwegian dataset corresponds to the English one, so the scores of English models on English STS and of Norwegian models on Norwegian STS should in principle correspond to each other; because of the extra noise added by the automatic translation, we are not surprised that the Norwegian numbers are a bit worse. We see that the models improve a lot compared to before training, and because they perform quite well even on the English STS datasets, we are confident that they have indeed learned something useful in Norwegian.

The supervised models perform better than our unsupervised models even though the supervised models are trained on machine-translated data. This shows that machine-translated data can be useful for doing NLP in smaller languages, at least for some tasks such as ours. The differences between the numbers we get for unsupervised and supervised training are similar to the ones in the original SimCSE paper. It is a bit unclear to what extent the specific content and language of the training data is important for performing well on STS tasks. For example, one can improve the performance of English SimCSE by training on unrelated image data (Jian et al., 2022). This might be because the task is a form of clustering, and images and text in other languages are structurally similar enough that the models learn something useful.

From our experiments we also get comparisons of the different Norwegian language models. This is because this method of making sentence embeddings is mostly a way of extracting the knowledge already learned by the models, since the amount of training we do is much smaller than the amount of pre-training the models have already received. An unsurprising conclusion is that the scale of the model is the most important factor in making good language models.
NB-BERT-large is the best model by clear margins for all of our evaluations. This conforms to the general tendency in recent NLP that scaling up models is more effective than tailoring data or architecture at a given scale. Next, we find that for binary classification the models NB-BERT-base and NorBERT perform quite similarly, while WikiBERT is generally a bit weaker; all of them clearly outperform mBERT. For sentence similarity we find different rankings among the models: here the unsupervised WikiBERT is the second best model, while the supervised version is the weakest of the Norwegian supervised models. The supervised NB-BERT-base is clearly the second best model, while NorBERT performs worse on the STS task.

We see that training sentence embedding models slightly improves performance on the binary classification tasks, but not by much compared with the base models. There is no clear tendency as to whether supervised or unsupervised training improves classification performance more, since the numbers we get are similar in both cases.

## References

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Iñigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252-263, Denver, Colorado. Association for Computational Linguistics.

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81-91, Dublin, Ireland. Association for Computational Linguistics.

Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497-511, San Diego, California. Association for Computational Linguistics.

Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385-393, Montréal, Canada. Association for Computational Linguistics.

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32-43, Atlanta, Georgia, USA. Association for Computational Linguistics.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.

Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Empirical Methods in Natural Language Processing (EMNLP).

Tim Isbister and Magnus Sahlgren. 2020. Why not simply translate? A first Swedish evaluation benchmark for semantic similarity. CoRR, abs/2009.03116.

Yiren Jian, Chongyang Gao, and Soroush Vosoughi. 2022. Non-linguistic supervision for contrastive learning of sentence embeddings. In Advances in Neural Information Processing Systems.

Per Kummervold, Freddy Wetjen, and Javier de la Rosa. 2022. The Norwegian colossal corpus: A text corpus for training large Norwegian language models. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3852-3860, Marseille, France. European Language Resources Association.

Per E Kummervold, Javier De la Rosa, Freddy Wetjen, and Svein Arne Brygfjeld. 2021. Operationalizing a national digital library: The case for a Norwegian transformer model. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 20-29, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.

Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, and Stephan Oepen. 2021. Large-scale contextualised language modelling for Norwegian. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 30-40, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.

Emanuele Lapponi, Martin G. Søyland, Erik Velldal, and Stephan Oepen. 2018. The Talk of Norway: a richly annotated corpus of the Norwegian parliament, 1998-2016. Language Resources and Evaluation, pages 1-21.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.

Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216-223, Reykjavik, Iceland. European Language Resources Association (ELRA).

Lilja Øvrelid, Petter Mæhlum, Jeremy Barnes, and Erik Velldal. 2020.
A fine-grained sentiment dataset for Norwegian. In Proceedings of the 12th Edition of the Language Resources and Evaluation Conference, Marseille, France, 2020.

Sampo Pyysalo, Jenna Kanerva, Antti Virtanen, and Filip Ginter. 2021. WikiBERT models: Deep transfer learning for many languages. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 1-10, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Erik Velldal, Lilja Øvrelid, Eivind Alexander Bergem, Cathrine Stadsnes, Samia Touileb, and Fredrik Jørgensen. 2018. NoReC: The Norwegian review corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
\ No newline at end of file
We start + +016 from pre-trained Norwegian encoder models and train both unsupervised and super- + +018 vised models. The models are evaluated on a machine-translated version of semantic textual similarity datasets, as well as bi- + +021 nary classification tasks. We show that we can train good Norwegian sentence em- + +023 bedding models, that clearly outperform the pre-trained encoder models, as well as + +026 the multilingual mBERT, on the task of sentence similarity. + +028 + +§ 1 INTRODUCTION + +Recently there have been a huge increase in the + +031 capabilities of natural language processing systems. The new dominant paradigm is using large + +033 language models such as BERT (Devlin et al., 2019) or GPT (Radford et al., 2018) as a starting model which one adapts to any given task one wishes to solve. There exists several different versions of BERT-type encoder models in Norwegian + +038 (Kummervold et al., 2021), (Kutuzov et al., 2021), (Pyysalo et al., 2021). It is well-known that BERT-type models that give contextual words embed-dings do not give particularly good sentence em-beddings (Reimers and Gurevych, 2019). For this reason we train and evaluate Norwegian sentence embedding models, using the pre-trained encoder models as starting points. + +We train models using the state of the art Sim-CSE methodology, similarly to the original paper (Gao et al., 2021). Like them, we train both unsupervised and supervised models. We start with a pretrained bidirectional language encoder model such as BERT or RoBERTa (Liu et al., 2019). For + +053 the unsupervised version we sample texts from the + +061 + +062 + +063 + +064 + +Norwegian Colossal Corpus (NCC) dataset (Kum- 065 mervold et al., 2022). We then pass them through + +the model using two different dropout masks and 067 predict contrastively which pairs within a batch represent the same text. For the supervised ver- + +sion, we train on a machine-translated version of 070 natural language inference (NLI) data, where we use sentences related by "entailment" as positive sentences, and sentences labeled as contradiction as hard negative sentences. We train on both the Norwegian dataset, and a combined dataset of + +both Norwegian and English NLI data, and show 077 that the latter gives better results for sentence representations in Norwegian. We evaluate our mod- + +els on a machine translated version of semantic 080 textual similarities (STS) datasets, as well as on + +the sequence classification problems in Norwe- 082 gian "Talk of Norway" and the binary classification version of the NoReC review dataset (Velldal + +et al., 2018). 085 + +Our main contributions are: + +087 + +1. We train and evaluate Norwegian unsupervised and supervised sentence embedding + +models. 090 + +2. We demonstrate a new way to compare the 092 various existing Norwegian language models by measuring their performance after training + +them to make sentence embeddings. 095 + +3. We show that our sentence encoders some- 097 times get better performance than the base encoder on classification. In particular, we obtain new state of the art results on the classification problem "Talk of Norway". + +102 + +4. Through our experiments we illustrate the usefulness of machine translated datasets for training and evaluating Norwegian language models. In particular, we show that super- + +vised training on machine translated data out- 107 performs unsupervised training on Norwe- + +109 gian data. 
+ +§ 2 RELATED WORK + +The fundamental technique we build on is that of training large transformer models (Vaswani et al., 2017). In particular, we utilize the large encoder models Bidirectional Encoder Representations from Transformers (BERT) and Robustly Optimized BERT (RoBERTa) by using them as pre-trained starting points. + +Our work builds upon existing language models trained in Norwegian. The National Library of Norway has trained BERT models in Norwegian (Kummervold et al., 2021), which we call NB-BERT, which exists in both base and large size. Also, the language technology group at the University of Oslo has trained their version of a BERT for Norwegian called NorBERT (Kutuzov et al., 2021). There is also a WikiBERT model trained on Norwegian Wikipedia (Pyysalo et al., 2021). We also test the multilingual version of BERT (Devlin et al., 2019), which is trained in Norwegian and many other languages. + +Our work uses existing methodology for making sentence embedding models. The first paper to improve BERT to make better sentence representations by training it for that purpose, was the Sentence-BERT paper (Reimers and Gurevych, 2019), which trained sentence embedding models by using siamese networks. We build upon the newer Simple Contrastive learning of Sentence Embeddings (SimCSE) methodology (Gao et al., 2021), which uses a contrastive training objective to create sentence embeddings from a pre-trained encoder. The idea behind both of these works is that of finding a training procedure that better extracts the knowledge about sentences that already exists in the pre-trained encoder model. + +§ 3 DATA + +For the unsupervised models, we sample data from the Norwegian Colossal Corpus (NCC) (Kummer-vold et al., 2022). This is a dataset of different smaller Norwegian text corpuses that has been collected into one corpus by the National Library of Norway to train language models. This is primarily a Norwegian corpus, although there are some amounts of other languages present. The dataset description estimates that ${87}\%$ of documents are in Norwegian, with about $6 - 7\%$ of documents in + +Sentence: Deltakerne mente at hvis inter- 162 163 essenter var seriøse om â forbedre finansrap-porteringsmodellen, ville en gruppe bli op-prettet og finansiert spesielt for dette formälet. + +Positive: Deltakerne forventer at seriøse in- + +teressenter vil danne en gruppe for à forbedre 168 finansrapporteringsmodellen. + +Negative: A group was created to improve the financial reporting model. + +Figure 1: An example of a triplet of sentences of mixed language in the Norwegian/English NLI dataset. + +English and the rest in other European languages 178 + +(mostly other Nordic languages). We sample 1 180 million texts from the dataset for training unsupervised. Some are longer than one sentence, but all are truncated to max 32 tokens before training, thus they are all approximately sentence length. + +For supervised training we train with data collected for the task of natural language inference (NLI). This task is that of taking a pair of sentences and predicting the relationship between them as either "entailment", "neutral" or "contradiction". The authors of the SimCSE paper use NLI data to create triples of a sentence with one positive and one hard negative and show that this data work well for training sentence models using contrastive learning, thus we follow this practice. We use a dataset that has been curated for training in Norwegian by the National Library of Norway. 
${}^{1}$ The original data is based on the English datasets the Stanford Natural Language Inference (SNLI) Corpus (Bowman et al., 2015) and Multi-Genre Natural Language Inference (MNLI) dataset (Williams et al., 2018). The Norwegian data is machine translated from the MNLI dataset and has about 128 thousand triples. There is also a combined Norwegian and English version of the dataset made by taking a combination of the translated Norwegian MNLI data and English MNLI and SNLI data. 2 Also included are extra combined Norwegian/English sentence triples: For each of the translated triples there is a joint + +215 + +${}^{1}$ https://huggingface.co/datasets/NbAiLab/mnli-norwegian + +${}^{2}$ The same English data that was used to train English SimCSE: https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse + +217 Sentence 1: en mann skjærer opp en agurk . Sentence 2: en mann skjærer en agurk. Similarity: 4.2 + +Sentence 1: en mann spiller harpe. Sentence 2: en mann spiller et keyboard . Similarity: 1.5 + +Figure 2: Examples from the translated STS-Benchmark dataset. Similarity ratings are from 0- 5. + +Norwegian/English triple consisting of one or two sentences in each of English and Norwegian, see Figure 1 for an example. The English/Norwegian dataset contains about 531 thousand triples of sentences. + +For evaluation we also machine translate the standard English datasets for semantic textual similarity STS12-16 (Agirre et al., 2012), (Agirre et al., 2013), (Agirre et al., 2014), (Agirre et al., 2015), (Agirre et al., 2016), STSBenchmark (Cer et al., 2017), and SICK relatedness (Marelli et al., 2014). The task is predicting how similar a pair of sentences are to each other on a scale of 0 -5 . We use these datasets only for validation and testing and never for training. In fig. 2 we see two examples from the translated STS Benchmark dataset. + +The usage of translated datasets is a weakness compared to having original data in Norwegian. This project can also be viewed as an exploration of what performance it is possible to get from auto-translated English datasets: To the degree they are shown to be useful, one will have much more data one could potentially work with in Norwegian language processing. We note that + +259 for sentence similiarity, a similar exploration of translated data has been done for Swedish in (Is-bister and Sahlgren, 2020). They conclude that they do not recommend the usage of automatically translated STS datasets for fine-tuning, but that it should probably have limited negative consequences for comparing models. We partly follow their recommendation: We only use translated STS data for valdiation and evaluation, but we do perform supervised training on translated + +269 NLI data. + +§ 4 EXPERIMENTS + +270 + +271 + +Our experiments follow the implementations in 272 + +the SimCSE paper closely. We start with a pre- 273 trained encoder model that is either BERT or RoBERTa. + +For unsupervised training we sample one mil- 276 lion texts from the NCC dataset. We then pass each text through the model using two different dropout masks to obtain two different text representations ${s}_{i}$ and ${s}_{i}^{ + }$ for each text. Here dropout + +functions as a form of continuous augmentation of 281 embeddings. Then we contrastively predict which pairs of texts within a batch are the same using cross-entropy loss on the cosine similarity scores. 
§ 4 EXPERIMENTS

Our experiments closely follow the implementation in the SimCSE paper. We start with a pre-trained encoder model that is either a BERT or a RoBERTa.

For unsupervised training we sample one million texts from the NCC dataset. We pass each text through the model twice, using two different dropout masks, to obtain two different text representations ${s}_{i}$ and ${s}_{i}^{+}$ for each text; here dropout functions as a form of continuous augmentation of the embeddings. We then contrastively predict which pairs of texts within a batch are the same, using cross-entropy loss on the cosine similarity scores. In other words, the loss for text $i$ is given by

$$
{\operatorname{loss}}_{i} = - \log \frac{{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{i}^{+}}\right) /\tau }}{\mathop{\sum }\limits_{{j = 1}}^{b}{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{j}^{+}}\right) /\tau }},
$$

where $b$ is the batch size, sim is cosine similarity, and $\tau$ is a temperature hyperparameter that we set to 0.05, the value resulting from the hyperparameter optimization done in the SimCSE paper.
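For illustration, a minimal PyTorch sketch of this unsupervised objective is shown below; `encode` is a hypothetical function mapping a tokenized batch to sentence embeddings with dropout active, not part of our actual training code.

```python
import torch
import torch.nn.functional as F

def unsupervised_simcse_loss(encode, batch, tau=0.05):
    """In-batch contrastive loss over two dropout-augmented views."""
    s = encode(batch)      # s_i: first pass, one dropout mask
    s_pos = encode(batch)  # s_i^+: second pass, a different dropout mask

    # b x b matrix of cosine similarities sim(s_i, s_j^+), scaled by tau.
    sim = F.cosine_similarity(s.unsqueeze(1), s_pos.unsqueeze(0), dim=-1) / tau

    # The diagonal entries are the positives, so cross-entropy with
    # target i for row i reproduces the loss above.
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(sim, labels)
```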
For the unsupervised training runs, the models we start from are given by their names on Hugging Face:

 * bert-base-cased [English model]

 * roberta-base [English model]

 * bert-base-multilingual-cased

 * TurkuNLP/wikibert-base-no-cased

 * ltgoslo/norbert2

 * NbAiLab/nb-bert-base

 * NbAiLab/nb-bert-large

The English models are included as a sanity check: since we use automatically translated datasets to choose the best models, we want to compare their performance with models that are expected to perform worse than the Norwegian ones. For the same reason we also test on the English STS datasets.

| Model | Avg. STS |
| --- | --- |
| BERT | 34.29 |
| RoBERTa | 25.56 |
| mBERT | 48.34 |
| WikiBERT | 42.21 |
| NorBERT | 54.42 |
| NB-BERT-base | 50.41 |
| NB-BERT-large | 49.90 |

Table 1: Average performance of the models before training, using the average of the last layer, on Norwegian STS.

We train the supervised models using NLI data where each sentence has one paired sentence labeled as entailment, which is regarded as a positive sample, and one sentence labeled as contradiction, which is considered a hard negative sample. We thus obtain three different sentence representations ${s}_{i},{s}_{i}^{+},{s}_{i}^{-}$. As in the SimCSE paper, we train contrastively, trying to predict the positive pairs, and add the negative sentence representation ${s}_{i}^{-}$ to the loss function as follows:

$$
{\operatorname{loss}}_{i} = - \log \frac{{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{i}^{+}}\right) /\tau }}{\mathop{\sum }\limits_{{j = 1}}^{b}\left( {{e}^{\operatorname{sim}\left( {{s}_{i},{s}_{j}^{+}}\right) /\tau } + {e}^{\operatorname{sim}\left( {{s}_{i},{s}_{j}^{-}}\right) /\tau }}\right) } \tag{1}
$$
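A corresponding sketch of the supervised objective in Equation (1), again with a hypothetical `encode` function standing in for the actual model, extends the in-batch logits with the hard negatives:

```python
import torch
import torch.nn.functional as F

def supervised_simcse_loss(encode, anchors, positives, negatives, tau=0.05):
    """Contrastive loss of Eq. (1) with in-batch positives and hard negatives."""
    s, s_pos, s_neg = encode(anchors), encode(positives), encode(negatives)

    sim_pos = F.cosine_similarity(s.unsqueeze(1), s_pos.unsqueeze(0), dim=-1) / tau
    sim_neg = F.cosine_similarity(s.unsqueeze(1), s_neg.unsqueeze(0), dim=-1) / tau

    # Row i compares s_i against all s_j^+ and all s_j^-; the correct
    # "class" is the matching positive in column i.
    logits = torch.cat([sim_pos, sim_neg], dim=1)  # shape (b, 2b)
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, labels)
```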
For the supervised training runs we start with the following models:

 * bert-base-multilingual-cased

 * TurkuNLP/wikibert-base-no-cased

 * ltgoslo/norbert2

 * NbAiLab/nb-bert-base

 * NbAiLab/nb-bert-large

We train with the same settings as in the SimCSE paper: we set a max sequence length of 32 and use the learning rates and batch sizes given in the appendix of the SimCSE paper (which vary by model type and size). Each model is trained on a single NVIDIA 3090 GPU. For some models we have to use gradient accumulation to achieve the correct batch size due to limited GPU memory. This changes the training dynamics slightly, since the contrastive loss depends on the entire batch, but we do not see any noticeable effect on the results. We train with the Adam optimizer with linear weight decay and put a multi-layer perceptron (MLP) on top of the model during training. We train for one epoch in the unsupervised setting and for three epochs in the supervised setting. The best model is selected by evaluating on the dev part of the STS-Benchmark dataset. For evaluation we test both with and without this MLP, and find that testing without the MLP generally gives slightly better results. We train three versions of each model and report average scores.

The models are also fine-tuned on two Norwegian sequence classification tasks. Talk of Norway (ToN) is a subset of the Norwegian parliament speeches dataset (Lapponi et al., 2018), selected in (Kummervold et al., 2021), where the task is to classify whether a speech was given by SV or FrP (politically left or right, respectively).${}^{3}$ NoReC is a dataset of Norwegian reviews from different domains such as movies, video games and music (Velldal et al., 2018). From this dataset one can extract a binary classification task by taking the subset of reviews that are clearly positive or negative and letting the task be to classify them as positive or negative (Øvrelid et al., 2020). We take the text representations made by the model before the MLP, add a linear classification layer on top, and fine-tune the entire model on the training dataset. For both fine-tuning datasets we do a grid search for hyperparameters under the following conditions (these are the same hyperparameters as in the fine-tuning examples in the appendix of the original BERT paper (Devlin et al., 2019)):

 * epochs = 2, 3, 4

 * learning rate $= 2\mathrm{e}{-5}, 3\mathrm{e}{-5}, 5\mathrm{e}{-5}$

 * batch size = 16, 32

We use the macro F1 score on the validation set to select the best model for each training run. We do three training runs and report the average of the test scores.

${}^{3}$ https://huggingface.co/datasets/NbAiLab/norwegian_parliament

§ 5 RESULTS SENTENCE SIMILARITY

We evaluate the trained models on the semantic textual similarity datasets, both on the Norwegian version of the datasets and on the original English ones. We report Spearman's correlation for the STS datasets.
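Concretely, each STS score below is obtained roughly as in the following sketch, assuming precomputed sentence embeddings for the two sides of each pair:

```python
import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb_a, emb_b, gold_scores):
    """Spearman correlation between cosine similarities and gold ratings.
    emb_a, emb_b: (n, d) arrays of embeddings; gold_scores: (n,) in [0, 5]."""
    cosine = (emb_a * emb_b).sum(axis=1) / (
        np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1))
    return spearmanr(cosine, gold_scores).correlation
```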
§ 5.1 EVALUATION IN NORWEGIAN

In Table 1 we see the average performance on Norwegian STS before training, using the average of the last layer to compare embeddings. We also tested using the average of the first and last layers (giving similar numbers) and using the "cls" token (giving worse numbers). This gives us a baseline against which to compare how much the models have learned from the training.

In Table 2a we see the performance of our unsupervised models on the Norwegian STS datasets. These are the results when we test without the MLP, which on average performs slightly better than keeping the MLP at test time.

In Table 2b we see the results from training supervised models on the combination of Norwegian and English NLI data, while Table 2c shows the performance when training on only Norwegian NLI data. We see that including English in the training improves performance over training only on Norwegian for all models.

We see that the supervised models perform much better than the unsupervised ones. This would usually not be surprising, but it is interesting to note considering that the supervised data is automatically translated and therefore presumably of lower quality than the unsupervised data.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT | 55.21 | 49.64 | 49.29 | 63.68 | 54.39 | 54.67 | 50.93 | 53.97 |
| RoBERTa | 60.30 | 59.12 | 57.15 | 68.73 | 64.33 | 64.04 | 54.39 | 61.15 |
| mBERT | 60.88 | 62.31 | 55.91 | 70.78 | 66.80 | 61.87 | 57.13 | 62.24 |
| WikiBERT | 63.38 | 70.21 | 62.63 | 74.04 | 70.90 | 70.88 | 62.52 | 67.79 |
| NorBERT | 56.41 | 65.33 | 54.32 | 68.95 | 68.00 | 62.40 | 64.54 | 62.85 |
| NB-BERT-base | 59.40 | 70.70 | 57.93 | 71.87 | 69.94 | 69.25 | 63.98 | 66.15 |
| NB-BERT-large | 70.45 | 80.80 | 72.79 | 81.53 | 78.41 | 79.35 | 69.18 | 76.07 |

(a) Performance of unsupervised models on the Norwegian STS datasets.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT | 73.43 | 69.09 | 70.84 | 81.50 | 73.82 | 76.47 | 72.79 | 73.99 |
| WikiBERT | 73.29 | 64.48 | 69.24 | 80.32 | 74.51 | 75.42 | 69.94 | 72.45 |
| NorBERT | 74.30 | 70.69 | 72.09 | 82.56 | 76.91 | 79.33 | 73.74 | 75.66 |
| NB-BERT-base | 76.31 | 77.20 | 75.43 | 84.47 | 77.69 | 82.14 | 77.97 | 78.75 |
| NB-BERT-large | 77.07 | 83.65 | 80.28 | 86.24 | 81.87 | 84.37 | 78.44 | 81.70 |

(b) Performance on the Norwegian STS datasets of supervised models trained on both Norwegian and English NLI data.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT | 69.28 | 71.50 | 69.44 | 78.12 | 74.38 | 71.12 | 67.70 | 71.65 |
| WikiBERT | 70.14 | 71.18 | 71.79 | 77.56 | 76.20 | 74.20 | 67.32 | 72.63 |
| NorBERT | 70.79 | 74.46 | 72.44 | 80.66 | 77.73 | 76.65 | 71.56 | 74.90 |
| NB-BERT-base | 72.41 | 79.22 | 74.67 | 81.47 | 77.72 | 78.49 | 73.50 | 76.78 |
| NB-BERT-large | 74.67 | 83.65 | 79.47 | 84.15 | 81.82 | 82.25 | 74.75 | 80.11 |

(c) Performance on the Norwegian STS datasets of supervised models trained on Norwegian NLI data.

Table 2: Results of our models tested on the Norwegian STS datasets.

§ 5.2 EVALUATION IN ENGLISH

In Table 3a we show the results from testing our unsupervised models on the English datasets. In Table 3b we show the results from testing, on the English STS data, our supervised models trained on the combined English and Norwegian dataset, while Table 3c shows the results for supervised models trained only on Norwegian data.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT (English) | 54.76 | 70.77 | 57.39 | 69.32 | 69.19 | 61.66 | 66.29 | 64.20 |
| RoBERTa (English) | 65.26 | 77.06 | 67.09 | 76.88 | 76.71 | 75.32 | 65.60 | 71.99 |
| mBERT | 63.56 | 73.10 | 63.95 | 74.67 | 73.56 | 68.58 | 61.61 | 68.43 |
| WikiBERT | 64.68 | 77.60 | 67.04 | 76.20 | 76.30 | 74.63 | 65.34 | 71.68 |
| NorBERT | 52.96 | 62.30 | 54.99 | 67.45 | 69.83 | 63.68 | 62.40 | 61.94 |
| NB-BERT-base | 56.23 | 72.06 | 57.93 | 68.71 | 71.09 | 67.25 | 61.63 | 64.99 |
| NB-BERT-large | 72.54 | 83.68 | 76.08 | 83.03 | 81.09 | 81.32 | 68.80 | 78.08 |

(a) Performance of unsupervised models on the English STS datasets.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT | 76.88 | 79.69 | 77.58 | 84.99 | 78.52 | 81.36 | 77.30 | 79.47 |
| WikiBERT | 72.45 | 59.56 | 67.08 | 80.87 | 75.21 | 75.31 | 74.01 | 72.07 |
| NorBERT | 73.39 | 69.40 | 72.65 | 83.10 | 77.30 | 80.48 | 76.55 | 76.13 |
| NB-BERT-base | 76.93 | 78.78 | 77.76 | 85.28 | 80.29 | 82.96 | 78.49 | 80.07 |
| NB-BERT-large | 78.30 | 85.92 | 81.78 | 87.11 | 83.24 | 85.72 | 79.56 | 83.09 |

(b) Performance on the English STS datasets of supervised models fine-tuned on both Norwegian and English MNLI.

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSB | SICKR | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT | 72.62 | 79.36 | 75.84 | 81.87 | 79.70 | 77.48 | 70.18 | 76.72 |
| WikiBERT | 65.47 | 65.30 | 67.40 | 76.86 | 73.12 | 68.91 | 60.59 | 68.24 |
| NorBERT | 66.90 | 68.62 | 69.63 | 79.35 | 76.23 | 73.38 | 69.66 | 71.97 |
| NB-BERT-base | 71.57 | 80.30 | 76.30 | 81.55 | 79.23 | 78.09 | 71.12 | 76.88 |
| NB-BERT-large | 76.42 | 85.58 | 81.23 | 85.49 | 83.21 | 83.15 | 75.04 | 81.45 |

(c) Performance on the English STS datasets of supervised models fine-tuned on Norwegian MNLI.

Table 3: Results of our models tested on the English STS datasets.

Since we machine translated the STS data, we are unsure how accurate the Norwegian ground truth labels are: there will be sentence pairs whose similarity changes because of differences introduced by translation. However, we think that this should not influence comparisons between different models very much. This is supported by the fact that the internal ranking between the Norwegian unsupervised models is the same on the Norwegian and the English datasets. (The English models are, unsurprisingly, ranked higher when tested on English.)

One of the more interesting findings in this paper is how strongly our models perform on the English STS data. NB-BERT-base was initialized from the mBERT checkpoint, which can partly explain this, but not all models were started from a model pre-trained on English. The unsupervised NB-BERT-large achieves a score of 78.08 on English STS. For comparison, the best unsupervised model in the original SimCSE paper, SimCSE-RoBERTa-large, achieved a score of 78.90. Thus we have a model pre-trained on a Norwegian corpus (containing some English), further trained unsupervised in Norwegian, that scores less than 1% below the best English model trained in English. This model is also better than the best unsupervised English model in the original Sentence-BERT paper.

The supervised NB-BERT trained only on Norwegian NLI achieved a score of 81.45, while the version trained on Norwegian and English NLI achieved a score of 83.09. Comparably, the supervised original English SimCSE-BERT-base got a score of 81.57 and SimCSE-RoBERTa-large 83.76. Thus we achieve comparable performance between a supervised Norwegian large BERT and a supervised English base BERT when testing in English. Our best supervised model is less than $1\%$ away from the best English SimCSE model, although this is less surprising than for the unsupervised models, since in this case we fine-tune our model also on English NLI. We also note that our best supervised model trained only on Norwegian is better than the best supervised English model in the Sentence-BERT paper. Thus the models do seem to learn a lot about English sentence similarity even though the pre-training is mostly in Norwegian. The strong performance of the NB-BERT models in English was already noted in (Kummervold et al., 2021).
To better understand the above findings, we tested the English supervised SimCSE-RoBERTa-large on Norwegian STS, where it achieved an average score of only 54.23. Thus a very good English model scores badly in Norwegian, while a very good Norwegian model scores well in English. This might indicate that the reason the Norwegian models all perform so well in English is that there is enough English in the Norwegian training data (probably including many English snippets in the Norwegian parts) for the models to learn quite a lot of English.

| Model | Macro F1 |
| --- | --- |
| BERT | 76.7 |
| RoBERTa | 79.8 |
| mBERT | |
| WikiBERT | |
| NorBERT | |
| NB-BERT-base | 82.7 |
| NB-BERT-large | 89.7 |

(a) Performance of unsupervised models when fine-tuned on the Talk of Norway dataset.

| Model | Macro F1 |
| --- | --- |
| mBERT | 79.3 |
| WikiBERT | 82.6 |
| NorBERT | 85.7 |
| NB-BERT-base | 83.4 |
| NB-BERT-large | 89.3 |

(b) Performance of supervised models trained on Norwegian NLI when fine-tuned on the Talk of Norway dataset.

| Model | Macro F1 |
| --- | --- |
| mBERT | 79.2 |
| WikiBERT | 81.1 |
| NorBERT | 84.9 |
| NB-BERT-base | 83.3 |
| NB-BERT-large | 89.3 |

(c) Performance of supervised models trained on Norwegian and English NLI when fine-tuned on the Talk of Norway dataset.

Table 4: Performance of our models on the ToN dataset.

§ 6 RESULTS CLASSIFICATION

We report the macro F1 score for the binary classification tasks.

§ 6.1 TON BINARY CLASSIFICATION

In Table 4a we see the performance of the unsupervised models when fine-tuned on the Talk of Norway dataset. In Table 4b we see the performance of the supervised models trained on Norwegian NLI and then fine-tuned on the ToN dataset, while Table 4c shows the performance when training on both Norwegian and English NLI.

We see that training the models to give better sentence embeddings yields some performance gains on this task compared to fine-tuning the base model: in (Kummervold et al., 2021) it is reported that NB-BERT achieves a score of 81.8, while NorBERT scores 78.2 and mBERT 78.4 on this task. All our numbers are slightly higher.

We see that, for this classification task, including English NLI data when training the sentence models did not help: the numbers are very similar with and without it.

| Model | Macro F1 |
| --- | --- |
| BERT | 63.1 |
| RoBERTa | 64.4 |
| mBERT | 70.3 |
| WikiBERT | 77.0 |
| NorBERT | 82.0 |
| NB-BERT-base | 84.3 |
| NB-BERT-large | 87.6 |

(a) Performance of unsupervised models fine-tuned on the NoReC binary classification dataset.

| Model | Macro F1 |
| --- | --- |
| mBERT | 72.2 |
| WikiBERT | 77.9 |
| NorBERT | 82.4 |
| NB-BERT-base | 85.9 |
| NB-BERT-large | 87.0 |

(b) Performance of supervised models trained on only Norwegian NLI when fine-tuned on the NoReC binary classification dataset.

| Model | Macro F1 |
| --- | --- |
| mBERT | 74.4 |
| WikiBERT | 77.6 |
| NorBERT | 81.0 |
| NB-BERT-base | 84.9 |
| NB-BERT-large | 87.3 |

(c) Performance of supervised models trained on Norwegian and English NLI when fine-tuned on the NoReC binary classification dataset.

Table 5: Performance of our models on the NoReC binary classification dataset.

§ 6.2 NOREC BINARY CLASSIFICATION

In Table 5a we see the performance of the unsupervised models on the NoReC binary classification task. In Table 5b we see the results of the supervised models trained on Norwegian NLI, while in Table 5c we see the results of the supervised models trained on Norwegian and English NLI.

For this task it is less clear that we gain from training sentence embedding models: the highest previously reported number for this task is for NB-BERT-base, which is reported as 86.4 in (Kummervold et al., 2021) and 83.9 in (Kutuzov et al., 2021).
Our best score for NB-BERT-base is 85.9, which does not improve on this. Our best model, NB-BERT-large, also does not achieve a score higher than about ${87}\%$, which is only slightly better than the smaller models. We do not know why we get improvements for ToN classification but not here. The mBERT model does improve with training, but that is not so surprising, since it is not as strong in Norwegian as most of the other models to begin with.

§ 7 DISCUSSION

We believe that our models perform well on the semantic sentence similarity task, even if we do not have any strict comparison, since this is the first evaluation of Norwegian sentence embedding models on the STS data. The Norwegian dataset corresponds to the English one, so the scores of English models on English STS and Norwegian models on Norwegian STS should in principle correspond to each other, but because of the extra noise added by the automatic translation we are not surprised that the Norwegian numbers are somewhat worse. We see that the models improve a lot compared to before training, and because they perform quite well even on the English STS datasets, we are confident that they have indeed learned something useful in Norwegian.

The supervised models perform better than our unsupervised models even though the supervised models are trained on machine translated data. This shows that machine translated data can be useful for NLP in smaller languages, at least for some tasks such as ours. The differences between the numbers we get for unsupervised and supervised training are similar to those in the original SimCSE paper. It is somewhat unclear to what extent the specific content and language of the training data matter for performing well on STS tasks. For example, one can improve the performance of English SimCSE by training on unrelated image data (Jian et al., 2022). This might be because the task is a form of clustering, and images and text in other languages are structurally similar enough that the models learn something useful.

From our experiments we also obtain comparisons of the different Norwegian language models. This is because this method of making sentence embeddings is mostly a way of extracting the knowledge already learned by the models, since the amount of training we do is much smaller than the amount of pre-training the models have already received. An unsurprising conclusion is that the scale of the model is the most important factor in making good language models: NB-BERT-large is the best model by clear margins in all of our evaluations. This conforms to the general tendency in recent NLP that scaling up models is more effective than tailoring data or architecture at a given scale. Next, we find that for binary classification the models NB-BERT-base and NorBERT perform quite similarly, WikiBERT is generally a bit weaker, and all of them clearly outperform mBERT. For sentence similarity we find a different ranking among the models: here the unsupervised WikiBERT is the second best model, while its supervised version is the weakest of the Norwegian supervised models. The supervised NB-BERT-base is clearly the second best model, while NorBERT performs worse on the STS task.

We see that training sentence embedding models slightly improves performance on the binary classification tasks, but not by much compared with the base models.
There is no clear tendency as to whether supervised or unsupervised training improves classification performance more, since the numbers we get are similar in both cases.

diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..c6c6806bb432070df2a24d02c3ed54efc405ab17
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,571 @@

# Uncertainty-Aware Natural Language Inference with Stochastic Weight Averaging

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract

This paper introduces Bayesian uncertainty modeling using Stochastic Weight Averaging-Gaussian (SWAG) in Natural Language Understanding (NLU) tasks. We apply the approach to standard tasks in natural language inference (NLI) and demonstrate the effectiveness of the method in terms of prediction accuracy and correlation with human annotation disagreements. We argue that the uncertainty representations in SWAG better reflect subjective interpretation and the natural variation that is also present in human language understanding. The results reveal the importance of uncertainty modeling, an often neglected aspect of neural language modeling, in NLU tasks.

## 1 Introduction

Arguably, human language understanding is neither objective nor deterministic. The same utterance or text can be interpreted in different ways by different people depending on their language standards, background knowledge and world views, the linguistic context, and the situation in which the utterance or text appears. This uncertainty about potential readings is typically not modeled in Natural Language Understanding (NLU) research and is often ignored in NLU benchmarks and datasets. Instead, these usually assign a single interpretation as a gold standard to be predicted by an artificial system, ignoring the inherent ambiguity of language and the disagreements that humans arrive at.

Some datasets like SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) do, however, contain information about different readings in the form of annotation disagreement. These datasets include the labels from five different rounds of annotation, which in some cases show clear disagreement about the correct label for the sentence pair. Such labeling discrepancies can certainly be the result of annotation mistakes, but more commonly they arise from differences in understanding the task and the given information, and in how these relate to world knowledge and personal experience.
Moving towards uncertainty-aware neural language models, we present our initial results using Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) and SWA-Gaussian (SWAG) (Maddox et al., 2019) on the task of Natural Language Inference. SWAG provides a scalable approach to calibrating neural networks and modeling uncertainty representations, and it is straightforward to apply with standard neural architectures. Our study addresses two main questions:

- How does uncertainty modeling using SWAG influence prediction performance and generalization in NLI tasks?

- How well does the calibrated model reflect human disagreement and annotation variance?

In this paper, we first test the performance of SWA and SWAG on the SNLI and MNLI tasks. We then study whether adding weight averaging improves the generalization power of NLI models, as tested through cross-dataset experiments. Finally, we analyse the probability distributions from SWA and SWAG to test how well the model uncertainty corresponds to annotator disagreements.

## 2 Background and Related Work

### 2.1 Uncertainty in human annotations

In a recent position paper, Plank (2022) argues that instead of taking human label variation as a problem, we should embrace it as an opportunity and take it into consideration in all steps of the ML pipeline: data, modeling and evaluation. The paper provides a comprehensive survey of research on (i) reasons for human label variation, (ii) modeling human label variation, and (iii) evaluating with human label variation.

Pavlick and Kwiatkowski (2019) studied human disagreements in NLI tasks and argue that we should move to an evaluation objective that more closely corresponds to the natural interpretation variance that exists in the data. Such a move would require NLU models to be properly calibrated to reflect the distribution we can expect and, hence, to move towards a more natural inference engine.

Chen et al. (2020) propose Uncertain NLI (UNLI), a task that moves away from categorical labels to probabilistic values. They use a scalar regression model and show that the model predictions correlate with human judgement.

### 2.2 Representing Model Uncertainty

The approach to uncertainty modeling that we consider is related to the well-established technique of model ensembling. Stochastic optimization procedures applied in training deep neural networks are non-deterministic and depend on hyper-parameters and initial seeds. Ensembles have been used as a pragmatic solution to average over several solutions, and their positive impact on model performance pushed ensembling into the standard toolbox of deep learning. Related to ensembling is the technique of checkpoint averaging (see e.g. Gao et al., 2022), which is also known to improve performance.

Intuitively, ensembles and checkpoint averages also reflect the idea of different views and interpretations of the data and, therefore, provide a framework for uncertainty modeling. SWA and SWAG build on that idea, and SWAG provides a generic and efficient approach for approximating Bayesian uncertainty and calibrating the model.
SWA (Izmailov et al., 2018) is a checkpoint averaging method that tracks the optimization trajectory of a model during training, using the average of the encountered values as the eventual parameters:

$$
{\theta }_{\mathrm{{SWA}}} = \frac{1}{T}\mathop{\sum }\limits_{{i = 1}}^{T}{\theta }_{i} \tag{1}
$$

with ${\theta }_{\mathrm{{SWA}}}$ denoting the SWA solution for parameter $\theta$ after $T$ epochs of training.

SWAG (Maddox et al., 2019) extends this method to estimate Gaussian posteriors for the model parameters by also estimating a covariance matrix for them. For computational feasibility, a low-rank plus diagonal approximation to the covariance matrix is used:

$$
{\Sigma }_{\text{low-rank}} \approx \frac{1}{T - 1}\mathop{\sum }\limits_{{i = 1}}^{T}\left( {{\theta }_{i} - {\widehat{\theta }}_{i}}\right) {\left( {\theta }_{i} - {\widehat{\theta }}_{i}\right) }^{\top} \tag{2}
$$

$$
{\Sigma }_{\text{diag}} = \operatorname{diag}\left( {\frac{1}{T}\mathop{\sum }\limits_{{i = 1}}^{T}{\theta }_{i}^{2} - {\theta }_{\mathrm{{SWA}}}^{2}}\right) \tag{3}
$$

where ${\widehat{\theta }}_{i}$ in (2) is the running estimate of the parameters' mean obtained from the first $i$ samples. The resulting posterior approximation is given by

$$
{\theta }_{\mathrm{{SWAG}}} \sim \mathcal{N}\left( {{\theta }_{\mathrm{{SWA}}},\frac{1}{2}\left( {{\Sigma }_{\text{diag}} + {\Sigma }_{\text{low-rank}}}\right) }\right) . \tag{4}
$$

Once the posteriors are thus approximated, at test time the model is used by sampling from the approximated posteriors $N$ times and taking the average of the predicted distributions from these samples as the answer of the model.

One of the advantages of SWAG is the possibility to seamlessly start with any pre-trained solution. Approximating the posterior is then done during fine-tuning, without the need to change the underlying model.
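To make the test-time procedure concrete, the following is a simplified sketch of SWAG prediction that uses only the diagonal part of the covariance (Equation 3); the implementation we actually use (see Section 3.2) also includes the low-rank term of Equation 2. `predict_logits` is a hypothetical helper that loads a flat weight vector into the model and returns logits for a fixed evaluation batch.

```python
import torch

def swag_diag_predict(theta_swa, theta_sq_mean, predict_logits, n_samples=20):
    """Average the predictive distribution over weight samples drawn from
    the diagonal SWAG posterior N(theta_swa, 0.5 * Sigma_diag)."""
    var = 0.5 * (theta_sq_mean - theta_swa ** 2).clamp(min=0.0)  # from Eq. (3)
    probs = None
    for _ in range(n_samples):
        theta = theta_swa + var.sqrt() * torch.randn_like(theta_swa)  # Eq. (4)
        p = predict_logits(theta).softmax(dim=-1)  # predictive distribution
        probs = p if probs is None else probs + p
    return probs / n_samples  # averaged answer of the model
```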
## 3 Experiments

We test the performance of SWA and SWAG on the natural language inference task using three NLI datasets, including cross-dataset experiments, and study the effect on both hard and soft labeling.

### 3.1 Datasets

We use the Stanford Natural Language Inference corpus (SNLI) (Bowman et al., 2015) and the Multi-Genre Natural Language Inference corpus (MNLI) (Williams et al., 2018) as the datasets in our experiments. We also study the cross-dataset generalisation capability of the model with and without weight averaging. For those experiments we also include SICK (Marelli et al., 2014) as a test set. In the cross-dataset generalization experiments we first fine-tune the model with the training data from one NLI dataset (e.g. SNLI) and then test with the test set from another NLI dataset (e.g. MNLI-mm).

SNLI is a dataset of ${570}\mathrm{k}$ sentence pairs which have been manually labeled with entailment, contradiction and neutral labels. The premise sentences in SNLI are image captions from the Flickr30k corpus (Young et al., 2014).

MNLI consists of ${433}\mathrm{k}$ sentence pairs labeled with entailment, contradiction and neutral, containing examples from ten genres of written and spoken English. Five of the genres are included in the training set. The development and test sets are split into matched (MNLI-m) and mismatched (MNLI-mm) sets, where the former includes only sentences from the same genres as the training data, and the latter includes genres not present in the training data.${}^{1}$

SICK includes 9,840 examples with logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was constructed automatically by taking pairs of sentences from a random subset of the 8K ImageFlickr dataset (Young et al., 2014) and the SemEval 2012 STS MSR-Video Description dataset (Agirre et al., 2012), using a rule-based approach to construct examples for the different logical inference types.

### 3.2 Methods

In all experiments we fine-tune a pre-trained RoBERTa-base model (Liu et al., 2019) from the Hugging Face Transformers library (Wolf et al., 2020). As is common practice in NLI tasks, we use the majority-vote gold labels for training even if multiple annotations are available.

We add stochastic weight averaging to the RoBERTa model using the SWA implementation from PyTorch 1.12${}^{2}$ and the SWAG implementation by Maddox et al. (2019).${}^{3}$ To study how well SWA and SWAG perform in NLI compared to a baseline model, we ran the same fine-tuning on the SNLI and MNLI datasets utilizing SWA and SWAG for weight averaging.
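As a rough sketch of how the PyTorch SWA utilities slot into such a fine-tuning loop (the hyperparameter values and the names `model`, `train_loader`, `num_epochs` and `swa_start` are placeholders, not our exact settings):

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR

# model: RoBERTa with a 3-way NLI classification head (returns an output
# object with a .loss attribute, as in Hugging Face Transformers)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
swa_model = AveragedModel(model)             # running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=1e-5)

for epoch in range(num_epochs):
    for batch in train_loader:
        optimizer.zero_grad()
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
    if epoch >= swa_start:                   # start averaging late in training
        swa_model.update_parameters(model)   # accumulate theta_i into the mean
        swa_scheduler.step()
```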
| Dataset | Method | Acc (%) | SD | Δ |
| --- | --- | --- | --- | --- |
| SNLI | base | 90.80 | 0.26 | - |
| SNLI | SWA | 91.47 | 0.24 | +0.67 |
| SNLI | SWAG | 91.59 | 0.14 | +0.79 |
| MNLI-m | base | 86.53 | 0.20 | - |
| MNLI-m | SWA | 87.60 | 0.19 | +1.07 |
| MNLI-m | SWAG | 87.76 | 0.12 | +1.23 |
| MNLI-mm | base | 86.31 | 0.26 | - |
| MNLI-mm | SWA | 87.34 | 0.29 | +1.03 |
| MNLI-mm | SWAG | 87.51 | 0.19 | +1.20 |
Table 1: Comparison of SWA and SWAG performance on NLI benchmarks (mean accuracy and standard deviation over 5 runs). Δ is the difference to the baseline result (base) with no weight averaging.

### 3.3 Results

The standard evaluation for the NLI task is the accuracy on aggregated gold labels. However, as two of the test sets (from SNLI and MNLI) also contain multiple human annotations, we additionally use those to measure the cross entropy of the predicted distribution against the human label distribution (soft labeling, e.g. Peterson et al., 2019; Pavlick and Kwiatkowski, 2019).

#### 3.3.1 Accuracy

The basic classification results are in Table 1. We report average accuracies and standard deviations over 5 runs with different random seeds.

Both SWA and SWAG provide significant improvements over the baseline without weight averaging. SWAG performs slightly better than SWA across all three experiments.

In order to test whether weight averaging improves the generalization capability of NLI models, we further performed cross-dataset generalization tests following Talman and Chatzikyriakidis (2019). The results are reported in Table 2.

The results of the cross-dataset experiments are slightly mixed: we do not notice a clear advantage of SWAG over SWA, but with the exception of training with MNLI and testing with SICK, we do notice an improvement for the weight averaging approaches compared to the baseline. The performance on SICK drops significantly in all cases and the difference between the approaches is minimal, showing that the NLI training data is not a good fit for that benchmark.

The other cross-dataset results highlight the advantage of weight averaging, indicating that the improved modeling of uncertainty can lead to better generalization.

---

${}^{1}$ As the test data for MNLI have not been made publicly available, we use the development sets when reporting the results for MNLI.

${}^{2}$ https://pytorch.org/docs/1.12/optim.html#stochastic-weight-averaging

${}^{3}$ https://github.com/wjmaddox/swa_gaussian

---
| Dataset | Method | Acc (%) | SD | Δ |
| --- | --- | --- | --- | --- |
| SNLI $\rightarrow$ MNLI-m | base | 77.31 | 0.57 | - |
| SNLI $\rightarrow$ MNLI-m | SWA | 79.67 | 0.37 | 2.37 |
| SNLI $\rightarrow$ MNLI-m | SWAG | 79.33 | 0.21 | 2.03 |
| SNLI $\rightarrow$ MNLI-mm | base | 77.40 | 0.78 | - |
| SNLI $\rightarrow$ MNLI-mm | SWA | 79.44 | 0.19 | 2.04 |
| SNLI $\rightarrow$ MNLI-mm | SWAG | 79.24 | 0.29 | 1.84 |
| SNLI $\rightarrow$ SICK | base | 57.08 | 0.77 | - |
| SNLI $\rightarrow$ SICK | SWA | 57.09 | 0.32 | 0.01 |
| SNLI $\rightarrow$ SICK | SWAG | 57.17 | 0.37 | 0.08 |
| MNLI $\rightarrow$ SNLI | base | 82.84 | 0.74 | - |
| MNLI $\rightarrow$ SNLI | SWA | 84.15 | 0.35 | 1.31 |
| MNLI $\rightarrow$ SNLI | SWAG | 84.45 | 0.27 | 1.61 |
| MNLI $\rightarrow$ SICK | base | 56.63 | 0.94 | - |
| MNLI $\rightarrow$ SICK | SWA | 56.17 | 0.60 | -0.46 |
| MNLI $\rightarrow$ SICK | SWAG | 56.53 | 0.91 | -0.10 |
Table 2: Cross-dataset experiments with and without weight averaging (mean accuracy and standard deviation over 5 runs with different random seeds); the left-hand side of the arrow is the training set and the right-hand side the test set.

#### 3.3.2 Cross Entropy

We also test how well the weight averaging approaches can be used to model annotator disagreement and annotation uncertainty in the NLI test sets of SNLI and MNLI. These two datasets come with five annotation labels for every data point, often with high disagreement between human annotators, indicating inherently confusing data points with high aleatoric uncertainty (Der Kiureghian and Ditlevsen, 2009). For quantifying the goodness of fit of the model predictions, we calculate the cross entropy between the predicted and annotation distributions.${}^{4}$

Table 3 depicts the resulting cross entropy values, with lower values denoting more faithful predictions. SWA and SWAG yield distributions that are consistently more similar to those of the annotations, complementing their better overall accuracy results (Section 3.3.1). In contrast to the accuracy results, here SWAG outperforms SWA in all cases, indicating that the Gaussian posterior helps to model the data uncertainty more accurately. As the table shows, the results also carry over to the cross-dataset experiments.
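The cross entropy in Table 3 can be computed per item as in the following small sketch, where the annotation distribution is the normalized count of the five annotator labels (the helper names here are illustrative, not our exact evaluation code):

```python
import numpy as np

LABELS = ("entailment", "neutral", "contradiction")

def annotation_distribution(annotator_labels):
    """Turn the five annotator labels of an item into a probability vector."""
    counts = np.array([annotator_labels.count(c) for c in LABELS], dtype=float)
    return counts / counts.sum()

def cross_entropy(p_annotation, p_model, eps=1e-12):
    """H(p_annotation, p_model) = -sum_k p_annotation[k] * log p_model[k]."""
    return float(-(p_annotation * np.log(p_model + eps)).sum())
```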
| Dataset | Method | Cross Entropy | Δ |
| --- | --- | --- | --- |
| SNLI | base | 0.83 | - |
| SNLI | SWA | 0.75 | -0.08 |
| SNLI | SWAG | 0.69 | -0.14 |
| MNLI-m | base | 0.87 | - |
| MNLI-m | SWA | 0.80 | -0.07 |
| MNLI-m | SWAG | 0.73 | -0.14 |
| MNLI-mm | base | 0.84 | - |
| MNLI-mm | SWA | 0.77 | -0.07 |
| MNLI-mm | SWAG | 0.69 | -0.15 |
| SNLI $\rightarrow$ MNLI-m | base | 1.13 | - |
| SNLI $\rightarrow$ MNLI-m | SWA | 0.90 | -0.23 |
| SNLI $\rightarrow$ MNLI-m | SWAG | 0.80 | -0.33 |
| SNLI $\rightarrow$ MNLI-mm | base | 1.12 | - |
| SNLI $\rightarrow$ MNLI-mm | SWA | 0.88 | -0.24 |
| SNLI $\rightarrow$ MNLI-mm | SWAG | 0.79 | -0.33 |
| MNLI $\rightarrow$ SNLI | base | 1.04 | - |
| MNLI $\rightarrow$ SNLI | SWA | 0.97 | -0.07 |
| MNLI $\rightarrow$ SNLI | SWAG | 0.89 | -0.15 |
Table 3: Comparison of cross entropies between model predictions and data annotation distributions using the base, SWA and SWAG methods. Δ is the difference to the baseline cross entropy values.

The comparison between system predictions and annotator variation deserves some further analysis. A preliminary study (see examples in Appendix A) indicates that the prediction uncertainty of SWAG for individual instances follows human annotation confusion very well. Furthermore, we identified cases with a larger mismatch between system predictions and human disagreement where the latter is mainly caused by erroneous, or at least questionable, annotation decisions. This points to the use of SWAG in an active learning scenario, where annotation noise can be identified using a well-calibrated prediction model.

## 4 Conclusions

Our results show that weight averaging provides consistent and significant improvements on both the SNLI and MNLI datasets. The cross-dataset results are slightly mixed but also show a trend of improved cross-domain generalization. Finally, we demonstrate a clear increase in the correlation with human annotation variance when comparing SWAG with non-Bayesian approaches.

For future work we consider making use of multiple annotations also during training, and extensions of SWAG such as MultiSWAG (Wilson and Izmailov, 2020). We also plan to test the methods on different NLU datasets, especially those with a high number of annotations (e.g. Nie et al., 2020), and to compare the annotation variation and system predictions in more detail.

---

${}^{4}$ Note that for the baseline and SWA models, we consider the output of the final softmax function as the predicted distribution, while for the SWAG model, we use the average output distribution from $N = {20}$ sampled models.

---

## References

Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.

Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, and Benjamin Van Durme. 2020. Uncertain natural language inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8772-8779, Online. Association for Computational Linguistics.

Armen Der Kiureghian and Ove Ditlevsen. 2009. Aleatory or epistemic? Does it matter? Structural Safety, 31(2):105-112.

Yingbo Gao, Christian Herold, Zijian Yang, and Hermann Ney. 2022. Revisiting checkpoint averaging for neural machine translation. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 188-196, Online only. Association for Computational Linguistics.

Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018. Averaging weights leads to wider optima and better generalization. In Uncertainty in Artificial Intelligence (UAI).
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Wesley J. Maddox, Pavel Izmailov, Timur Garipov, Dmitry P. Vetrov, and Andrew Gordon Wilson. 2019. A simple baseline for Bayesian uncertainty in deep learning. In Advances in Neural Information Processing Systems, 32.

Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216-223, Reykjavik, Iceland. European Language Resources Association (ELRA).

Yixin Nie, Xiang Zhou, and Mohit Bansal. 2020. What can we learn from collective human opinions on natural language inference data? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9131-9143, Online. Association for Computational Linguistics.

Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transactions of the Association for Computational Linguistics, 7:677-694.

Joshua C. Peterson, Ruairidh M. Battleday, Thomas L. Griffiths, and Olga Russakovsky. 2019. Human uncertainty makes classification more robust. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9616-9625.

Barbara Plank. 2022. The "problem" of human label variation: On ground truth in data, modeling and evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10671-10682. Association for Computational Linguistics.

Aarne Talman and Stergios Chatzikyriakidis. 2019. Testing the generalization power of neural network models across NLI benchmarks. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 85-94, Florence, Italy. Association for Computational Linguistics.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.

Andrew G. Wilson and Pavel Izmailov. 2020. Bayesian deep learning and a probabilistic perspective of generalization. In Advances in Neural Information Processing Systems, volume 33, pages 4697-4708. Curran Associates, Inc.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014.
From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.

![0196413c-35b4-74d1-9e28-c1ece5af2332_5_189_168_1289_617_0.jpg](images/0196413c-35b4-74d1-9e28-c1ece5af2332_5_189_168_1289_617_0.jpg)

Table 4: Comparison of probability distributions of human annotations vs. SWAG model predictions for three randomly selected data points from the SNLI dataset. (Left and middle) Correctly predicted cases, as indicated by low cross entropy. (Right) A wrongly predicted case, as indicated by high cross entropy. The SWAG points indicate the output probability distributions from $N = {20}$ samples.

## A Appendix

Here we showcase and discuss three randomly selected data points from the SNLI dataset, and compare the predictions of the $N = {20}$ samples from the SWAG model with the annotation distributions for each of these points. Table 4 presents two cases (left and middle) in which the SWAG model makes the correct prediction, and another case (right) in which the model makes the wrong prediction. In the high agreement cases, indicated by lower cross entropies between the annotations and predictions, the SWAG model not only selects the correct label for the instance, but also correctly predicts the annotator disagreement, both when such a disagreement exists (middle) and when it does not (left).

The third figure presents a case where the predictions of the SWAG samples are more certain than expected: annotators disagree on whether the hypothesis is Entailment or Neutral, whereas the model predictions place all probability mass on the Neutral class. The corresponding cross entropy is high, which reflects this disagreement. It should be noted that this is also a fairly controversial and difficult data point, and concluding Entailment requires making some strong assumptions. Ideally, such disagreements between system predictions and annotator distributions could also be used as cues within the training process itself. Two potential avenues are (1) using the incongruence between the two distributions as the loss signal to drive the optimization process directly (as opposed to using only the gold label and the predicted class label), and (2) using the incongruence in predictions in an active learning scenario.
622 + +623 + +624 + +625 + +626 + +627 + +628 + +629 + +630 + +631 + +632 + +633 + +634 + +635 + +636 + +637 + +638 + +639 + +640 641 642 + +643 644 645 646 647 \ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..d61bbc8110c05ea75f0d35ac2f524b866c373742 --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/uygq9_N7TL/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,509 @@ +000 054 + +§ UNCERTAINTY-AWARE NATURAL LANGUAGE INFERENCE WITH STOCHASTIC WEIGHT AVERAGING + +001 055 + +002 056 + +003 057 + +Anonymous Author + +Affiliation / Address line 1 + +006 Affiliation / Address line 2 + +Affiliation / Address line 3 + +email@domain + +Anonymouser Author + +Affiliation / Address line 1 + +Affiliation / Address line 2 + +Affiliation / Address line 3 + +email@domain + +Anonymousest Author 058 + +Affiliation / Address line 1 059 + +Affiliation / Address line 2 060 + +Affiliation / Address line 3 + +email@domain + +§ ABSTRACT + +This paper introduces Bayesian uncertainty modeling using Stochastic Weight Averaging-Gaussian (SWAG) in Natural + +016 Language Understanding (NLU) tasks. We apply the approach to standard + +018 tasks in natural language inference (NLI) and demonstrate the effectiveness of the method in terms of prediction accuracy + +021 and correlation with human annotation disagreements. We argue that the uncer- + +023 tainty representations in SWAG better reflect subjective interpretation and the nat- + +026 ural variation that is also present in human language understanding. The results re- + +028 veal the importance of uncertainty modeling, an often neglected aspect of neural language modeling, in NLU tasks. + +031 + +§ 1 INTRODUCTION + +033 + +Arguably, human language understanding is not objective nor deterministic. The same utterance or + +036 text can be interpreted in different ways by different people depending on their language standards, + +038 background knowledge and world views, the linguistic context, as well as the situation in which the utterance or text appears. This uncertainty about potential readings is typically not modeled + +043 in Natural Language Understanding (NLU) re- search and is often ignored in NLU benchmarks and datasets. Instead, they usually assign a single interpretation as a gold standard to be predicted by an artificial system ignoring the inherent ambiguity of language and potential disagreements that humans arrive at. + +Some datasets like SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) do, however, contain information about different readings in the + +053 form of annotation disagreement. These datasets + +include the labels from five different rounds of an- 065 notation which show in some cases clear disagree- + +ment about the correct label for the sentence pair. 067 Those labeling discrepancies can certainly be a result of annotation mistakes but more commonly + +they arise from differences in understanding the 070 task, the given information and how it relates to + +world knowledge and personal experience. 
072 + +Moving towards uncertainty-aware neural lan- + +guage models, we present our initial results us- 075 ing Stochastic Weight Averaging (SWA) (Izmailov + +et al., 2018) and SWA-Gaussian (SWAG) (Mad- 077 dox et al., 2019) on the task of Natural Language Inference. SWAG provides a scalable approach to + +calibrate neural networks and to model uncertainty 080 presentations and is straightforward to apply with + +standard neural architectures. Our study addresses 082 the two main questions: + + * How does uncertainty modeling using SWAG + +influence prediction performance and gener- 085 alization in NLI tasks? + +087 + + * How well does the calibrated model reflect + +human disagreement and annotation vari- 089 + +ance? 090 + +In this paper, we first test the performance of 092 SWA and SWAG in SNLI and MNLI tasks. We then study if adding weight averaging improves + +the generalization power of NLI models as tested 095 through cross-dataset experiments. Finally, we + +analyse the probability distributions from SWA 097 + +and SWAG to test how well the model uncertainty 098 + +corresponds to annotator disagreements. 099 + +§ 2 BACKGROUND AND RELATED WORK + +102 + +§ 2.1 UNCERTAINTY IN HUMAN ANNOTATIONS + +In a recent position paper Plank (2022) argue that instead of taking human label variation as a prob- + +lem, we should embrace it as an opportunity and 107 take it into consideration in all the steps of the ML 109 pipeline: data, modeling and evaluation. The paper provides a comprehensive survey of research on (i) reasons for human label variation, (ii) modeling human label variation, and (iii) evaluating with human label variation. + +Pavlick and Kwiatkowski (2019) studied human disagreements in NLI tasks and argue that we should move to an evaluation objective that more closely corresponds to the natural interpretation variance that exists in data. Such a move would require that NLU models be properly calibrated to reflect the distribution we can expect and, hence, move to a more natural inference engine. + +Chen et al. (2020) propose Uncertain NLI (UNLI), a task that moves away from categorical labels into probabilistic values. They use a scalar regression model and show that the model predictions correlate with human judgement. + +§ 2.2 REPRESENTING MODEL UNCERTAINTY + +The approach to uncertainty modeling that we consider is related to the well-established technique of model ensembling. Stochastic optimization procedures applied in training deep neural networks are non-deterministic and depend on hyper-parameters and initial seeds. Ensembles have been used as a pragmatic solution to average over several solutions, and the positive impact on model performance pushed ensembling into the standard toolbox of deep learning. Related to en-sembling is the technique of checkpoint averaging (refer to e.g. Gao et al., 2022), which is also known to improve performance. + +Intuitively, ensembles and checkpoint averages also reflect the idea of different views and interpretations of the data and, therefore, provide a framework for uncertainty modeling. SWA and SWAG build on that idea, and SWAG provides a generic and efficient approach for approximating Bayesian uncertainty and model calibration. 
+ +SWA (Izmailov et al., 2018) is a checkpoint averaging method that tracks the optimization trajectory for a model during training, using the average of encountered values as the eventual parameters: + +$$ +{\theta }_{\mathrm{{SWA}}} = \frac{1}{T}\mathop{\sum }\limits_{{i = 1}}^{T}{\theta }_{i} \tag{1} +$$ + +with ${\theta }_{\mathrm{{SWA}}}$ denoting the SWA solution for parame- + +161 ter $\theta$ after $\mathrm{T}$ epochs of training. + +SWAG (Maddox et al., 2019) extends this 162 + +method to estimate Gaussian posteriors for model 163 parameters, by also estimating a covariance matrix for the parameters. For computational feasibility, a low-rank plus diagonal approximation to the covariance matrix is used: + +168 + +$$ +{\sum }_{\text{ low-rank }} \approx \frac{1}{T - 1}\mathop{\sum }\limits_{{i = 1}}^{T}\left( {{\theta }_{i} - {\widehat{\theta }}_{i}}\right) {\left( {\theta }_{i} - {\widehat{\theta }}_{i}\right) }^{T} \tag{2} +$$ + +$$ +{\sum }_{\text{ diag }} = \operatorname{diag}\left( {\frac{1}{T}\mathop{\sum }\limits_{{i = 1}}^{T}{\theta }_{i}^{2} - {\theta }_{\mathrm{{SWA}}}^{2}}\right) \tag{3} +$$ + +where ${\widehat{\theta }}_{i}$ in (2) is the running estimate of the parameters’ mean obtained from the first $i$ samples. + +The resulting posterior approximations are given 178 by + +$$ +{\theta }_{\mathrm{{SWAG}}} \sim \mathcal{N}\left( {{\theta }_{\mathrm{{SWA}}},\frac{1}{2}\left( {{\sum }_{\text{ diag }} + {\sum }_{\text{ low-rank }}}\right) }\right) . +$$ + +180(4) + +Once the posteriors are thus approximated, in test time, the model is utilized by sampling from the + +approximated posteriors for $N$ times, and tak- 185 ing the average of the predicted distributions from these samples as the answer of the model. + +One of the advantages of SWAG is the possi- 188 bility to seamlessly start with any pre-trained so- + +lution. Approximating the posterior is then done 190 during fine-tuning without the need to change the underlying model. + +193 + +§ 3 EXPERIMENTS + +195 + +We test the performance of SWA and SWAG on + +the natural language inference task using three 198 NLI datasets, including cross-dataset experiments, + +and study the effect on both hard and soft labeling. 200 + +§ 3.1 DATASETS + +We use Stanford Natural Language Inference cor- + +pus (SNLI) (Bowman et al., 2015) and Multi- 205 Genre Natural Language Inference (MNLI) corpus (Williams et al., 2018) as the datasets in our experiments. We also study cross-dataset generalisation capability of the model with and without weight averaging. For those experiments we also include SICK (Marelli et al., 2014) as a test set. In cross-dataset generalization experiments we first fine-tune the model with a training data from one NLI dataset (e.g. SNLI) and then test with a test + +set from another NLI dataset (e.g. MNLI-mm). 215 + +SNLI is a dataset of ${570}\mathrm{k}$ sentence pairs which have been manually labeled with entailment, contradiction, and neutral labels. The source for the premise sentences in SNLI were image captions from the Flickr30k corpus (Young et al., 2014). + +MNLI is made of ${433}\mathrm{\;k}$ sentence pairs labeled with entailment, contradiction and neutral, containing examples from ten genres of written and spoken English. Five of the genres are included in the training set. 
The development and test sets have been split into matched (MNLI-m) and mismatched (MNLI-mm) sets, where the former includes only sentences from the same genres as the training data, and the latter includes genres not present in the training data. ${}^{1}$ + +SICK includes 9,840 examples with logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was constructed automatically by taking pairs of sentences from a random subset of the $8\mathrm{\;K}$ Image-Flickr (Young et al., 2014) and the SemEval 2012 STS MSRVideo Description (Agirre et al., 2012) datasets by using rule-based approach to construct examples for the different logical inference types. + +§ 3.2 METHODS + +In all the experiments we fine tune a pre-trained RoBERTa-base model (Liu et al., 2019) from the Hugging Face Transformers library (Wolf et al., 2020). As a common practice in the NLI tasks, we use the majority-vote gold labels for training even if multiple annotations are available. + +We add stochastic weight averaging to the RoBERTa model by using the SWA implementation from PyTorch 1.12 and the SWAG implementation by (Maddox et al., 2019) To study how well SWA and SWAG perform in NLI as compared to a baseline model, we ran the same fine-tuning with SNLI and MNLI datasets utilizing SWA and SWAG for weight averaging. + +max width= + +$\mathbf{{Dataset}}$ $\mathbf{{Method}}$ Acc (%) SD $\Delta$ + +1-5 +SNLI base 90.80 0.26 - + +1-5 +SNLI SWA 91.47 0.24 +0.67 + +1-5 +SNLI SWAG 91.59 0.14 +0.79 + +1-5 +MNLI-m base 86.53 0.20 - + +1-5 +MNLI-m SWA 87.60 0.19 $+ {1.07}$ + +1-5 +MNLI-m SWAG 87.76 0.12 +1.23 + +1-5 +MNLI-mm base 86.31 0.26 - + +1-5 +MNLI-mm SWA 87.34 0.29 +1.03 + +1-5 +MNLI-mm SWAG 87.51 0.19 +1.20 + +1-5 + +Table 1: Comparison of SWA and SWAG performance on NLI benchmarks (mean accuracy and standard deviation over 5 runs). $\Delta$ is the difference to the baseline result (base) with no weight averaging. + +270 + +271 + +272 + +273 + +274 + +275 + +276 + +277 + +278 + +279 + +280 + +281 + +283 + +§ 3.3 RESULTS + +285 + +286 + +The standard evaluation for the NLI task is the ac- 287 + +curacy on aggregated gold labels. However, as two 288 + +of the test data sets (from SNLI and MNLI) also 289 contains multiple human annotations, we also use + +those for measuring the cross entropy of the pre- 291 dicted distribution on the human label distribution + +(soft labeling, e.g. Peterson et al., 2019; Pavlick 293 and Kwiatkowski, 2019). + +296 + +§ 3.3.1 ACCURACY + +298 + +The basic classification results are in Table 1. We + +report average accuracies and standard deviation 300 + +over 5 runs with different random seeds. 301 + +Both SWA and SWAG provide significant im- + +provements over the baseline without weight aver- 303 + +aging. SWAG performs slightly better than SWA 304 across all the three experiments. + +In order to test if weight averaging improves the 306 generalization capability of NLI models, we fur- + +ther performed cross-dataset generalization tests 308 following (Talman and Chatzikyriakidis, 2019). The results are reported in Table 2. + +The results of cross-dataset experiments are + +slightly mixed: We do not notice a clear advan- 313 tage of SWAG over SWA, but with the exception of training with MNLI and testing with SICK, we do notice improvement for weight averaging approaches as compared to the baseline. 
The performance on SICK drops significantly in all cases and the difference between the approaches is minimal, showing that the NLI training data is not a good fit for that benchmark.

The other cross-dataset results highlight the advantage of weight averaging, indicating that the improved modeling of uncertainty can lead to better generalizations.

| Dataset | Method | Acc (%) | SD | $\Delta$ |
|---|---|---|---|---|
| SNLI $\rightarrow$ MNLI-m | base | 77.31 | 0.57 | - |
| SNLI $\rightarrow$ MNLI-m | SWA | 79.67 | 0.37 | +2.37 |
| SNLI $\rightarrow$ MNLI-m | SWAG | 79.33 | 0.21 | +2.03 |
| SNLI $\rightarrow$ MNLI-mm | base | 77.40 | 0.78 | - |
| SNLI $\rightarrow$ MNLI-mm | SWA | 79.44 | 0.19 | +2.04 |
| SNLI $\rightarrow$ MNLI-mm | SWAG | 79.24 | 0.29 | +1.84 |
| SNLI $\rightarrow$ SICK | base | 57.08 | 0.77 | - |
| SNLI $\rightarrow$ SICK | SWA | 57.09 | 0.32 | +0.01 |
| SNLI $\rightarrow$ SICK | SWAG | 57.17 | 0.37 | +0.08 |
| MNLI $\rightarrow$ SNLI | base | 82.84 | 0.74 | - |
| MNLI $\rightarrow$ SNLI | SWA | 84.15 | 0.35 | +1.31 |
| MNLI $\rightarrow$ SNLI | SWAG | 84.45 | 0.27 | +1.61 |
| MNLI $\rightarrow$ SICK | base | 56.63 | 0.94 | - |
| MNLI $\rightarrow$ SICK | SWA | 56.17 | 0.60 | -0.46 |
| MNLI $\rightarrow$ SICK | SWAG | 56.53 | 0.91 | -0.10 |

Table 2: Cross-dataset experiments with and without weight averaging (mean accuracy and standard deviation over 5 runs with different random seeds), where the left-hand side of the arrow is the training set and the right-hand side is the testing set.

---

${}^{1}$ As the test data for MNLI have not been made publicly available, we use the development sets when reporting the results for MNLI.

${}^{2}$ https://pytorch.org/docs/1.12/optim.html#stochastic-weight-averaging

${}^{3}$ https://github.com/wjmaddox/swa_gaussian

---

§ 3.3.2 CROSS ENTROPY

We also test how well the weight averaging approaches can be used to model annotator disagreement and annotation uncertainty in the NLI test sets of SNLI and MNLI. These two datasets come with five annotation labels for every data point, often with high disagreement between the human annotators, indicating inherently confusing data points with high aleatoric uncertainty (Der Kiureghian and Ditlevsen, 2009). To quantify the goodness of fit of the model predictions, we calculate the cross entropy between the predicted and annotation distributions.${}^{4}$

Table 3 shows the resulting cross entropy values, with lower values denoting more faithful predictions. SWA and SWAG consistently produce distributions more similar to those of the annotations, complementing their overall better accuracy results (Section 3.3.1). In contrast to the accuracy results, here SWAG outperforms SWA in all cases, indicating that the Gaussian posterior helps to model the data uncertainty more accurately. The results also carry over to the cross-dataset experiments, as shown in the table.
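As a concrete illustration of this evaluation, the sketch below computes the per-item cross entropy between a model's predicted distribution and the empirical distribution of the annotator labels; the label names and the example values are illustrative assumptions, not taken from the data, and the corpus-level numbers would average this quantity over all test items.

```python
import numpy as np

LABELS = ["entailment", "neutral", "contradiction"]

def annotation_distribution(annotator_labels):
    """Empirical label distribution over the five annotations of one item."""
    counts = np.array([annotator_labels.count(l) for l in LABELS], dtype=float)
    return counts / counts.sum()

def soft_label_cross_entropy(pred_probs, annotator_labels, eps=1e-12):
    """H(p_human, p_model) = -sum_c p_human(c) * log p_model(c)."""
    target = annotation_distribution(annotator_labels)
    return float(-(target * np.log(np.asarray(pred_probs) + eps)).sum())

# Four of five annotators chose "entailment", one chose "neutral":
print(soft_label_cross_entropy([0.7, 0.2, 0.1], ["entailment"] * 4 + ["neutral"]))
```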
The comparison between system predictions and annotator variation deserves some further analysis. A preliminary study (see examples in Appendix A) indicates that the prediction uncertainty of SWAG for individual instances follows human annotation confusion very well. Furthermore, we identified cases with a larger mismatch between system predictions and human disagreement, where the latter is mainly caused by erroneous or at least questionable annotation decisions. This points to the use of SWAG in an active learning scenario, where annotation noise can be identified using a well-calibrated prediction model.

| Dataset | Method | Cross entropy | $\Delta$ |
|---|---|---|---|
| SNLI | base | 0.83 | - |
| SNLI | SWA | 0.75 | -0.08 |
| SNLI | SWAG | 0.69 | -0.14 |
| MNLI-m | base | 0.87 | - |
| MNLI-m | SWA | 0.80 | -0.07 |
| MNLI-m | SWAG | 0.73 | -0.14 |
| MNLI-mm | base | 0.84 | - |
| MNLI-mm | SWA | 0.77 | -0.07 |
| MNLI-mm | SWAG | 0.69 | -0.15 |
| SNLI $\rightarrow$ MNLI-m | base | 1.13 | - |
| SNLI $\rightarrow$ MNLI-m | SWA | 0.90 | -0.23 |
| SNLI $\rightarrow$ MNLI-m | SWAG | 0.80 | -0.33 |
| SNLI $\rightarrow$ MNLI-mm | base | 1.12 | - |
| SNLI $\rightarrow$ MNLI-mm | SWA | 0.88 | -0.24 |
| SNLI $\rightarrow$ MNLI-mm | SWAG | 0.79 | -0.33 |
| MNLI $\rightarrow$ SNLI | base | 1.04 | - |
| MNLI $\rightarrow$ SNLI | SWA | 0.97 | -0.07 |
| MNLI $\rightarrow$ SNLI | SWAG | 0.89 | -0.15 |

Table 3: Comparison of cross entropies between model predictions and data annotation distributions for the base, SWA, and SWAG methods. $\Delta$ is the difference to the baseline cross entropy values.

§ 4 CONCLUSIONS

Our results show that weight averaging provides consistent and significant improvements on both the SNLI and MNLI datasets. The cross-dataset results are slightly mixed but also show a trend of improved cross-domain generalization. Finally, we demonstrate a clear increase in the correlation with human annotation variance when comparing SWAG with non-Bayesian approaches.

For future work we consider making use of multiple annotations also during training, as well as extensions of SWAG such as MultiSWAG (Wilson and Izmailov, 2020). We also plan to test the methods on different NLU datasets, especially those with a high number of annotations (e.g. Nie et al., 2020), and to compare the annotation variation and system predictions in more detail.

---

${}^{4}$ Note that for the baseline and SWA models, we consider the output of the final softmax function as the predicted distribution, while for the SWAG model, we use the average output distribution from $N = 20$ sampled models.

---
\ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wEJaCIkgLG/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wEJaCIkgLG/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..58fb11de6c82ed48e9002cc7ea9070be22511286 --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wEJaCIkgLG/Initial_manuscript_md/Initial_manuscript.md

# Danish Clinical Named Entity Recognition and Relation Extraction

## Abstract

Electronic health records contain important information regarding the patients' medical history, but much of this information is stored in unstructured narrative text. This paper presents the first Danish clinical named entity recognition and relation extraction dataset, covering six types of clinical events, six types of attributes, and three types of relations. The dataset contains 11,607 paragraphs from Danish electronic health records with 54,631 clinical events, 41,954 attributes, and 14,604 relations. We detail the methodology for developing the annotation scheme, and train a transformer-based architecture on the developed dataset, reaching macro F1 scores of 60.05%, 44.85%, and 70.64% for clinical events, attributes, and relations, respectively.

## 1 Introduction

Electronic health records (EHR) contain important information regarding the patients' medical history, including diagnoses, medications, treatment plans, allergies, and test results. However, much of this information is stored in unstructured narrative text. While this information could be used to guide diagnostic decision making and treatment plans, the unstructured format makes it infeasible to exploit fully in clinical practice and research.

Natural language processing (NLP) algorithms could be used to transform the unstructured narrative text of the EHR into structured information and give medical doctors (MD) a fast overview of even a medical history spanning multiple years. NLP models' ability to process and extract information from written text keeps improving, with benchmark-breaking models being published on a regular basis. For example, transformer-based models such as GPT-3 (Brown et al., 2020), BERT (Devlin et al., 2019), and ELECTRA (Clark et al., 2020) have recently shown promising results on many NLP tasks, e.g. named entity recognition and relation extraction (NER). In NER, models are trained to tag words with predefined entities and to find the relations between them. In clinical NER, entities such as diseases, treatments, drugs, and tests have been extracted automatically from EHRs. However, many of the developed datasets are only in English and cover specific clinical specialities or note types (Uzuner et al., 2007, 2010; Bethard et al., 2016).

This paper describes the methodology for developing the first Danish clinical NER dataset. The dataset consists of text paragraphs from Danish EHRs spanning multiple departments and note types.
First, the paper describes the clinical dataset, the strategy for choosing entities tailored to extract important information from EHRs, and the annotation scheme. Next, we train a transformer-based architecture on the developed NER dataset.

## 2 Methods

This section describes the data, annotation scheme, and model used for Danish clinical NER.

### 2.1 Data

We extracted 11,607 paragraphs with a length between 11 and 75 words from EHRs from Odense University Hospital in Denmark. Paragraphs were sampled randomly from different EHR note types across every department of the hospital to ensure the data distribution would resemble that of the EHRs: 46% were from clinical contacts, 13% from primary journals, 10% from care data, 3% from epicrises, 3% from ambulatory care contacts, 2% from surgical notes, 2% from emergency room journals, and 20% from 55 different minor EHR note types. Paragraphs were lowercased and anonymised by two of the authors.
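As a rough sketch of this filtering and sampling step (the `notes` structure and field names are hypothetical; the actual extraction pipeline is not described further in this paper):

```python
import random

def eligible(paragraph: str) -> bool:
    """Keep paragraphs of 11-75 words, as described above."""
    return 11 <= len(paragraph.split()) <= 75

def sample_paragraphs(notes, k=11_607, seed=0):
    """notes: hypothetical list of (note_type, paragraph_text) pairs.
    Uniform random sampling over the pooled notes preserves the natural
    note-type distribution of the EHRs."""
    pool = [(t, p.lower()) for t, p in notes if eligible(p)]  # lowercasing as in Section 2.1
    random.Random(seed).shuffle(pool)
    return pool[:k]
```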
| Clinical event | Description |
|---|---|
| **Disease** | A disorder of structure or function, especially one that has a known cause and a distinctive group of symptoms, signs, or anatomical changes. Examples include cancer, influenza, and narcolepsy. |
| **Symptom** | A physical or mental feature which is regarded as indicating a condition of disease, particularly such a feature that is apparent to the patient. We include abnormal findings, which the MD makes when examining the patient objectively, as these sometimes coincide with symptoms, e.g. bruises. Examples include headache, stomach ache, and pain. |
| **Diagnostic** | Any tool or method concerned with the diagnosis of illnesses or other problems. Includes measurements and tests. Examples include CT scans, blood samples, and temperatures. |
| **Treatment** | Any medical care given to a patient for an illness or injury. Examples include medication, plaster, and rehabilitation. |
| **Anatomy** | Any part of human anatomy. Includes body fluids and excrements. Examples include arms, organs, and blood. |
| **Result** | All results of diagnostics that do not carry any meaning without being coupled to the diagnostic. Examples include numbers that indicate length, temperature, or volumes. Diseases or symptoms found by diagnostics are annotated as such, e.g. a tumour found by a CT scan. |

Table 1: Description of clinical events. Descriptions were inspired by the Oxford English Dictionary.

### 2.2 Annotation

#### 2.2.1 Annotation scheme

Two MDs with expert clinical domain knowledge developed the annotation scheme through an iterative process of making annotation rules and testing them. The annotation rules were made to extract clinically relevant information from the medical history. The focus was for the rules to be as complete as possible, capturing all important information about the medical history while still being simple for the annotators to use.

We extracted three types of information: clinical events, the attributes of the clinical events, and relations between the clinical events.

Clinical events were: diseases; symptoms, including abnormal findings; diagnostics; treatments; anatomies, including body fluids and excrements; and results. Symptoms and abnormal findings were joined into one class as they sometimes coincided. Normal findings were not included, as there were so many that they would cloud the visualisation of the history. Table 1 shows all clinical events and their descriptions as defined by the medical experts.
| Attribute | Description |
|---|---|
| **Prior** | Entities that occurred in prior admissions or in the distant past. Includes treatments that are being stopped at that point in time. |
| **Current** | Entities that occur in the present. Includes prescribed medicine. |
| **Future** | Entities that occur or might occur in the future, e.g. the risk of skin cancer, or ordering diagnostics for a later day. |
| **Doubt** | Any entity that is not confirmed. Includes any treatments that might need to be started in the future. |
| **Negation** | Entities such as diseases or symptoms that are mentioned as not being present. |
| **Non-patient** | Entities that are not related to the patient in question. One example is the disease history of the patient's relatives. |

Table 2: Description of attributes.

Clinical events were further described by their attributes: prior; current; future; doubt; negation; and non-patient. All clinical events could take one of the six attributes except anatomies and results. Anatomies did not take any attributes, while results could only take a prior or current attribute. Table 2 shows all attributes and their descriptions.

Clinical events could connect to each other in limited ways through one-way relations. Diseases, diagnostics, and symptoms could connect to anatomies through a "has location" relation. Diseases, symptoms, and anatomies could connect to treatments through an "is treated with" relation. Diagnostics could connect to results through a "has result" relation.

Figure 1 shows an overview of the clinical events, attributes, and relations. Appendix A shows the full annotation guidelines with further details and explanations for the annotators.

![019640f0-5e15-7636-abed-e6af67e93aa3_2_379_158_897_220_0.jpg](images/019640f0-5e15-7636-abed-e6af67e93aa3_2_379_158_897_220_0.jpg)

Figure 1: (A) Clinical events and relations between them. Symptoms include abnormal findings. Anatomies include body fluids and excrements. Diagnostics include measurements and tests. Blue: "is treated with". Orange: "has location". Grey: "has result". (B) Attributes. Anatomy (dashed lines) takes no attributes. Other clinical events must take one attribute. Results only take prior or current attributes.

#### 2.2.2 Annotation process

Six annotators were recruited for the task. Five were Master of Science in Medicine students and one was an MD. Figure 2 shows the annotator training process. It included reading the annotation guide and an iterative process of annotating a learning set of 55 paragraphs (not included in the dataset) followed by error analysis, until a final test was made on a set of 98 gold paragraphs annotated by an expert MD. Paragraphs were annotated using the CLAMP software (Soysal et al., 2017). We report the micro F1 of each annotator on the gold set. Figure 3 shows an example of an annotated paragraph.

![019640f0-5e15-7636-abed-e6af67e93aa3_2_416_623_171_249_0.jpg](images/019640f0-5e15-7636-abed-e6af67e93aa3_2_416_623_171_249_0.jpg)

Figure 2: Annotator training process. Figure inspired by Sun et al. (2013).

![019640f0-5e15-7636-abed-e6af67e93aa3_2_281_996_440_205_0.jpg](images/019640f0-5e15-7636-abed-e6af67e93aa3_2_281_996_440_205_0.jpg)

Figure 3: Example of an annotated paragraph. % signifies that no attribute could be assigned to the clinical event per the annotation scheme.
### 2.3 Entity and relation extraction model

This section describes the architecture of the Princeton University Relation Extraction system (PURE) (Zhong and Chen, 2021), which we used and adapted for Danish clinical NER. It further describes the dataset used and the training of the models.

#### 2.3.1 Model architecture

PURE is a NER deep learning model based on a transformer structure. The model has separate entity and relation extraction parts. For entity extraction, the model takes as input all possible text spans up to a maximum length. A transformer extracts contextual word embeddings for the start and end token of each span. These are concatenated with a learned span width embedding and classified by a feedforward network.

When extracting relations, for each candidate pair of entities, the text is passed through a transformer with entity start and end marker tokens, which also indicate the entity type, inserted around the subject and object entity. The concatenation of the start marker tokens for the candidate subject and object entity is classified by a feedforward neural network.

We used PURE's entity extraction approach for clinical events and its relation extraction approach for relations between clinical events. For attributes, we used our own approach adapted from the PURE relation extraction approach: we inserted clinical event start and end marker tokens, passed all tokens through a transformer, concatenated the start and end marker tokens, and classified the attribute using a feedforward network. The marker tokens were used for classification instead of the word(s) forming the clinical event, to guide the model to look more at the context than at the specific word, the context being the important factor in attribute classification. Additionally, enriching the input with the type of the clinical event could guide the model if attributes were described differently for different clinical events.

![019640f0-5e15-7636-abed-e6af67e93aa3_2_840_623_621_243_0.jpg](images/019640f0-5e15-7636-abed-e6af67e93aa3_2_840_623_621_243_0.jpg)

Figure 4: (A) Classification of clinical events from the start and end tokens of a span. Span width embedding not depicted. (B) Classification of an attribute using clinical event marker tokens. (C) Classification of a relation using subject/object and clinical event marker tokens. Figure inspired by Zhong and Chen (2021).

Figure 4 shows the three types of extraction tasks.
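A minimal sketch of the span-classification side of this architecture is shown below; the hidden sizes, the number of label classes, and the faked encoder output are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from torch import nn

class SpanClassifier(nn.Module):
    """PURE-style span head: the contextual embeddings of a span's start and
    end tokens are concatenated with a learned span-width embedding and
    classified by a feedforward network (cf. Figure 4A)."""

    def __init__(self, hidden=256, max_width=8, width_dim=32, n_labels=7):
        super().__init__()
        self.width_emb = nn.Embedding(max_width, width_dim)
        self.ffn = nn.Sequential(                 # two hidden layers, as in the best event model
            nn.Linear(2 * hidden + width_dim, 450), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(450, 450), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(450, n_labels),             # six event types plus a negative class
        )

    def forward(self, token_states, start, end):
        # token_states: (seq_len, hidden) encoder outputs for one paragraph
        width = self.width_emb(torch.tensor(end - start))
        feats = torch.cat([token_states[start], token_states[end], width])
        return self.ffn(feats)

# The attribute and relation heads instead concatenate the encoder states at
# the inserted marker-token positions and classify with a similar network.
states = torch.randn(128, 256)                    # stand-in for ELECTRA outputs
logits = SpanClassifier()(states, start=10, end=12)
print(logits.shape)                               # torch.Size([7])
```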
#### 2.3.2 Datasets

Table 3 shows the number of clinical events, attributes, and relations by type in the train, validation, and test sets. The dataset had a total of 11,607 paragraphs, each containing a varying number of clinical events, attributes, and relations. On average, each paragraph contained 4.7 clinical events, 3.6 attributes, and 1.3 relations. We split the paragraphs into train, validation, and test sets with an approximate 80%-10%-10% ratio for each type of clinical event, attribute, and relation. The sets were unbalanced with respect to type of entity or relation; e.g. the attribute training set contained 23,217 current but only 480 non-patient attributes. All datasets were in the JSON format used by PURE (see Zhong and Chen (2021)).

#### 2.3.3 Training

When training the clinical event extraction model, we used a Danish clinical ELECTRA, pretrained on the narrative text of 299,718 EHRs from Odense University Hospital, as the transformer base (Pedersen et al., 2022). The model had ~13M parameters and consisted of 12 transformer layers with 4 attention heads. We used a dropout of 0.1 after the last ELECTRA hidden layer output. We tested classification heads with two hidden layers of varying size, each followed by a dropout of 0.2 and a ReLU activation function. We used a maximum span of 8 and a train batch size of 32. We trained for 100 epochs using the AdamW optimizer with a learning rate of 1e-5 for the transformer layers and 1e-4 for the classification head, and a warm-up proportion of 0.1.

When training each of the models for extracting attributes and relations, we used the same transformer base with a normalisation layer and a dropout of 0.1 after the concatenation of tokens. We tested classification heads with two hidden layers of varying size, each followed by a dropout of 0.2 and a ReLU activation function. We further tested a classification head consisting only of a single classification layer. We used a train batch size of 32 and a maximum sequence length of 128. We trained for 20 epochs using the AdamW optimizer with a learning rate of 2e-5 and a warm-up proportion of 0.1.

We modified the training method of PURE to guide the models towards equal performance on all classes. We used a weighted loss function to counteract the unbalanced dataset (experiment in Appendix B). Class weights were calculated for the training of each model using the default formula in Scikit-learn (Pedregosa et al., 2011):

$$
{w}_{x} = \frac{{n}_{\text{samples}}}{{n}_{\text{classes}} \cdot {n}_{x}} \tag{1}
$$

where $x$ is the class, ${n}_{\text{samples}}$ is the total number of samples, ${n}_{\text{classes}}$ is the number of classes, and ${n}_{x}$ is the number of samples in class $x$. The negative class, i.e. samples not to be given any label by the model, was given a weight of 1.

To further enforce equal performance on all classes, we chose the best model for each of the clinical event, attribute, and relation extraction tasks as the model iteration with the best macro F1 on the validation set, rather than the micro F1 standard of PURE (experiment in Appendix B). The negative class was excluded when calculating the F1. We only trained the attribute and relation models to make classifications that were allowed for the connected clinical events according to the annotation scheme. Appendix C shows the results of the hyperparameter search. We report the micro and macro recall, precision, and F1 for the best models on the test set.
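A small sketch of this weighting, using the attribute training counts from Table 3; the label ids are hypothetical, with class 0 standing in for the negative class at a fixed weight of 1:

```python
import numpy as np
import torch
from sklearn.utils.class_weight import compute_class_weight

# Attribute training counts from Table 3; label ids 1..6 are hypothetical.
counts = {1: 2028, 2: 23217, 3: 1237, 4: 2479, 5: 3890, 6: 480}
y = np.repeat(list(counts.keys()), list(counts.values()))

# "balanced" implements Eq. (1): w_x = n_samples / (n_classes * n_x)
weights = compute_class_weight("balanced", classes=np.arange(1, 7), y=y)
class_weights = torch.tensor(np.concatenate([[1.0], weights]), dtype=torch.float)

loss_fn = torch.nn.CrossEntropyLoss(weight=class_weights)  # negative class fixed at 1
print(dict(zip(range(7), class_weights.round(decimals=2).tolist())))
```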
## 3 Results

This section presents the agreement of the annotators on the gold set and the results of the Danish clinical NER models.

### 3.1 Annotation

Table 4 shows the annotators' micro F1 performance on the gold set. For clinical events, it ranged from 83.71% to 91.24% (average 85.62%) for overlapping matches, and from 74.12% to 85.15% (average 77.67%) for exact matches. For attributes, it ranged from 79.21% to 86.19% (average 81.71%), and for relations from 71.28% to 90.06% (average 77.79%).

### 3.2 Entity and relation extraction model

The models with the best validation performance in the hyperparameter search were:

- A clinical event extraction model with two hidden layers of size 450 in the classification head.
- An attribute extraction model with a single classification layer.
- A relation extraction model with two hidden layers of size 150 in the classification head.

| | Train (% of row total) | Validation (% of row total) | Test (% of row total) | Total (% of column total) |
|---|---|---|---|---|
| Paragraphs | 9,687 (83%) | 960 (8%) | 960 (8%) | 11,607 (100%) |
| **Clinical events** | | | | |
| Diseases | 2,033 (78%) | 295 (11%) | 272 (10%) | 2,600 (5%) |
| Symptoms | 11,937 (80%) | 1,455 (10%) | 1,571 (10%) | 14,963 (27%) |
| Diagnostics | 8,921 (80%) | 1,095 (10%) | 1,194 (11%) | 11,210 (21%) |
| Treatments | 6,918 (79%) | 911 (10%) | 882 (10%) | 8,711 (16%) |
| Anatomies | 10,172 (80%) | 1,227 (10%) | 1,278 (10%) | 12,677 (23%) |
| Results | 3,522 (79%) | 473 (11%) | 475 (11%) | 4,470 (8%) |
| TOTAL | 43,503 (80%) | 5,456 (10%) | 5,672 (10%) | 54,631 (100%) |
| **Attributes** | | | | |
| Prior | 2,028 (80%) | 237 (9%) | 283 (11%) | 2,548 (6%) |
| Current | 23,217 (79%) | 3,021 (10%) | 3,109 (11%) | 29,347 (70%) |
| Future | 1,237 (79%) | 161 (10%) | 160 (10%) | 1,558 (4%) |
| Doubt | 2,479 (82%) | 263 (9%) | 289 (10%) | 3,031 (7%) |
| Negation | 3,890 (80%) | 496 (10%) | 500 (10%) | 4,886 (12%) |
| Non-patient | 480 (82%) | 51 (9%) | 53 (9%) | 584 (1%) |
| TOTAL | 33,331 (79%) | 4,229 (10%) | 4,394 (10%) | 41,954 (100%) |
| **Relations** | | | | |
| is treated with | 1,485 (80%) | 175 (9%) | 197 (11%) | 1,857 (13%) |
| has location | 6,501 (80%) | 779 (10%) | 823 (10%) | 8,103 (55%) |
| has result | 3,652 (79%) | 499 (11%) | 493 (11%) | 4,644 (32%) |
| TOTAL | 11,638 (80%) | 1,453 (10%) | 1,513 (10%) | 14,604 (100%) |

Table 3: Composition of the train, validation, and test sets by type of clinical event, attribute, and relation.
| Annotator | A | B | C | D | E | F |
|---|---|---|---|---|---|---|
| **Overlap match, micro F1 %** | | | | | | |
| Clinical event | 91.24 | 84.22 | 84.41 | 85.71 | 84.43 | 83.71 |
| Attribute | 86.19 | 83.06 | 79.21 | 81.29 | 79.75 | 80.75 |
| Relation | 90.06 | 76.97 | 75.60 | 77.01 | 71.28 | 75.84 |
| **Exact match, micro F1 %** | | | | | | |
| Clinical event | 85.15 | 76.08 | 76.29 | 78.69 | 74.12 | 75.71 |

Table 4: The anonymised annotators' performance on the gold set. Exact match: a match is defined as exactly the tokens annotated in the gold set, with the same label. Overlap match: a match is defined as at least one token overlapping with a gold-set annotation of the same label. Only an overlap match F1 is calculated for attributes and relations, as evaluating an exact match would propagate any error in the span of the clinical event to which the attribute or relation is connected.
| | Micro R % | Micro P % | Micro F1 % | Macro R % | Macro P % | Macro F1 % |
|---|---|---|---|---|---|---|
| **Overlap match** | | | | | | |
| Clinical events | 66.29 | 77.31 | 71.38 | 64.88 | 72.60 | 68.20 |
| **Exact match** | | | | | | |
| Clinical events | 60.97 | 65.64 | 63.22 | 59.84 | 61.30 | 60.05 |
| Attributes | 66.04 | 66.04 | 66.04 | 51.60 | 42.64 | 44.85 |
| Relations | 75.88 | 72.66 | 74.23 | 74.74 | 67.85 | 70.64 |
Table 5: Performance of the best clinical event, attribute, and relation extraction models on the test set. Attributes and relations are only reported with an exact match, as the models do not consider the span of the clinical event from which the attribute or relation is classified. R: Recall. P: Precision.

Table 5 shows the performance of the best models on the test set. Clinical events were extracted with an exact micro F1 of 63.22% and macro F1 of 60.05%, attributes with a micro F1 of 66.04% and macro F1 of 44.85%, and relations with a micro F1 of 74.23% and macro F1 of 70.64%. The negative class was excluded when calculating the recall, precision, and F1 scores.

Figure 5 shows the confusion matrices for clinical events, attributes, and relations. The confusion matrices include the clinical events and relations that were not extracted or were falsely extracted by the model ('O').

The model for clinical event extraction performed best on anatomies (69%) and worst on results (53%). 1,568 spans were falsely extracted as a clinical event, with symptoms being the most frequent (21%). The model for attribute extraction performed best on negations (84%) and worst on non-patient (23%). The model for relation extraction performed best on "has result" (93%) and worst on "is treated with" (62%). 432 false relations were extracted, of which "has location" was the most frequent misclassification (45%).

## 4 Discussion and limitations

This paper presented a methodology for developing a dataset for Danish clinical NER. It presented an annotation scheme for annotating all clinical events, their attributes, and relations that are relevant for the medical history. The dataset included text paragraphs from Danish EHRs spanning multiple departments and note types.

![019640f0-5e15-7636-abed-e6af67e93aa3_5_211_177_1222_446_0.jpg](images/019640f0-5e15-7636-abed-e6af67e93aa3_5_211_177_1222_446_0.jpg)

Figure 5: Confusion matrices of performance on (A) clinical events, (B) attributes, and (C) relations. 'O' counts the clinical events and relations that were not extracted or were falsely extracted by the model.

We trained and adapted PURE NER deep learning models to extract clinical events (overlap match macro F1 68.20%; exact match macro F1 60.05%), attributes of clinical events (macro F1 44.85%), and relations between clinical events (macro F1 70.64%). The results are promising for Danish clinical NER but need improvement. A discussion of possible improvements to the methodology, limitations, and future work is provided below.

The clinical event extraction model had similar performance on all classes, with accuracies between 53% (results) and 69% (anatomies). There was little contamination between classes, as most errors were caused by failure to extract or false extraction of a clinical event. There was some contamination between symptoms and diseases, with 12% of diseases being classified as symptoms and 5% of symptoms being classified as diseases. This supports claims by the annotators that diseases and symptoms are in some cases difficult to differentiate, and that extra attention must be given to differentiating them in the annotation guidelines.
The attribute extraction model had large differences in performance, with accuracies between 23% (non-patient) and 84% (negation). There were more misclassifications of the non-patient attribute as doubt (40%) than correct classifications. The future and doubt attributes had significant contamination between them, with 25% and 11% misclassifications as the other class, respectively. The many misclassifications between non-patient and doubt attributes, and especially between future and doubt attributes, could indicate that the model would improve if the non-patient, doubt, and future attributes were merged into a single class of uncertain attributes. This would most likely not significantly harm the usefulness of the model to MDs.

The fact that more prior attributes were misclassified as current (41%) than correctly classified (36%) likewise indicates that these two attributes could be merged into a single class of clinical events that occurred. This would, however, decrease the usefulness of the model, as it is important for MDs reviewing the medical history to know whether a clinical event is prior or current.

The relation model extracted 93% of the "has result" relations, and 62% and 69% of the "is treated with" and "has location" relations, respectively. The differences are likely caused by the fact that the "has result" relation only connects diagnostics to results, while the two other relations each cover three different one-way relationships.

In this paper, we only explored one type of NER model and tested a limited set of architectures and hyperparameters. Future work could include testing other architectures and enriching the model input with more information, e.g. the output of a text parser, which could help differentiate attributes dealing with the time aspect. The six annotators had an average micro F1 (overlap match) of 85.62%, 81.71%, and 77.79% for clinical events, attributes, and relations, respectively. Merging certain attributes and placing more emphasis on the differences between symptoms and diseases could increase these scores.

The Danish clinical NER dataset is not made publicly available as it contains sensitive information. We advise interested researchers to contact us regarding sharing possibilities.

## 5 Conclusions

This paper presented the methodology and annotation scheme for developing the first Danish clinical NER dataset. The corpus consists of 11,607 paragraphs annotated for six entity types, six attributes, and three relations. The corpus was used to fine-tune language models, which showed promising results for classifying the entities, attributes, and relations of the dataset.

## References

Steven Bethard, Guergana Savova, Wei-Te Chen, Leon Derczynski, James Pustejovsky, and Marc Verhagen. 2016. SemEval-2016 task 12: Clinical TempEval. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1052-1062.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901.

Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020.
ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations. https://openreview.net/forum?id=r1xMH1BtvB

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423

Jannik S Pedersen, Martin S Laursen, Cristina Soguero-Ruiz, Thiusius R Savarimuthu, Rasmus Søgaard Hansen, and Pernille J Vinholt. 2022. Domain over size: Clinical ELECTRA surpasses general BERT for bleeding site classification in the free text of electronic health records. In 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), pages 1-4. IEEE.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.

Ergin Soysal, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2017. CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines. Journal of the American Medical Informatics Association, 25(3):331-336. https://doi.org/10.1093/jamia/ocx132

Weiyi Sun, Anna Rumshisky, and Ozlem Uzuner. 2013. Annotating temporal information in clinical narratives. Journal of Biomedical Informatics, 46:S5-S12. Supplement: 2012 i2b2 NLP Challenge on Temporal Relations in Clinical Data. https://doi.org/10.1016/j.jbi.2013.07.004

Özlem Uzuner, Yuan Luo, and Peter Szolovits. 2007. Evaluating the state-of-the-art in automatic de-identification. Journal of the American Medical Informatics Association, 14(5):550-563.

Özlem Uzuner, Imre Solti, and Eithon Cadag. 2010. Extracting medication information from clinical text. Journal of the American Medical Informatics Association, 17(5):514-518.

Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 50-61.

## Appendices

## A Annotation guidelines

### A.1 Clinical events

#### A.1.1 Disease

Contains all diseases, including diseases that could be considered the result of a Diagnostic.

#### A.1.2 Symptom

Includes all symptoms and abnormal findings. Findings that are not abnormal should not be annotated. However, a negation of an abnormal finding should be annotated, because the abnormal finding is mentioned even though it is not present. For example, "fracture" should be annotated in the sentence "there is no sign of fracture."

If there is a negation of a non-abnormal finding, it should be annotated in the entity. For example, "cannot hear" is annotated in the sentence "patient cannot hear anything."
In the sentence "no symptoms," the word "symptoms" should not be annotated as a Symptom, as it does not contain any information.

In case a symptom or abnormal finding is found by a Diagnostic, it may coincide with the Result entity. It is annotated as Symptom if the entity provides sufficient meaning alone, for example "cyst" or "tumour."

If the Symptom cannot stand alone and one needs to know which Diagnostic was carried out in order to understand the result, the entity should instead be annotated as Result and have a "has result" relation from the Diagnostic entity. For example, this applies to "Temp: 24 C" and "Stix: 3+": "Temp" and "Stix" are annotated as Diagnostic with a "has result" relation to the Results "24 C" and "3+."

#### A.1.3 Result

Includes all results of a Diagnostic, e.g. values and blood test results.

A Result cannot stand on its own; a relation from the Diagnostic is needed for it to make sense. These can be entities like "stable", "positive", "negative", "24 C", or "3+".

Typically, this entity will appear in sentence structures with a colon: "Diagnostic: Result". Note that the two entities are mentioned very close to each other in the text, in this case with only a colon in between. An example could be "Temp: 24 C" or "Stix: 3+": "Temp" and "Stix" are annotated as Diagnostics with a "has result" relation to the Results "24 C" and "3+".

Entities that should instead be annotated as Symptom will typically be mentioned further away from, or completely lack, a Diagnostic, as a Symptom can stand alone and make sense. See also the description for Symptom.

#### A.1.4 Diagnostic

Includes all diagnostics, measurements, and tests. This can include CT scans, blood tests, MR scans, and recordings of a newborn's length, temperature, etc.

Note that "blood sample results" and "radiology description" are not a Diagnostic and should not be annotated.

If KAD is mentioned along with a volume, e.g. "KAD emptied of 200 mL," it is marked as Diagnostic - Result. If no volume is specified, KAD is annotated as Treatment.

#### A.1.5 Treatment

Includes all forms of treatment, including medication.

To annotate entities as concisely as possible, in for example the sentence "good effect of 2.5 mg morphine IV," only "morphine" should be annotated as Treatment.

In the sentence "treated for xxx," the word "treated" should not be annotated as Treatment, as it does not contain any information.

If KAD is mentioned without a volume indication, it should be annotated as Treatment. If KAD is mentioned with a volume, for example "KAD emptied of 200 mL," it should be annotated as Diagnostic - Result.

#### A.1.6 Anatomy

Includes all mentions of anatomies and things from the body (blood, feces, urine, sweat, etc.). Typically used to indicate the location of a Disease or Symptom, a Diagnostic, or a Treatment. Examples: "brain", "left foot", or "duodenum".

When an Anatomy is described by an adjacent word, for example "left", this should be included in the entity.

Remember to also annotate the Anatomy entities that are not linked to other entities.

### A.2 Attributes

#### A.2.1 Current

The entity is either present, carried out, or current.
If medication is prescribed to the patient, it should also be marked as Treatment - Current, as it can be assumed that the treatment will start and it may be the last time it is mentioned in the journal. On the other hand, "Scheduling a CT for Tuesday" should be marked as Future, as it will be described in a future medical note, for example together with the result.

#### A.2.2 Negation

The entity is not present. For example, if it is mentioned that the patient does not have a fracture, the fracture should be marked as Symptom - Negation. Note that the word "not" should not be part of the marked entity. However, if there is a negation of a normal finding, it should be annotated as such. For example, "cannot hear" in the sentence "patient cannot hear anything" is annotated as Symptom - Current.

#### A.2.3 Prior

The entity refers to a previous case, i.e. a previous hospitalisation, or something that happened a long time ago. For example, it should be annotated as a prior Treatment when a cast or drain is removed, as the treatment is finished. However, if a CT scan from the previous day is mentioned, it should be annotated as Current.

#### A.2.4 Future

Everything that takes place in the future. For example, cancer is annotated as Disease - Future if it is mentioned that "there is a risk of cancer if you use tanning beds too often."

An MRI scan planned for the next day is marked as Diagnostic - Future. However, if it is written that "the treatment with xxx starts" or "rp. xxx", it should be marked as Treatment - Current, as it is assumed that the treatment will certainly happen. Also includes references to possible future treatments.

#### A.2.5 Doubt

If the patient might have a disease that has not yet been confirmed, or if a Treatment should be given provided that certain things change. The difference between Doubt and Future is that Future is more certain (it is going to happen) while Doubt is more uncertain or conditional.

#### A.2.6 Non-patient

If an entity does not have a direct connection to the patient. This can occur when a general letter is sent out regarding cancer screening; cancer should then be annotated as Disease - Non-patient. If it is mentioned that the patient's mother had a certain disease, it should also be annotated in this way.

### A.3 Relations

When the entities are annotated, the relationships between them can be annotated. This is done by pulling the "From entity" over to the "To entity". The direction of the relationship is important. Therefore, pay attention to the name of the relationship and, if necessary, read "Entity - Relation - Entity" out loud and listen for whether it makes sense or whether the arrow needs to be reversed. CLAMP will show which relationships can be annotated for the pair being drawn between.

#### has location

From entities: Disease, Symptom, Diagnostic.
To entities: Anatomy.

#### has result

From entities: Diagnostic.
To entities: Result.

#### is treated with

From entities: Disease, Symptom, Anatomy.
To entities: Treatment.

The "is treated with" relation links the entities Disease, Symptom, and Anatomy to a Treatment. In some cases, sentences describing a required treatment could be linked to both an Anatomy and a Treatment entity. In this case, the Treatment should be linked to the Symptom instead of the Anatomy.
You should only link the Anatomy to the Treatment using the "is treated with" relation if the Treatment cannot be linked to anything else. Example: "Left knee skin scraping is treated with plaster." Annotation: skin scraping - "is treated with" - plaster.

### A.4 General notes

It is important not to annotate periods, commas, etc. unless they are part of an abbreviation. For example, in "Patient has cancer," only "cancer" and not "cancer." should be marked. If you double-click a word, CLAMP will only mark the word and not any punctuation next to it. This can make it a bit troublesome to include periods in abbreviations.

Entities should be annotated as concisely as possible without losing meaning. This means that in the sentence "there are signs of cancer," only "cancer" and not "signs of cancer" should be marked as an entity. If an entity has describing words next to it, the following rule can be used to decide how much should be annotated: in the sentence "pain in the front of the arm," only "arm" is marked as Anatomy, since "front" and "arm" are connected through the word "of"; in the sentence "pain in the left arm," "left arm" is marked as Anatomy, since there are no words between "left" and "arm". In sentences describing a prescription of medication, only the name is marked as Treatment, and not, for example, the quantity indication or the number of days.

Entities may not overlap with each other.
## B Selection of loss and evaluation metric

This appendix details experiments performed to test whether to use an unweighted or weighted loss, and whether to select the best model iteration using micro or macro F1.

The attribute extraction task was selected for testing the loss and evaluation metric because it was the most unbalanced. We ran the test with the Danish clinical ELECTRA transformer base with normalisation and a dropout of 0.1 after the concatenation of tokens, and a classification head with two hidden layers of size 75, each followed by a dropout of 0.2 and a ReLU activation function. We used a train batch size of 32 and a maximum sequence length of 128. We trained for 20 epochs using the AdamW optimizer with a learning rate of 2e-5 and a warm-up proportion of 0.1.

Class weights were calculated for the training of each model using the default formula in Scikit-learn (Pedregosa et al., 2011):

$$
{w}_{x} = \frac{{n}_{\text{samples}}}{{n}_{\text{classes}} \cdot {n}_{x}} \tag{2}
$$

where $x$ is the class, ${n}_{\text{samples}}$ is the total number of samples, ${n}_{\text{classes}}$ is the number of classes, and ${n}_{x}$ is the number of samples in class $x$. The negative class, i.e. samples not to be given any label by the model, was given a weight of 1.

Table 6 shows the micro and macro recall, precision, and F1 scores on the validation set when selecting the best iteration of the model based on the micro or macro F1 score, with unweighted or weighted loss.

| Evaluation metric | Loss | Micro R | Micro P | Micro F1 | Macro R | Macro P | Macro F1 |
|---|---|---|---|---|---|---|---|
| Micro F1 | Unweighted | 0.79 | 0.79 | 0.79 | 0.38 | 0.41 | 0.39 |
| Micro F1 | Weighted | 0.62 | 0.62 | 0.62 | 0.45 | 0.33 | 0.34 |
| Macro F1 | Unweighted | 0.77 | 0.77 | 0.77 | 0.42 | 0.42 | 0.41 |
| Macro F1 | Weighted | 0.60 | 0.60 | 0.60 | 0.51 | 0.42 | 0.44 |

Table 6: Micro and macro recall (R), precision (P), and F1 score on the validation set when selecting the best iteration of the model based on micro or macro F1 score, with unweighted or weighted loss.

Figure 6 shows that using the micro F1 to select the best iteration of the model resulted in some classes being practically excluded during classification. Using the macro F1 to select the best model iteration and training with a weighted loss gave the most equal performance on all classes.

![019640f0-5e15-7636-abed-e6af67e93aa3_10_385_642_854_897_0.jpg](images/019640f0-5e15-7636-abed-e6af67e93aa3_10_385_642_854_897_0.jpg)

Figure 6: Confusion matrices showing the performance of the models chosen based on (A) micro F1, (B) macro F1, (C) micro F1 trained with weighted loss, and (D) macro F1 trained with weighted loss.
## C Hyperparameter search

Table 7 shows the results of the hyperparameter search.

| Model | Classification head hidden layers | Validation exact F1 % |
|---|---|---|
| Clinical event | 2x 75 | 58.49 |
| Clinical event | 2x 150 | 59.82 |
| Clinical event | 2x 300 | 60.68 |
| Clinical event | 2x 450 | 61.34 |
| Clinical event | 2x 600 | 60.91 |
| Attribute | None | 48.01 |
| Attribute | 2x 50 | 43.20 |
| Attribute | 2x 75 | 43.85 |
| Attribute | 2x 150 | 44.10 |
| Attribute | 2x 300 | 44.32 |
| Relation | None | 66.15 |
| Relation | 2x 75 | 68.39 |
| Relation | 2x 150 | 68.85 |
| Relation | 2x 300 | 67.39 |

Table 7: Results of the hyperparameter search.
While this information could be used to guide diagnostic decision making and treatment + +048 plans, the unstructured format makes it infeasible to fully exploit in clinical practice and research. + +Natural language processing (NLP) algorithms could be used to transform the unstructured narrative text of the EHR into structured information + +053 and give medical doctors (MD) a fast overview of + +063 + +064 + +065 + +066 + +067 + +068 + +even a medical history spanning multiple years. 069 + +NLP models' ability to process and extract infor- 070 mation from written text keeps improving with + +benchmark-breaking models being published on 072 a regular basis. For example, transformer-based + +models such as GPT-3 (Brown et al., 2020), BERT 075 (Devlin et al., 2019), and ELECTRA (Clark et al., + +2020) have recently shown promising results for 077 many NLP tasks, e.g. named entity recognition and relation extraction (NER). In NER, models + +are trained to tag words with predefined entities 080 and find the relations between them. In clinical + +NER, entities such as diseases, treatments, drugs, 082 and tests have been extracted automatically from EHRs. However, many of the developed datasets + +are only in English and for specific clinical spe- 085 cialities or note types (Uzuner et al., 2007, 2010; + +Bethard et al., 2016). 087 + +This paper describes the methodology for developing the first Danish clinical NER dataset. + +The dataset consists of text paragraphs from Dan- 090 ish EHRs spanning multiple departments and note + +types. 092 + +First, the paper describes the clinical dataset, the strategy for choosing entities tailored to extract + +important information from EHRs, and the anno- 095 tation scheme. Next, we train a transformer-based + +architecture on the developed NER dataset. 097 098 + +§ 2 METHODS + +This section describes the data, annotation + +scheme, and model used for Danish clinical NER. 102 + +§ 2.1 DATA + +We extracted 11,607 paragraphs with a length between 11 and 75 words from EHRs from Odense + +University Hospital in Denmark. Paragraphs were 107 sampled randomly from different EHR note types across every department of the hospital to ensure the data distribution would resemble that of EHRs: ${46}\%$ were from clinical contacts, ${13}\%$ primary journals, 10% care data, 3% epicrises, 3% ambulatory care contacts, $2\%$ surgical notes, $2\%$ emergency room journals, and ${20}\%$ were from 55 different minor EHR note types. Paragraphs were lowercased and anonymised by two of the authors. + +max width= + +Clinical event Description + +1-2 +Disease A disorder of structure or function, especially one that has a known cause and a distinctive group of symptoms, signs, or anatomical changes. Examples include cancer, influenza, and narcolepsy. + +1-2 +$\mathbf{{Symptom}}$ A symptom is a physical or mental feature which is regarded as indicating a condition of disease, particularly such a feature that is apparent to the patient. We include abnormal findings, which the MD makes when examining the patient objectively, as these are sometimes coinciding with symptoms-e.g. bruises. Examples include headache, stomach ache, and pain. + +1-2 +Diagnostic Any tool or method concerned with the diagnosis of illnesses or other problems. Includes measurements and tests. Examples include CT scans, blood samples, and temperatures. + +1-2 +Treatment A treatment is any medical care given to a patient for an illness or injury. Examples include medication, plaster, and rehabilitation. 
+ +1-2 +$\mathbf{{Anatomy}}$ Any part of human anatomy. Includes body fluids and excrements. Examples include arms, organs, and blood. + +1-2 +Result All results of diagnostics that do not carry any meaning without being coupled to the diagnostic. Examples include numbers that indicate length, temperature, or volumes. Diseases or symptoms found by diagnostics are annotated as such, e.g. a tumour found by a CT scan. + +1-2 + +Table 1: Description of clinical events. Descriptions were inspired by the Oxford English Dictionary. + +§ 2.2 ANNOTATION + +§ 2.2.1 ANNOTATION SCHEME + +Two MDs with expert clinical domain knowledge developed the annotation scheme through an iterative process of making annotation rules and testing them. + +Annotation rules were made to extract clinically relevant information from the medical history. Focus was for the rules to be as complete as possible to capture all important information about the medical history while still being simple to use for the annotators. + +We extracted three types of information: clinical events, the attributes of the clinical events, and relations between the clinical events. + +Clinical events were: diseases; symptoms, including abnormal findings; diagnostics; treatments; anatomies including body fluids and excrements; and results. Symptoms and abnormal findings were joined in one as they sometimes coincided. Normal findings were not included as there were so many that they would cloud the visualisation of the history. Table 1 shows all clini- + +max width= + +Attributes Description + +1-2 +$\mathbf{{Prior}}$ Entities that occurred in prior admissions or in the distant past. Includes treatments that are being stopped at that point in time. + +1-2 +Current Entities that occur in the present. Includes prescribed medicine. + +1-2 +Future Entities that occur or might occur in the future-e.g. the risk of skin cancer, or ordering diagnostics for a later day. + +1-2 +Doubt Any entity that is not confirmed. Includes any treatments that might need to be started in the future. + +1-2 +Negation Entities such as diseases or symptoms that are mentioned as not being present. + +1-2 +Non-patient Entities that are not related to the patient in question. One example is the disease history of the patient's relatives. + +1-2 + +Table 2: Description of attributes. + +162 + +163 + +168 cal events and their descriptions as defined by the medical experts. + +Clinical events were further described by their attributes. Attributes were: prior; current; future; doubt; negation; and non-patient. All clinical events could take one of the six attributes except anatomies and results. Anatomies did not take any attributes while results could only take a prior or current attribute. Table 2 shows all attributes and their descriptions. + +Clinical events could connect to each other in limited ways through one-way relations. Diseases, diagnostics, and symptoms could connect to anatomies through a "has location" relation. Diseases, symptoms, and anatomies could connect to treatments through a "is treated with" relation. Diagnostics could connect to results through a "has + +result" relation. 190 + +Figure 1 shows an overview of the clinical + +events, attributes, and relations. Appendix A 193 shows the full annotation guidelines with further + +details and explanations to the annotators. 195 + +§ 2.2.2 ANNOTATION PROCESS + +Six annotators were recruited for the task. Five were Master of Science in Medicine students and + +one was a MD. 
Figure 2 shows the process of annotator training. It included reading the annotation guide and an iterative process of annotating a learning set of 55 paragraphs (not included in the dataset) followed by error analysis, until a final test was made on a set of 98 gold paragraphs annotated by an expert MD. Paragraphs were annotated using the CLAMP software (Soysal et al., 2017). We report the micro F1 of each annotator on the gold set.

Figure 2: Annotator training process. Figure inspired by Sun et al. (2013).

Figure 3 shows an example of an annotated paragraph.

Figure 3: Example of an annotated paragraph. % signifies that no attribute could be assigned to the clinical event per the annotation scheme.

Figure 1: (A) Clinical events and relations between them. Symptoms include abnormal findings. Anatomies include body fluids and excrements. Diagnostics include measurements and tests. Blue: "is treated with". Orange: "has location". Grey: "has result". (B) Attributes. Anatomy (dashed lines) takes no attributes. Other clinical events must take one attribute. Results only take prior or current attributes.

§ 2.3 ENTITY AND RELATION EXTRACTION MODEL

This section describes the architecture of the Princeton University Relation Extraction system (PURE) (Zhong and Chen, 2021), which we used and adapted for Danish clinical NER. It further describes the dataset used and the training of the models.

§ 2.3.1 MODEL ARCHITECTURE

PURE is a NER deep learning model based on a transformer structure. The model has separate entity and relation extraction parts.

For entity extraction, the model takes as input all possible text spans up to a maximum length. A transformer extracts contextual word embeddings for the start and end token of each span. These are concatenated with a learned span width embedding and classified by a feedforward network.

For relation extraction, the text for each candidate pair of entities is passed through a transformer with inserted entity start and end marker tokens for the subject and object entity, which also indicate the entity type. The concatenation of the start marker tokens for the candidate subject and object entities is classified by a feedforward neural network.

Figure 4: (A) Classification of clinical events from start and end tokens of a span. Span width embedding not depicted. (B) Classification of an attribute using clinical event marker tokens. (C) Classification of a relation using subject/object and clinical event marker tokens. Figure inspired by Zhong and Chen (2021).
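To make the span-scoring step concrete, here is a minimal PyTorch sketch of the entity part just described. All names and dimensions are our assumptions (a hidden size of 256 is consistent with the ~13M-parameter, 12-layer, 4-head ELECTRA described below); this is not the authors' or PURE's actual code.

```python
import torch
import torch.nn as nn

class SpanClassifier(nn.Module):
    """PURE-style span scoring: contextual embeddings of a span's start and
    end tokens are concatenated with a learned span-width embedding and
    scored by a feed-forward classification head."""

    def __init__(self, hidden=256, max_width=8, n_labels=7, width_dim=32):
        super().__init__()
        self.width_emb = nn.Embedding(max_width + 1, width_dim)
        self.ffn = nn.Sequential(          # two hidden layers of size 450,
            nn.Linear(2 * hidden + width_dim, 450),  # as in the best model
            nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(450, 450),
            nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(450, n_labels),      # 6 event types + negative class
        )

    def forward(self, token_reprs, spans):
        # token_reprs: (seq_len, hidden) transformer outputs;
        # spans: list of (start, end) token indices, end inclusive.
        feats = []
        for start, end in spans:
            width = self.width_emb(torch.tensor(end - start))
            feats.append(torch.cat([token_reprs[start], token_reprs[end], width]))
        return self.ffn(torch.stack(feats))  # (n_spans, n_labels) logits

reprs = torch.randn(20, 256)                    # stand-in encoder output
logits = SpanClassifier()(reprs, [(2, 4), (5, 5)])  # shape: (2, 7)
```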
We used PURE's entity extraction approach for clinical events and its relation extraction approach for relations between clinical events.

For attributes, we used our own approach adapted from the PURE relation extraction approach: we inserted clinical event start and end marker tokens, passed all tokens through a transformer, concatenated the start and end marker tokens, and classified the attribute using a feedforward network. The marker tokens were used for classification instead of the word(s) forming the clinical event to guide the model to look more at the context than at the specific word, the context being the important factor in attribute classification. Additionally, enriching the input with the type of the clinical event could guide the model if attributes were described differently for different clinical events.

Figure 4 shows the three types of extraction tasks.

§ 2.3.2 DATASETS

Table 3 shows the number of clinical events, attributes, and relations by type in the train, validation, and test sets. The dataset had a total of 11,607 paragraphs, each containing a varying number of clinical events, attributes, and relations. On average, each paragraph contained 4.7 clinical events, 3.6 attributes, and 1.3 relations. We split the paragraphs into train, validation, and test sets at an approximate 80%-10%-10% ratio for each type of clinical event, attribute, and relation. The sets were unbalanced with respect to the type of entity or relation; e.g., the attributes training set contained 23,217 current but only 480 non-patient attributes. All datasets were in the JSON format used by PURE (see Zhong and Chen (2021)).

§ 2.3.3 TRAINING

When training the clinical event extraction model, we used a Danish clinical ELECTRA pretrained on the narrative text of 299,718 EHRs from Odense University Hospital as the transformer base (Pedersen et al., 2022). The model had ~13M parameters and consisted of 12 transformer layers with 4 attention heads. We used a dropout of 0.1 after the last ELECTRA hidden layer output. We tested classification heads with two hidden layers of varying size, each followed by a dropout of 0.2 and a ReLU activation function. We used a maximum span length of 8 and a training batch size of 32. We trained for 100 epochs using the AdamW optimizer with learning rate 1e-5 for the transformer layers and 1e-4 for the classification head, and a warm-up proportion of 0.1.

When training each of the models for extracting attributes and relations, we used the same transformer base with a normalisation layer and a dropout of 0.1 after the concatenation of tokens. We tested classification heads with two hidden layers of varying size, each followed by a dropout of 0.2 and a ReLU activation function. We further tested a classification head consisting of only a single classification layer. We used a training batch size of 32 and a maximum sequence length of 128. We trained for 20 epochs using the AdamW optimizer with learning rate 2e-5 and a warm-up proportion of 0.1.

We modified the training method of PURE to guide the models towards equal performance on all classes. We used a weighted loss function to counteract the unbalanced dataset (experiment in Appendix B).
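This weighting can be made concrete with scikit-learn's "balanced" heuristic, which implements the formula given next as Equation 1; the label array in the sketch is hypothetical.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical attribute labels; "balanced" computes
# w_x = n_samples / (n_classes * n_x) for each class x.
y = np.array(["current", "current", "current", "prior", "negation"])
classes = np.unique(y)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y)
print(dict(zip(classes, weights)))
# -> {'current': 0.556, 'negation': 1.667, 'prior': 1.667}
```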
Class weights were calculated for the training of each model using the default formula in Scikit-learn (Pedregosa et al., 2011):

$$
w_{x} = \frac{n_{\text{samples}}}{n_{\text{classes}} \cdot n_{x}} \tag{1}
$$

where $x$ is the class, $n_{\text{samples}}$ is the total number of samples, $n_{\text{classes}}$ is the number of classes, and $n_{x}$ is the number of samples in class $x$. The negative class, i.e. samples not to be given any label by the model, was given a weight of 1.

To further enforce equal performance on all classes, we chose the best model for each of the clinical event, attribute, and relation extraction tasks as the model iteration with the best macro F1 on the validation set, rather than the micro F1 standard of PURE (experiment in Appendix B). The negative class was excluded when calculating the F1. We only trained the attribute and relation models to make classifications that were allowed for the connected clinical events according to the annotation scheme. Appendix C shows the results of the hyperparameter search. We report the micro and macro recall, precision, and F1 of the best models on the test set.

§ 3 RESULTS

This section presents the agreement of the annotators on the gold set and the results of the Danish clinical NER models.

§ 3.1 ANNOTATION

Table 4 shows the annotators' micro F1 performance on the gold set. For clinical events, it ranged from 83.71% to 91.24% (average 85.62%) for overlapping matches, and from 74.12% to 85.15% (average 77.67%) for exact matches. For attributes, it ranged from 79.21% to 86.19% (average 81.71%), and for relations from 71.28% to 90.06% (average 77.79%).

§ 3.2 ENTITY AND RELATION EXTRACTION MODEL

The models with the best validation performance in the hyperparameter search were:

* A clinical event extraction model with two hidden layers of size 450 in the classification head.

* An attribute extraction model with a single classification layer.

* A relation extraction model with two hidden layers of size 150 in the classification head.
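Throughout, the selection criterion was the macro F1 of Section 2.3.3 with the negative class excluded; the following minimal sketch shows that computation with hypothetical label arrays.

```python
from sklearn.metrics import f1_score

def selection_score(y_true, y_pred, labels):
    """Macro F1 over the positive classes only: passing `labels`
    excludes the negative class 'O' from the average."""
    return f1_score(y_true, y_pred, labels=labels, average="macro")

y_true = ["Disease", "O", "Symptom", "Symptom", "O"]
y_pred = ["Disease", "O", "Symptom", "Disease", "O"]
print(selection_score(y_true, y_pred, labels=["Disease", "Symptom"]))  # 0.667
```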
| | Train (% of row total) | Validation (% of row total) | Test (% of row total) | Total (% of column total) |
| --- | --- | --- | --- | --- |
| Paragraphs | 9,687 (83%) | 960 (8%) | 960 (8%) | 11,607 (100%) |
| **Clinical events** | | | | |
| Diseases | 2,033 (78%) | 295 (11%) | 272 (10%) | 2,600 (5%) |
| Symptoms | 11,937 (80%) | 1,455 (10%) | 1,571 (10%) | 14,963 (27%) |
| Diagnostics | 8,921 (80%) | 1,095 (10%) | 1,194 (11%) | 11,210 (21%) |
| Treatments | 6,918 (79%) | 911 (10%) | 882 (10%) | 8,711 (16%) |
| Anatomies | 10,172 (80%) | 1,227 (10%) | 1,278 (10%) | 12,677 (23%) |
| Results | 3,522 (79%) | 473 (11%) | 475 (11%) | 4,470 (8%) |
| Total | 43,503 (80%) | 5,456 (10%) | 5,672 (10%) | 54,631 (100%) |
| **Attributes** | | | | |
| Prior | 2,028 (80%) | 237 (9%) | 283 (11%) | 2,548 (6%) |
| Current | 23,217 (79%) | 3,021 (10%) | 3,109 (11%) | 29,347 (70%) |
| Future | 1,237 (79%) | 161 (10%) | 160 (10%) | 1,558 (4%) |
| Doubt | 2,479 (82%) | 263 (9%) | 289 (10%) | 3,031 (7%) |
| Negation | 3,890 (80%) | 496 (10%) | 500 (10%) | 4,886 (12%) |
| Non-patient | 480 (82%) | 51 (9%) | 53 (9%) | 584 (1%) |
| Total | 33,331 (79%) | 4,229 (10%) | 4,394 (10%) | 41,954 (100%) |
| **Relations** | | | | |
| is treated with | 1,485 (80%) | 175 (9%) | 197 (11%) | 1,857 (13%) |
| has location | 6,501 (80%) | 779 (10%) | 823 (10%) | 8,103 (55%) |
| has result | 3,652 (79%) | 499 (11%) | 493 (11%) | 4,644 (32%) |
| Total | 11,638 (80%) | 1,453 (10%) | 1,513 (10%) | 14,604 (100%) |

Table 3: Composition of the train, validation, and test sets by type of clinical event, attribute, and relation.

| Annotator | A | B | C | D | E | F |
| --- | --- | --- | --- | --- | --- | --- |
| **Overlap match, micro F1 (%)** | | | | | | |
| Clinical event | 91.24 | 84.22 | 84.41 | 85.71 | 84.43 | 83.71 |
| Attribute | 86.19 | 83.06 | 79.21 | 81.29 | 79.75 | 80.75 |
| Relation | 90.06 | 76.97 | 75.60 | 77.01 | 71.28 | 75.84 |
| **Exact match, micro F1 (%)** | | | | | | |
| Clinical event | 85.15 | 76.08 | 76.29 | 78.69 | 74.12 | 75.71 |

Table 4: The anonymised annotators' performance on the gold set. Exact match: the exact tokens annotated in the gold set, with the same label. Overlap match: at least one token overlapping with a gold annotation of the same label. Only overlap-match F1 is calculated for attributes and relations, as evaluating an exact match would propagate the potential error in the span of the clinical event to which the attribute or relation is connected.

| | Micro R% | Micro P% | Micro F1% | Macro R% | Macro P% | Macro F1% |
| --- | --- | --- | --- | --- | --- | --- |
| **Overlap match** | | | | | | |
| Clinical events | 66.29 | 77.31 | 71.38 | 64.88 | 72.60 | 68.20 |
| **Exact match** | | | | | | |
| Clinical events | 60.97 | 65.64 | 63.22 | 59.84 | 61.30 | 60.05 |
| Attributes | 66.04 | 66.04 | 66.04 | 51.60 | 42.64 | 44.85 |
| Relations | 75.88 | 72.66 | 74.23 | 74.74 | 67.85 | 70.64 |

Table 5: Performance of the best clinical event, attribute, and relation extraction models on the test set. Attributes and relations are only reported with an exact match, as the models do not consider the span of the clinical event from which the attribute or relation is classified. R: recall. P: precision.

Table 5 shows the performance of the best models on the test set.
Clinical events were extracted with an exact micro F1 of 63.22% and macro F1 of 60.05%, attributes with micro F1 66.04% and macro F1 44.85%, and relations with micro F1 74.23% and macro F1 70.64%. The negative class was excluded when calculating the recall, precision, and F1 scores.

Figure 5 shows the confusion matrices of performance on clinical events, attributes, and relations. The confusion matrices include the clinical events and relations that were not extracted or were falsely extracted by the model ('O').

Figure 5: Confusion matrices of performance on (A) clinical events, (B) attributes, and (C) relations. 'O' counts the clinical events and relations that were not extracted or were falsely extracted by the model.

The model for clinical event extraction performed best on anatomies (69%) and worst on results (53%). 1,568 spans were falsely extracted as a clinical event, with symptoms being the most frequent (21%). The model for attribute extraction performed best on negations (84%) and worst on non-patient (23%). The model for relation extraction performed best on "has result" (93%) and worst on "is treated with" (62%). 432 false relations were extracted, of which "has location" was the most frequent misclassification (45%).

§ 4 DISCUSSION AND LIMITATIONS

This paper presented a methodology for developing a dataset for Danish clinical NER. It presented an annotation scheme for the annotation of all clinical events, their attributes, and relations that are relevant for the medical history. The dataset included text paragraphs from Danish EHRs spanning multiple departments and note types.

We trained and adapted PURE NER deep learning models to extract clinical events (overlap match macro F1 68.20%; exact match macro F1 60.05%), attributes of clinical events (macro F1 44.85%), and relations between clinical events (macro F1 70.64%). The results are promising for Danish clinical NER but leave room for improvement. A discussion of possible improvements to the methodology, limitations, and future work is provided below.

The clinical event extraction model had similar performance on all classes, with accuracies between 53% (results) and 69% (anatomies). There was little contamination between classes, as most errors were caused by failure to extract or by false extraction of a clinical event. There was some contamination between symptoms and diseases, with 12% of diseases being classified as symptoms and 5% of symptoms being classified as diseases.
This supports claims by the annotators that diseases and symptoms are in some cases difficult to differentiate, and that extra attention must be given to differentiating them in the annotation guidelines.

The attribute extraction model had large differences in performance, with accuracies between 23% (non-patient) and 84% (negation). There were more misclassifications of the non-patient attribute as doubt (40%) than correct classifications. The future and doubt attributes had significant contamination between them, with 25% and 11% misclassifications as the other class, respectively. The many misclassifications between non-patient and doubt attributes, and especially between future and doubt attributes, could indicate that the model would improve if the non-patient, doubt, and future attributes were merged into a single class of uncertain attributes. This would most likely not significantly harm the usefulness of the model to MDs.

The fact that more prior attributes were misclassified as current (41%) than correctly classified (36%) likewise indicates that these two attributes could be merged into a single class of clinical events that occurred. This would, however, decrease the usefulness of the model, as it is important for MDs reviewing the medical history to know whether a clinical event is prior or current.

The relation model extracted 93% of the "has result" relations, and 62% and 69% of the "is treated with" and "has location" relations, respectively. The differences are likely caused by the fact that the "has result" relation only connects diagnostics to results, while the two other relations each cover three different one-way relationships.

In this paper, we only explored one type of NER model and tested a limited set of architectures and hyperparameters. Future work could include testing other architectures and enriching the model input with more information, e.g. the output of a text parser, which could help differentiate attributes dealing with the time aspect. The six annotators had an average micro F1 (overlap match) of 85.62%, 81.71%, and 77.79% for clinical events, attributes, and relations, respectively. Merging certain attributes and placing more emphasis on the differences between symptoms and diseases could increase these scores.

The Danish clinical NER dataset is not made publicly available, as it contains sensitive information. We advise interested researchers to contact us regarding sharing possibilities.

§ 5 CONCLUSIONS

This paper presented the methodology and annotation scheme for developing the first Danish clinical NER dataset. The corpus consists of 11,607 paragraphs annotated for six entity types, six attributes, and three relations. The corpus was used to fine-tune language models, which showed promising results for classifying the entities, attributes, and relations of the dataset.
+ +664 \ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..ca18d6bc1111d1aa0712d1f351d855f080d2c419 --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,923 @@ +000 054 + +# Slaapte or Sliep? Extending Neural-Network Simulations of English Past Tense Learning to Dutch and German + +001 055 + +002 056 + +003 057 + +004 058 + +005 059 + +006 060 + +## Abstract + +This work studies the plausibility of sequence-to-sequence neural networks as models of morphological acquisition by + +016 humans. We replicate the findings of Kirov and Cotterell (2018) on the well- + +018 known challenge of the English past tense and examine their generalizability to two related but morphologically richer lan- + +021 guages, namely Dutch and German. Using a new dataset of English/Dutch/German + +023 (ir)regular verb forms, we show that the major findings of Kirov and Cotterell + +026 (2018) hold for all three languages, includ- ing the observation of over-regularization + +028 errors and micro U-shape learning trajectories. At the same time, we observe troublesome cases of non human-like errors + +031 similar to those reported by recent followup studies with different languages or neu- + +033 ral architectures. Finally, we study the possibility of switching to orthographic input in the absence of pronunciation in- + +036 formation and show this can have a nonnegligible impact on the simulation re- + +038 sults, with possibly misleading findings. + +## 1 Introduction + +The plausibility of neural network-based or con-nectionist models in simulating psycholinguistic behaviours has been attracting considerable attention since Rumelhart and McClelland (1986) first modeled the past-tense acquisition with an early example of sequence-to-sequence network. Their experiment received harsh criticism (e.g., Pinker and Prince, 1988) but also inspired cognitive scientists with alternatives (e.g., Kirov and Cotterell, 2018; Plunkett and Juola, 1999; Taat-gen and Anderson, 2002). Much more recently, + +053 Kirov and Cotterell (2018) replicated Rumelhart + +061 + +062 + +063 + +064 + +and McClelland (1986)'s simulations using a mod- 065 ern encoder-decoder neural architecture developed + +for the task of morphological paradigm comple- 067 tion. Their improved results resolved much of the original criticisms by Pinker and Prince (1988). + +The main purpose of this paper is to study the 070 generalizability of Kirov and Cotterell (2018)'s + +findings beyond the case of English. Specifically, 072 we consider two languages that are genetically + +related to English, but morphologically richer - 075 namely, Dutch and German. In these languages + +too, past tense inflection is divided into regular and 077 irregular verbs, but with different proportions and different inflectional patterns than English. More- + +over, German and Dutch are characterized by a 080 much more transparent orthography than English + +(Van den Bosch et al., 1994; Marjou, 2021), which 082 allows us to study the usability of grapheme-based input for simulating past tense acquisition patterns + +when pronunciation information may not avail- 085 able. 
Concretely, we aim to answer the following research questions:

1. Can the model applied by Kirov and Cotterell (2018) to English also simulate the past tense acquisition process in languages with more complex morphological inflection, such as Dutch and German?

2. Given the more predictable grapheme-to-phoneme correspondence, i.e., orthographic transparency (Marjou, 2021), in these two languages, will the model perform similarly if the written forms of verbs are used for training instead of the phonetic ones?

To answer these two questions, we build and release a new past-tense inflection dataset of English, Dutch, and German, covering both grapheme and phoneme features (Section 3). ${}^{1}$ We then replicate the single-task learning experiments of Kirov and Cotterell (2018) (Section 4) and extend them to our multilingual dataset, using both phoneme- and grapheme-based input for comparison (Section 5).

---

${}^{1}$ All code and data are available at https://anonynmous

---

Our findings reconfirm the potential and limitations of using neural networks for the simulation of human language learning patterns. Our model shows human-like behavior in learning the past tenses of verbs, such as the micro U-shape coined by Plunkett et al. (1991) and over-regularization errors in all the examined languages; however, non human-like errors are also reported. We also find that learning irregular past tense forms is considerably easier in Dutch and German than in English. Finally, we observe that higher orthographic transparency indeed leads to more consistent learning results when a model is trained with grapheme vs. phoneme input.

## 2 Background

**Past tense debate** The acquisition of the verbal past tense in English, particularly the over-regularization of irregular verbs in the process of learning (Marcus et al., 1992), has been serving as a testing ground for different hypotheses in language modelling for decades. A much debated question is whether the past tense of (ir)regular verbs is learnt by rules and memories (e.g., Plaut and Gonnerman, 2000; Seidenberg and Gonnerman, 2000; Marcus et al., 1995; Albright and Hayes, 2003; Pinker and Ullman, 2002), by analogy (e.g., Ramscar, 2002; Albright and Hayes, 2003) or by a dual mechanism (Pinker and Prince, 1988; Taatgen and Anderson, 2002).

Marcus et al. (1995) posited the necessity of mental rules in learning German irregular verbs. By contrast, Ernestus and Baayen's (2004) and Hahn and Nakisa's (2000) studies on Dutch and German, respectively, provided evidence in favour of connectionist and analogical approaches: they showed that humans tend to choose wrong past tense suffixes for regular verbs whose phonological structure is similar to that of irregular ones.

**Recent connectionist revival** The recent development of deep learning methods in computational linguistics has led to a renewed interest in connectionist approaches to modelling language acquisition and processing by humans (e.g., Blything et al., 2018; Kádár et al., 2017; Pater, 2019; Corkery et al., 2019; McCurdy et al., 2020). Last year, modelling morphological acquisition trajectories was adopted as one of the shared tasks of SIGMORPHON-UniMorph (Kodner and Khalifa, 2022).
The three submitted neural systems (Pimentel et al., 2021; Kakolu Ramarao et al., 2022; Elsner and Court, 2022) exhibited over-regularization and developmental regression, but non human-like behaviours were also observed.

Some recent studies have revealed a poor alignment between the way humans and neural encoder-decoder models generalize to new words (wug test) in the case of the English verb past tense (Corkery et al., 2019) and German plural nouns (McCurdy et al., 2020). Dankers et al. (2021) observed cognitively plausible representations in a recurrent neural network (RNN) trained to inflect German plural nouns but also found evidence of problematic 'shortcut' learning. Wiemerslage et al. (2022) observed that Transformers resemble humans in learning the morphological inflection of English and German in wug tests, but they also pointed out the divergence of the model in German production. However, computational simulations have succeeded in replicating the U-shaped learning curve during the acquisition of the past tense (Kirov and Cotterell, 2018; Plunkett and Marchman, 2020). Additionally, further probing experiments have suggested that neural models do learn linguistic representations (Goodwin et al., 2020; Hupkes et al., 2018; Ravichander et al., 2020). Our research continues this exploration of the cognitive plausibility of neural networks in modeling the learning of language inflection.

**Recurrent encoder-decoder inflection model** In this work, we adopt the model of Kirov and Cotterell (2018), henceforth referred to as K&C. This model is based on the encoder-decoder architecture proposed by Bahdanau et al. (2014), with input representation and hyper-parameters taken from Kann and Schütze (2016). The architecture consists of a bidirectional LSTM (BiLSTM) encoder augmented with an attention mechanism and a unidirectional LSTM decoder. The task of the encoder is to map each phonetic (or orthographic) symbol from the input string to a unique embedding and then process that embedding to get a context-sensitive representation of that symbol. The decoder reads the context vector from the final cell of the encoder and generates the output phoneme/grapheme sequence using two hidden LSTM layers. For more details on the model, see Bahdanau et al. (2014); Kann and Schütze (2016); Kirov and Cotterell (2018).

## 3 Datasets

To replicate the results published by K&C, we employ their dataset based on CELEX (Baayen et al., 1993). ${}^{2}$ To extend the experiments to Dutch and German and compare the results to English, we build a new dataset containing past tense forms in all three languages.

### 3.1 K&C English Dataset

K&C's CELEX-based dataset contains 4,039 English verb types, comprising 3,871 regular verbs and 168 irregular verbs. Each verb is associated with an infinitive form and a past tense form, both in the International Phonetic Alphabet (IPA). Moreover, each verb is marked as regular or irregular (Albright and Hayes, 2003).

Note that there are label errors in their dataset. For example, dive-dived, dream-dreamed, and light-lighted are marked as irregular. This is possibly because those verbs have two past tense forms and the other form does not follow the regular inflection (dive-dove, dream-dreamt, light-lit).
However, as the past tense of those verbs in the original dataset aligns with the regular inflection rule of English, we treat those verbs as regular and manually correct their labels.

### 3.2 Multilingual Unimorph-based Dataset

We use the morphological annotation dataset Unimorph (McCarthy et al., 2020) as a source of English, Dutch, and German word forms to enable a fair comparison in our multilingual experiments. In this lexicon, each entry consists of the infinitive of the verb, the conjugation, and a tag containing the part-of-speech and inflectional information. An important adjustment has to be made here because English has only two forms for the present tense (I/you/we/they vs. he/she/it) and only one for the past. By contrast, Dutch and German distinguish more persons in both present and past tense. To address this, we include for each lemma the first/second/third person singular present forms and the plural form, together with their respective past forms, each as a separate entry (see examples in Figure 1).
| present (g) | past (g) | present (p) | past (p) | reg |
| --- | --- | --- | --- | --- |
| accounts | accounted | @k6nts | @k6ntId | reg |
| account | accounted | @k6nt | @k6ntId | reg |
| feels | felt | filz | fElt | irreg |
| feel | felt | fil | fElt | irreg |

(a) English

| present (g) | past (g) | present (p) | past (p) | reg |
| --- | --- | --- | --- | --- |
| slaap | sliep | slap | slip | irreg |
| slaapt | sliep | slapt | slip | irreg |
| slapen | sliepen | slap@ | slip@ | irreg |
| behoef | behoefde | b@huf | b@huvd@ | reg |
| behoeft | behoefde | b@huft | b@huvd@ | reg |
| behoeven | behoefden | b@huv@ | b@huvd@ | reg |

(b) Dutch

| present (g) | past (g) | present (p) | past (p) | reg |
| --- | --- | --- | --- | --- |
| berechne | berechnete | b@rExn@ | b@rExn@t@ | reg |
| berechnest | berechnetest | b@rExn@st | b@rExn@t@st | reg |
| berechnet | berechnete | b@rExn@t | b@rExn@t@ | reg |
| berechnen | berechneten | b@rExn@n | b@rExn@t@n | reg |
| fliehe | floh | flia | flo | irreg |
| fliehst | flohst | flist | flost | irreg |
| flieht | floh | flit | flo | irreg |
| fliehen | flohen | flian | flo@n | irreg |

(c) German

Figure 1: Excerpt of the newly introduced dataset of English, Dutch and German past tense forms, with grapheme (g) and phoneme (p) columns. Dutch verbs: slapen (to sleep); behoeven (to need). German: berechnen (to calculate); fliehen (to flee).

Specifically, we start by extracting from Unimorph a list of verb lemmas and their corresponding present and past tense forms. A different extraction script is used for each language because of the different number of forms and slightly different POS tags:

- English only has two present tense forms: one for the third person singular and one for the rest. Mostly, there is only one past tense form.

- Most verbs in Dutch have three present tense forms and two past tense forms.

- Most verbs in German have five present tense forms and four past tense forms.

Next, we tag each form as regular or irregular, based on a simple rule-based strategy (sketched in code after this list):

- English: if the past tense ends with '-ed', it is considered a regular verb.

- Dutch: if the singular past tense ends with '-de' or '-te', it is considered regular.

- German: if the singular past tense of the first or third person ends with '-te', it is considered regular.

---

${}^{2}$ Dataset, code and other experimental details are taken from https://github.com/ckirov/RevisitPinkerAndPrince

---
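The following minimal sketch spells out these tagging rules; the function and variable names are ours, and the English exception list anticipates the '-ed' irregulars discussed in Section 3.3.

```python
# Irregular English past tenses that nevertheless end in '-ed' (Section 3.3).
IRREGULAR_ED = {"bled", "bred", "led", "misled", "fled"}

def is_regular(lang, past_sg):
    """Rule-based regular/irregular tag from the singular past tense form."""
    if lang == "en":
        return (past_sg.endswith("ed")
                and past_sg not in IRREGULAR_ED
                and not past_sg.endswith("fed"))  # fed, breast-fed, ...
    if lang == "nl":
        return past_sg.endswith(("de", "te"))
    if lang == "de":
        return past_sg.endswith("te")
    raise ValueError(f"unsupported language: {lang}")

assert is_regular("en", "accounted") and not is_regular("en", "fled")
assert not is_regular("nl", "sliep") and is_regular("nl", "behoefde")
assert is_regular("de", "berechnete") and not is_regular("de", "floh")
```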
| Language | Type | Train count | Train % | Dev count | Dev % | Test count | Test % | Total | Total % |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| English | all | 4,879 | 79.9 | 611 | 10.0 | 614 | 10.1 | 6,104 | 100.0 |
| | regular | 4,601 | 75.4 | 529 | 8.7 | 520 | 8.5 | 5,650 | 92.6 |
| | irregular | 278 | 4.6 | 82 | 1.3 | 94 | 1.5 | 454 | 7.4 |
| Dutch | all | 4,896 | 80.1 | 612 | 10.0 | 607 | 9.9 | 6,115 | 100.0 |
| | regular | 4,383 | 71.7 | 550 | 9.0 | 542 | 8.9 | 5,475 | 89.6 |
| | irregular | 513 | 8.4 | 62 | 1.0 | 65 | 1.0 | 640 | 10.4 |
| German | all | 4,865 | 79.7 | 616 | 10.1 | 620 | 10.2 | 6,101 | 100.0 |
| | regular | 4,299 | 70.5 | 535 | 8.8 | 578 | 9.5 | 5,412 | 88.8 |
| | irregular | 566 | 9.2 | 81 | 1.3 | 42 | 0.7 | 689 | 11.2 |

Table 1: Distribution of the dataset into train, dev and test sets for each of the three languages, with counts of regular and irregular verb forms. Percentages are calculated over the total number of verb forms per language.

Finally, the IPA transcriptions of all word forms are retrieved from CELEX for all languages and added to the final dataset. As shown in Figure 1, the resulting dataset is in the same format as K&C's CELEX-based dataset.

**Data selection** The generated Dutch data only contains 6,106 verb forms, versus 11,489 and 6,975 in English and German, respectively. Therefore, to enable a fair comparison among languages, we need to downsample the larger datasets. However, randomly choosing 6K verb forms from the English and German lists may lead to a poor selection given the long tail of infrequent words. As a solution, we use the word form frequencies provided in the CELEX data and choose all words with a frequency of more than 1 per million, complemented with a random selection of less frequent words in order to get approximately 6,106 verb forms.

After shuffling, the word forms are split into a train set (80%), a development (dev) set (10%) and a test set (10%). The distribution of the data into the three sets and into regular/irregular verbs for each language is reported in Table 1.

### 3.3 Remarkable problems

A few problems occurred during data preparation. First, rule-based tagging of lemmas is not as trivial as it seems at first sight. For example, in English, not all past tenses ending with '-ed' are regular. Using the data of K&C, we added a few exceptions that are all irregular words ending with '-ed': bled, bred, led, misled, fled, and forms of fed (including breast-fed, force-fed and bottle-fed).

Also, in the original K&C experiment, the model should be able to predict the past tense based on what it learned from other verbs, not from other word forms. In morphologically richer languages, a lemma has more word forms, and data splitting becomes problematic. For instance, a model might have learned that work → worked and walks → walked; it might then predict that works → worked. In such a case, it is not possible to know whether the model made the right prediction based on similarities to other lemmas (walks) or to other forms of the same verb (work). To be as comparable as possible to the original setup of K&C, we put all forms of the same verb in the same data split (that is, either train, dev or test). As a result, if the model scores well, we know for sure that it cannot have made its predictions based on other forms of the same verb.

Another issue is that one present tense form normally corresponds to one past tense form. However, German poses two notable exceptions to this:

- The second person singular verb form ends with '-st' and the third person singular ends with '-t'. These forms coincide if a verb already ends with an 's', but there is still a difference between the forms in the past tense. For example, bremst is the present conjugation form of the verb bremsen (to brake) for the pronouns du (you), er (he) and even ihr (you, plural).

- Verbs ending in '-t' can be the third person singular or the second person plural informal.
For example, wundert is the present conjugation of the verb wundern (to wonder) for the pronouns ihr (you, plural) and er (he).

In the former case, the model should be able to output multiple solutions, since only context can make clear whether it is the second or the third person. However, this complicates the evaluation. As a solution, we exclude the third person form if it collides with the second person. As for the latter issue, we choose to remove all second person plural informal forms, since those are far less frequent than the third person singular forms.

## 4 Replication of K&C

Before moving to the main multilingual experiments, we replicate the original K&C experiments (single-task only).

### 4.1 Experimental Setup

For the replication, we employ K&C's CELEX-based dataset and keep the model architecture and hyper-parameters unchanged, using OpenNMT (Klein et al., 2017). ${}^{3}$ See more details in Appendix A. Following K&C, the model is trained on the IPA transcriptions.

We use word form-level accuracy to evaluate model performance. An important remark concerns data splitting: K&C did not release their specific data split, which makes it impossible to replicate their exact results. We therefore create our own splits following K&C's proportions (80/10/10% for train/dev/test). To obtain more reliable results, we train the model three times using different random seeds for initialization and report the averaged resulting accuracies.

To study the micro U-shape learning curve of irregular verbs, we save the model every 10 epochs and use these partially-trained models to predict the test set, comparing their prediction results.

### 4.2 Results

As shown in Table 2, the results on the training set are almost the same as reported in the original paper, which means our replication is largely successful. ${}^{4}$ We note that the accuracy for irregular verbs in the dev and test sets differs considerably from that of K&C (dev: 21.1% vs. 53.3%; test: 35.3% vs. 28.6%). Since K&C did not release their specific data split, replicating their exact results on the small portion of irregular verbs is not possible. Given that our results are averaged over three random seeds and reported on all three splits, we consider them more reliable, which suggests the model may perform worse at learning the past tense of irregular verbs than reported by K&C.
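Because the exact split matters so much for the irregular-verb figures, the lemma-grouped 80/10/10 splitting described in Sections 3.3 and 4.1 can be sketched as follows; the entry records and all names are hypothetical.

```python
import random

def lemma_grouped_split(entries, seed=123):
    """Assign every word form of one lemma to the same split (~80/10/10)."""
    lemmas = sorted({e["lemma"] for e in entries})
    random.Random(seed).shuffle(lemmas)
    n = len(lemmas)
    cut1, cut2 = int(0.8 * n), int(0.9 * n)
    fold = {lemma: "train" for lemma in lemmas[:cut1]}
    fold.update({lemma: "dev" for lemma in lemmas[cut1:cut2]})
    fold.update({lemma: "test" for lemma in lemmas[cut2:]})
    return [dict(e, split=fold[e["lemma"]]) for e in entries]

entries = [{"lemma": "slapen", "form": "slaapt", "past": "sliep"},
           {"lemma": "slapen", "form": "slapen", "past": "sliepen"}]
print(lemma_grouped_split(entries))  # both forms land in the same split
```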
| | all (train / dev / test) | regular (train / dev / test) | irregular (train / dev / test) |
| --- | --- | --- | --- |
| K&C | 99.8 / 97.4 / 95.1 | 99.9 / 99.2 / 98.9 | 97.6 / 53.3 / 28.6 |
| Ours | 99.9 / 95.3 / 96.5 | 99.9 / 98.4 / 99.2 | 98.4 / 21.1 / 35.3 |

Table 2: Mean accuracy (%) of our replication of K&C over 3 random seeds.

### 4.3 Discussion

We assume the gap between our results and K&C's has two causes: (i) the number of irregular verbs is much lower than that of regular ones, which makes the accuracy change dramatically even if only a few more or fewer verbs are predicted correctly than in the original experiments; (ii) we corrected the label errors mentioned above, so the number of irregular verbs became smaller than before. These small differences can have a large impact on the accuracy calculation, given that the dev and test sets only contain about 20 irregular verbs each. To test this hypothesis, we conduct 9-fold cross-validation ${}^{5}$ and find that the accuracy for irregular verbs varies across dev splits, ranging widely from 9% to 42%.

## 5 Multilingual Experiments

This section presents the results of our main experiments aimed at comparing Dutch and German past tense learning patterns to the English ones. It also presents the results of grapheme vs. phoneme sequence learning in all three languages. Because Dutch and German pronunciation is more predictable than English pronunciation, we expect the difference between grapheme and phoneme learning to be smaller in these languages.

---

${}^{3}$ However, as epoch-based training has been deprecated in the latest version of OpenNMT, we converted the number of epochs to train_steps based on the relationship between epochs and steps.

${}^{4}$ Our results are also very close to those of Corkery et al. (2019), who did a similar replication and reported the accuracy averaged over ten runs initialized with different random seeds, but only on the training set.

${}^{5}$ We keep the test set unchanged and validate across the train and dev sets. To make sure the dev set has a comparable number of verbs to the original set, we adopt 9-fold instead of 10-fold cross-validation.

---
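A minimal sketch of the 9-fold setup of footnote 5, assuming a hypothetical array standing in for the non-test verb forms (e.g., the 4,879 train plus 611 dev English forms of Table 1):

```python
import numpy as np
from sklearn.model_selection import KFold

traindev = np.arange(4879 + 611)  # placeholder records; the test set stays fixed
for train_idx, dev_idx in KFold(n_splits=9, shuffle=True, random_state=0).split(traindev):
    # Train on traindev[train_idx] and measure irregular-verb accuracy on
    # traindev[dev_idx]; Section 4.3 reports the resulting 9%-42% spread.
    pass
```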
| | all (train / dev / test) | regular (train / dev / test) | irregular (train / dev / test) |
| --- | --- | --- | --- |
| EN | 99.5 / 93.1 / 92.1 | 99.8 / 96.1 / 95.0 | 98.1 / 27.8 / 40.5 |
| NL | 98.9 / 88.4 / 88.4 | 99.2 / 91.4 / 92.2 | 96.5 / 62.4 / 57.9 |
| DE | 98.9 / 85.0 / 92.5 | 99.4 / 92.0 / 95.1 | 96.7 / 38.7 / 57.9 |

(a) Phoneme input
+ +
| | all (train / dev / test) | regular (train / dev / test) | irregular (train / dev / test) |
| --- | --- | --- | --- |
| EN | 99.1 / 93.6 / 93.8 | 99.8 / 98.2 / 98.1 | 89.0 / 11.1 / 28.1 |
| NL | 99.4 / 88.0 / 89.6 | 99.8 / 91.2 / 93.0 | 97.9 / 58.6 / 61.0 |
| DE | 98.4 / 86.4 / 93.6 | 99.1 / 93.5 / 95.7 | 93.9 / 39.5 / 65.9 |

(b) Grapheme input

Table 3: Past tense inflection accuracy (%) in English, Dutch, and German, averaged over 3 random seeds.
| epoch | English: hits | Dutch: bestijgt (mounts) | German: gilt (applies) |
| --- | --- | --- | --- |
| 10 | hItId / hitted | b@stKGd@ / besteeg | gIlt@ / galte |
| 20 | hItst / hit | b@stex / besteeg | gIlt@ / galt |
| 30 | hItId / hitted | b@stKGd@ / besteeg | g< / galt |
| 40 | hItId / hitted | b@stKGd@ / besteeg | g< / galt |
| 50 | hIt / hitted | b@stKGd@ / besteeg | g< / galt |
| 60 | hItst / hit | b@stex / besteeg | gIIt@ / gilte |
| 70 | hIt / hit | b@stex / bestijgde | g< / galt |
| 80 | hItId / hitted | b@stex / besteeg | g< / galt |
| 90 | hItId / hitted | b@stex / besteeg | g< / galt |
| 100 | hIt / hit | b@stex / besteeg | g< / galt |

Table 4: The oscillating development (micro U-shape) of single verbs in the three languages: with phoneme or grapheme input, the predicted past tense phonetic (left) or orthographic (right) forms keep changing as training proceeds, but the final predictions are correct at the last epoch.

For comparability, all experiments in this section use the newly introduced Unimorph-based dataset, which includes a similar number of training forms in all languages (cf. Table 1). The model architecture and the hyperparameter settings are the same as in the previous experiments. We again run each experiment three times with different random seeds and report the averaged results.

**Result overview** For the forms seen in training, the model is able to learn both regular and irregular past tense inflection with more than 95% accuracy (Table 3a), and with similar learning curves (Figure 2), which confirms and strengthens the main findings of K&C on two other languages.

Comparing Table 3a to 3b, we find that the overall trends are maintained when the model is trained on graphemes instead of phonemes (the original setup of K&C). However, a notable exception is observed: grapheme learning results in a much lower accuracy on English irregular verbs.

In the following sections, we discuss these results in more detail.

### 5.1 Past Tense Learning Results in English, Dutch, and German

![019640f9-a9c3-790f-a313-c96e4eeadd22_6_190_172_636_970_0.jpg](images/019640f9-a9c3-790f-a313-c96e4eeadd22_6_190_172_636_970_0.jpg)

Figure 2: Learning curves of the model on the German, English, and Dutch training sets (with random seed 123).

**Accuracy** Looking closer at the results across languages (Table 3a), we notice that inflecting unseen Dutch regular verbs is slightly harder than in German and English. This might be explained by the fact that in Dutch all voiced consonants become unvoiced at the end of a word; yet to predict whether the past tense suffix becomes '-de' (for voiced consonants) or '-te' (for unvoiced consonants), we still need the final consonant of the stem, which can be found within the lemma and, most of the time, in the spelling of the word form. Unfortunately, this information is absent from the pronunciation. For example, given the pair lAnt-lAnd@, one cannot know whether the past tense should be lAnd@ or lAnt@ before seeing the orthographic form land. We find that such errors account for about 50% (18/38) of all Dutch regular verb errors. This difference between voiced and unvoiced regular past tense endings only occurs in Dutch.

As for irregular verbs, we find a large difference across languages in the ability to generalize to new forms. Especially in English, while the model has almost perfectly learned to inflect seen verbs, it has a hard time predicting the form of new irregular verbs (dev: 27.8%, test: 40.5%). This effect is smaller in Dutch and German, suggesting the irregular inflection patterns in these languages are more predictable.
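Returning to the Dutch voicing issue above, the toy sketch below illustrates why the written form disambiguates the '-de'/'-te' choice where the pronunciation cannot. The rule is the standard Dutch "'t kofschip" mnemonic, simplified; it is an illustration, not part of our model.

```python
# 't kofschip' consonants: after these the regular past suffix is '-te';
# simplified to single letters (the digraph 'ch' is not handled).
VOICELESS = set("tkfspx")

def dutch_past_suffix(infinitive):
    """Pick '-de' or '-te' from the consonant ending the infinitive stem.
    Final devoicing makes this contrast inaudible in the present-tense
    pronunciation, so phoneme input (e.g. lAnt) underdetermines the past."""
    stem = infinitive[:-2] if infinitive.endswith("en") else infinitive
    return "te" if stem[-1] in VOICELESS else "de"

assert dutch_past_suffix("landen") == "de"    # landde; present sounds like lAnt
assert dutch_past_suffix("werken") == "te"    # werkte
assert dutch_past_suffix("behoeven") == "de"  # behoefde, despite the spelled 'f'
```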
Surprisingly, the model makes more mistakes when predicting the inflections of the irregular verbs in the German dev set than in the test set (dev: 38.7%, test: 57.9%). By inspecting the mistakes, we found that the model incorrectly treated many irregular verbs as regular ones because of their resemblance (high character overlap) to regular verbs. For instance, reitest-*reitetest/rittest (ride) is influenced by the regular conjugation of bereitest-bereitetest (prepare). We found that 23/81 irregular verbs in the dev set are very similar to regular verbs in the training set. Out of these, 8 irregular verbs are identical to regular ones except for a prefix (e.g., reitet (rides) vs. bereitet (prepares), and reitest (ride) vs. verbreitest (spread)), which could be highly confusing for a model that is only based on form, regardless of meaning. By contrast, no such overlap is found between the irregular verbs in the test set and the regular ones in the training set. This distributional discrepancy might explain the lower accuracy on the dev set. It echoes our other finding, discussed in the next section, that irregular verbs might be misled by regular verbs if they share representational similarity.

**Errors and learning trajectories** Going beyond overall accuracy, we inspect the learning trajectories of individual verbs in our dataset. We find that human-like over-regularization patterns similar to those observed by K&C in English also occur in Dutch and German. For example, in Dutch, after 40 epochs of training, the model changes verscheent to verscheen as the past tense of verschijnt (appears). However, after 50 epochs, the model again generates the wrong form verscheent. After 70 epochs, the correct result is again obtained. Similar patterns are observed for sink in English and streitet (argues) in German. All wrongly predicted irregular verbs are caused by over-regularization. In other words, no patterns like ated in English or lookte in Dutch are found, which is consistent with humans' learning behaviour (Pinker and Prince, 1988). More examples from English, Dutch and German are listed in Table 4.

Additionally, we find cases where the model generates an irregular form for a regular verb because of its resemblance to other (irregular) verbs. In Dutch, for example, the regular verb versier-versierde (decorate-decorated) gets incorrectly inflected as *versoor by resemblance to verbs like verlies-verloor (lose-lost). Similar errors also occur in German. For instance, the wrong prediction verfehle-*verfahl/verfehlte (miss-missed) might be misled by the pair befehlen-befahlen (order-ordered), and schweben-*schwoben/schwebten (float-floated) is possibly due to its resemblance to schieben-schoben (push-pushed). Interestingly, this type of error aligns with Ernestus and Baayen (2004)'s experiments with Dutch speakers: phonological similarity, rather than rule-based regularity, influences participants' judgments about the inflection of verbs.

That said, the model also displays error patterns that are not human-like, such as copying the present form or randomly removing phonemes (or letters) from it. Similar cases of non-plausible predictions were also observed at the SIGMORPHON shared task (Kodner and Khalifa, 2022), for instance forgive-*forgaved/forgave or seek-*sougk/sought. As also observed by Wiemerslage et al.
(2022), this kind of model prediction contrasts with the behaviour of human speakers, who mostly resort to generating a regular past tense when a verb is unknown.

### 5.2 Phoneme vs. Grapheme Input

Undoubtedly, using phoneme input is more principled than grapheme input when simulating human acquisition patterns. However, pronunciation information is not always available, which makes it harder to extend this kind of simulation beyond a small set of widely studied languages. Here, we investigate the usability of grapheme-based input for modeling past tense inflection. We expect German and Dutch to be a good use case for this, given their more transparent orthography compared to English (Marjou, 2021).

The results in Table 3 clearly show that switching to grapheme input for the English simulations is not principled, as it results in a slight increase of regular inflection accuracy (from 99.8/96.1/95.0% to 99.8/98.2/98.1% train/dev/test) as opposed to a large decrease of irregular inflection accuracy (from 98.1/27.8/40.5% to 89.0/11.1/28.1%). The latter effect is particularly marked, suggesting that non-transparent orthography may not be a uniform property of a language but may correlate with less regular word forms within the language. We leave this investigation to future work.

Using grapheme input in Dutch and German seems much safer (differences are overall small, with only a slight increase in almost all cases). Our observations seem to reflect the figures of Marjou (2021), who gives a much higher transparency score to Dutch and German than to English.

In sum, using graphemes to simulate human patterns of morphological acquisition is possible but should be done with caution and only in some languages. A good practice could be to first verify that the orthographic transparency of a language is high (Marjou (2021) presents results for 17 languages). When that is not possible, grapheme-based results should at least be validated against a small-scale pronunciation dataset.

## 6 Conclusions

In this work, we study the plausibility of using sequence-to-sequence neural networks for simulating human patterns of past tense acquisition. More specifically, we replicate the findings of Kirov and Cotterell (2018) and examine their generalizability beyond the specific case of English, using a new dataset of English/Dutch/German (ir)regular verb forms based on Unimorph (McCarthy et al., 2020).

We show that the main findings of K&C also largely hold for Dutch and German, including over-regularization errors and the oscillating (or micro U-shape) learning trajectory of individual verb forms across training epochs. At the same time, we also observe cases of non human-like errors, for instance when the model just keeps the present form unchanged or randomly removes phonemes from it. A notable difference among the studied languages concerns unseen English irregular verbs, which appeared to be much harder to inflect than the Dutch and German ones. We also observe that the orthographic transparency of a language influences and possibly confounds the model's learning performance: more transparent orthography contributes to more reliable and consistent simulation results, and in general this aspect should be seriously considered when setting up new benchmarks of morphological acquisition.
Future work could include the construction of a nonce word benchmark in Dutch and German to enable a multilingual evaluation of this task (Corkery et al., 2019), as well as an in-depth investigation of the different levels of irregular past inflection difficulty in our three languages.

Kirov and Cotterell (2018) provided very promising evidence for the use of modern neural networks to model human language acquisition patterns. Our work confirms the potential of this research direction, but also raises important issues and joins recent follow-up studies (Corkery et al., 2019; Dankers et al., 2021; Kodner and Khalifa, 2022; Wiemerslage et al., 2022) that have warned against over-optimistic conclusions.

## References

Adam Albright and Bruce Hayes. 2003. Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2):119-161.

R. Harald Baayen, Richard Piepenbrock, and H. Van Rijn. 1993. The CELEX lexical database (CD-ROM). Linguistic Data Consortium. Philadelphia, PA: University of Pennsylvania.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.

Ryan P. Blything, Ben Ambridge, and Elena V. M. Lieven. 2018. Children's acquisition of the English past-tense: Evidence for a single-route account from novel verb production data. Cognitive Science, 42:621-639.

A. Van den Bosch, Alain Content, W. Daelemans, and Béatrice De Gelder. 1994. Analysing orthographic depth of different languages using data-oriented algorithms: Qualico94. In Proceedings of the 2nd International Conference on Quantitative Linguistics, pages 26-31.

Maria Corkery, Yevgen Matusevych, and Sharon Goldwater. 2019. Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3868-3877, Florence, Italy. Association for Computational Linguistics.

Verna Dankers, Anna Langedijk, Kate McCurdy, Adina Williams, and Dieuwke Hupkes. 2021. Generalising to German plural noun classes, from the perspective of a recurrent neural network. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 94-108, Online. Association for Computational Linguistics.

Micha Elsner and Sara Court. 2022. OSU at SigMorphon 2022: Analogical inflection with rule features. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 220-225, Seattle, Washington. Association for Computational Linguistics.

Mirjam Ernestus and Harald Baayen. 2004. Analogical effects in regular past tense production in Dutch.

Emily Goodwin, Koustuv Sinha, and Timothy J. O'Donnell. 2020. Probing linguistic systematicity. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1958-1969, Online. Association for Computational Linguistics.

Ulrike Hahn and Ramin Charles Nakisa. 2000. German inflection: Single route or dual route? Cognitive Psychology, 41(4):313-360.

Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.
+ +865 866 867 870 + +885 + +887 + +895 + +897 + +900 + +902 + +907 + +917 + +Akos Kádár, Grzegorz Chrupata, and Afra Alishahi. 918 + +2017. Representation of linguistic form and function in recurrent neural networks. Computational Linguistics, 43(4):761-780. + +Akhilesh Kakolu Ramarao, Yulia Zinova, Kevin Tang, and Ruben van de Vijver. 2022. HeiMorph at SIG-MORPHON 2022 shared task on morphological acquisition trajectories. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 236-239, Seattle, Washington. Association for Computational Linguistics. + +Katharina Kann and Hinrich Schütze. 2016. Med: The lmu system for the sigmorphon 2016 shared task on morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 62-70. + +Christo Kirov and Ryan Cotterell. 2018. Recurrent neural networks in linguistic theory: Revisiting pinker and prince (1988) and the past tense debate. Transactions of the Association for Computational Linguistics, 6:651-665. + +Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810. + +Jordan Kodner and Salam Khalifa. 2022. SIGMORPHON-UniMorph 2022 shared task 0: Modeling inflection in language acquisition. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 157-175, Seattle, Washington. Association for Computational Linguistics. + +Gary F Marcus, Ursula Brinkmann, Harald Clahsen, Richard Wiese, and Steven Pinker. 1995. German inflection: The exception that proves the rule. Cognitive psychology, 29(3):189-256. + +Gary F Marcus, Steven Pinker, Michael Ullman, Michelle Hollander, T John Rosen, Fei Xu, and Harald Clahsen. 1992. Overregularization in language acquisition. Monographs of the society for research in child development, pages i-178. + +Xavier Marjou. 2021. OTEANN: Estimating the transparency of orthographies with an artificial neural network. In Proceedings of the Third Workshop on Computational Typology and Multilingual NLP, pages 1-9, Online. Association for Computational Linguistics. + +Arya D. McCarthy, Christo Kirov, Matteo Grella, Amrit Nidhi, Patrick Xia, Kyle Gorman, Ekaterina Vylomova, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, Timofey Arkhangelskiy, Na-taly Krizhanovsky, Andrew Krizhanovsky, Elena Klyachko, Alexey Sorokin, John Mansfield, Valts + +919 + +920 + +921 + +922 + +923 + +924 + +929 + +934 + +936 + +939 + +941 + +956 + +959 + +961 + +966 971 972 Ernštreits, Yuval Pinter, Cassandra L. Jacobs, Ryan 973 Cotterell, Mans Hulden, and David Yarowsky. 2020. 974 UniMorph 3.0: Universal Morphology. In Proceed- 975 ings of the 12th Language Resources and Evaluation 976 Conference, pages 3922-3931, Marseille, France. European Language Resources Association. 977 + +978 Kate McCurdy, Sharon Goldwater, and Adam Lopez. 979 2020. Inflecting when there's no majority: Limi- 980 tations of encoder-decoder neural networks as cog- nitive models for German plurals. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1745-1756, On- 983 line. Association for Computational Linguistics. + +Joe Pater. 2019. Generative linguistics and neural net- 985 works at 60: Foundation, friction, and fusion. Lan- + +986 guage, 95(1):e41-e74. 
+ +987 + +988 Tiago Pimentel, Maria Ryskina, Sabrina J. Mielke, + +989 Shijie Wu, Eleanor Chodroff, Brian Leonard, Gar- rett Nicolai, Yustinus Ghanggo Ate, Salam Khal- + +990 ifa, Nizar Habash, Charbel El-Khaissi, Omer Goldman, Michael Gasser, William Lane, Matt Coler, Arturo Oncevay, Jaime Rafael Montoya Samame, + +993 Gema Celeste Silva Villegas, Adam Ek, Jean-Philippe Bernardy, Andrey Shcherbakov, Aziyana Bayyr-ool, Karina Sheifer, Sofya Ganieva, Matvey + +995 Plugaryov, Elena Klyachko, Ali Salehi, Andrew Krizhanovsky, Natalia Krizhanovsky, Clara Vania, Sardana Ivanova, Aelita Salchak, Christo- + +998 pher Straughn, Zoey Liu, Jonathan North Washington, Duygu Ataman, Witold Kieraé, Marcin Woliński, Totok Suhardijanto, Niklas Stoehr, Zahroh + +1000 Nuriah, Shyam Ratan, Francis M. Tyers, Edoardo M. Ponti, Grant Aiton, Richard J. Hatcher, Emily Prud'hommeaux, Ritesh Kumar, Mans Hulden, Botond Barta, Dorina Lakatos, Gábor Szolnok, Ju-dit Acs, Mohit Raj, David Yarowsky, Ryan Cotterell, Ben Ambridge, and Ekaterina Vylomova. + +1005 2021. SIGMORPHON 2021 shared task on morphological reinflection: Generalization across languages. In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229-259, On- + +1010 line. Association for Computational Linguistics. + +Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28(1-2):73-193. + +1015 Steven Pinker and Michael T Ullman. 2002. The past and future of the past tense. Trends in cognitive sciences, 6(11):456-463. + +David C Plaut and Laura M Gonnerman. 2000. Are non-semantic morphological effects incompatible with a distributed connectionist approach to lexical processing? Language and Cognitive Processes, 15(4-5):445-485. + +Kim Plunkett and Patrick Juola. 1999. A connectionist 1024 model of english past tense and plural morphology. 1025 Cognitive Science, 23(4):463-490. + +Kim Plunkett and Virginia Marchman. 2020. U-shaped 1026 learning and frequency effects in a multilayered per- 1027 ceptron: Implications for child language acquisi- 1028 tion. Connectionist psychology: A text with read- 1029 + +ings, pages 487-526. 1030 + +Kim Plunkett, Virginia Marchman, and Steen Lade- 1031 + +gaard Knudsen. 1991. From rote learning to system 1032 + +building: acquiring verb morphology in children and 1033 + +connectionist nets. In Connectionist Models, pages 1034 201-219. Elsevier. + +1035 + +Michael Ramscar. 2002. The role of meaning in in- 1036 + +flection: Why the past tense does not require a rule. 1037 + +Cognitive Psychology, 45(1):45-94. 1038 + +Abhilasha Ravichander, Eduard Hovy, Kaheer Sule- 1039 + +man, Adam Trischler, and Jackie Chi Kit Cheung. 1040 + +2020. On the systematicity of probing contextual- 1041 + +ized word representations: The case of hypernymy 1042 in bert. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, + +pages 88-102. 1044 + +David E Rumelhart and James L McClelland. 1986. On + +learning the past tenses of english verbs. 1047 + +Mark S Seidenberg and Laura M Gonnerman. 2000. + +Explaining derivational morphology as the conver- 1049 + +gence of codes. Trends in cognitive sciences, 1050 + +4(9):353-361. 1051 + +Niels A Taatgen and John R Anderson. 2002. Why 1052 do children learn to say "broke"? a model of learn- + +ing the past tense without feedback. Cognition, 1054 + +86(2):123-155. 
Adam Wiemerslage, Shiran Dudy, and Katharina Kann. 2022. A comprehensive comparison of neural networks as cognitive models of inflection. arXiv preprint arXiv:2210.12321.
## A Appendix

Appendix A displays the hyperparameter settings of the replication experiments and the extension experiments.

| Parameter | Value |
|---|---|
| seed | 123 |
| feat_vec_size | 300 |
| feat_merge | concat |
| rnn_type | LSTM |
| encoder_type | brnn |
| encoder_layers | 2 |
| encoder_rnn_size | 100 |
| decoder_type | rnn |
| decoder_layers | 2 |
| decoder_rnn_size | 100 |
| dropout | 0.3 |
| learning_rate_decay | 1.0 |
| learning_rate | 1.0 |
| batch_size | 20 |
| train_steps | (training sample size / batch size) * number of epochs |
| beam_size | 12 |
| optim | adadelta |
| verbose | True |
| tensorboard | True |
| tensorboard_log_dir | logs |
| report_every | steps / 100 |
| log_file | directory of the log file |
| log_file_level | 20 |
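Since the original experiments were specified in epochs while the table gives train_steps as a formula, the conversion can be made explicit. The following is a minimal illustrative sketch in Python; the function name is ours, and the example values (the English training-set size and a 100-epoch budget) are only placeholders, not settings confirmed by the paper.

```python
def epochs_to_train_steps(num_train_examples: int,
                          batch_size: int,
                          num_epochs: int) -> int:
    """Convert an epoch budget into an OpenNMT-style train_steps value.

    One epoch corresponds to ceil(num_train_examples / batch_size)
    optimizer steps, i.e. one pass over the training data.
    """
    steps_per_epoch = -(-num_train_examples // batch_size)  # ceiling division
    return steps_per_epoch * num_epochs

# Placeholder example: 4,879 training verbs (cf. the English train split)
# with batch_size 20 and a 100-epoch budget.
print(epochs_to_train_steps(4879, 20, 100))  # -> 24400
```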
\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e8b49ee3959685560898af2200125f9d003280c2
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wKieg8k2taJ/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,780 @@

§ SLAAPTE OR SLIEP? EXTENDING NEURAL-NETWORK SIMULATIONS OF ENGLISH PAST TENSE LEARNING TO DUTCH AND GERMAN

§ ABSTRACT

This work studies the plausibility of sequence-to-sequence neural networks as models of morphological acquisition by humans. We replicate the findings of Kirov and Cotterell (2018) on the well-known challenge of the English past tense and examine their generalizability to two related but morphologically richer languages, namely Dutch and German. Using a new dataset of English/Dutch/German (ir)regular verb forms, we show that the major findings of Kirov and Cotterell (2018) hold for all three languages, including the observation of over-regularization errors and micro U-shape learning trajectories. At the same time, we observe troublesome cases of non human-like errors similar to those reported by recent follow-up studies with different languages or neural architectures. Finally, we study the possibility of switching to orthographic input in the absence of pronunciation information and show this can have a non-negligible impact on the simulation results, with possibly misleading findings.

§ 1 INTRODUCTION

The plausibility of neural network-based or connectionist models in simulating psycholinguistic behaviours has been attracting considerable attention since Rumelhart and McClelland (1986) first modeled past-tense acquisition with an early example of a sequence-to-sequence network. Their experiment received harsh criticism (e.g., Pinker and Prince, 1988) but also inspired cognitive scientists with alternatives (e.g., Kirov and Cotterell, 2018; Plunkett and Juola, 1999; Taatgen and Anderson, 2002). Much more recently, Kirov and Cotterell (2018) replicated Rumelhart and McClelland (1986)'s simulations using a modern encoder-decoder neural architecture developed for the task of morphological paradigm completion. Their improved results resolved much of the original criticism by Pinker and Prince (1988).
The main purpose of this paper is to study the generalizability of Kirov and Cotterell (2018)'s findings beyond the case of English. Specifically, we consider two languages that are genetically related to English, but morphologically richer - namely, Dutch and German. In these languages too, past tense inflection is divided into regular and irregular verbs, but with different proportions and different inflectional patterns than in English. Moreover, German and Dutch are characterized by a much more transparent orthography than English (Van den Bosch et al., 1994; Marjou, 2021), which allows us to study the usability of grapheme-based input for simulating past tense acquisition patterns when pronunciation information may not be available. Concretely, we aim to answer the following research questions:

1. Can the model applied by Kirov and Cotterell (2018) to English also simulate the past tense acquisition process in languages with more complex morphological inflection, such as Dutch and German?

2. Given the more predictable grapheme-to-phoneme correspondence, i.e., orthographic transparency (Marjou, 2021), in these two languages, will the model perform similarly if the written forms of verbs are used for training instead of the phonetic ones?

To answer these two questions, we build and release a new past-tense inflection dataset of English, Dutch, and German, covering both grapheme and phoneme features (Section 3).${}^{1}$ We then replicate the single-task learning experiments of Kirov and Cotterell (2018) (Section 4) and extend them to our multilingual dataset, using both phoneme- and grapheme-based input for comparison (Section 5).

${}^{1}$ All code and data are available at https://anonynmous

Our findings reconfirm the potential and limitations of using neural networks for the simulation of human language learning patterns. Our model shows human-like behavior in learning the past tense of verbs, such as the micro U-shape coined by Plunkett et al. (1991) and over-regularization errors in all the examined languages; however, non human-like errors are also reported. We also find that learning irregular past tense forms is considerably easier in Dutch and German than in English. Finally, we observe that higher orthographic transparency indeed leads to more consistent learning results when a model is trained with grapheme vs. phoneme input.

§ 2 BACKGROUND

Past tense debate The acquisition of the verbal past tense in English, particularly the over-regularization of irregular verbs in the process of learning (Marcus et al., 1992), has been serving as a testing ground for different hypotheses in language modelling for decades. A much debated question is whether the past tense of (ir)regular verbs is learnt by rules and memories (e.g., Plaut and Gonnerman, 2000; Seidenberg and Gonnerman, 2000; Marcus et al., 1995; Albright and Hayes, 2003; Pinker and Ullman, 2002), by analogy (e.g., Ramscar, 2002; Albright and Hayes, 2003) or by a dual mechanism (Pinker and Prince, 1988; Taatgen and Anderson, 2002).

Marcus et al. (1995) posited the necessity of mental rules in learning German irregular verbs.
By contrast, Ernestus and Baayen's (2004) and Hahn and Nakisa's (2000) studies on Dutch and German respectively provided evidence in favour of connectionist and analogical approaches: they showed that humans tend to choose wrong past tense suffixes for regular verbs whose phonological structure is similar to that of irregular ones.

Recent connectionist revival The recent development of deep learning methods in computational linguistics has led to a renewed interest in connectionist approaches to modelling language acquisition and processing by humans (e.g., Blything et al., 2018; Kádár et al., 2017; Pater, 2019; Corkery et al., 2019; McCurdy et al., 2020). Last year, modelling morphological acquisition trajectories was adopted as one of the shared tasks of SIGMORPHON-UniMorph (Kodner and Khalifa, 2022). The three submitted neural systems (Pimentel et al., 2021; Kakolu Ramarao et al., 2022; Elsner and Court, 2022) exhibited over-regularization and developmental regression, but non-human-like behaviours were also observed.

Some recent studies have revealed a poor alignment between the way humans and neural encoder-decoder models generalize to new words (wug test) in the case of English verb past tense (Corkery et al., 2019) and German plural nouns (McCurdy et al., 2020). Dankers et al. (2021) observed cognitively plausible representations in a recurrent neural network (RNN) trained to inflect German plural nouns but also found evidence of problematic 'shortcut' learning. Wiemerslage et al. (2022) observed that Transformers resemble humans in learning the morphological inflection of English and German in wug tests, but they also pointed out the divergence of the model in German production. However, computational simulations have succeeded in replicating the U-shaped learning curve during the acquisition of the past tense (Kirov and Cotterell, 2018; Plunkett and Marchman, 2020). Additionally, further probing experiments have suggested that neural models do learn linguistic representations (Goodwin et al., 2020; Hupkes et al., 2018; Ravichander et al., 2020). Our research continues to explore the cognitive plausibility of neural networks in modeling language inflection learning.

Recurrent encoder-decoder inflection model In this work, we adopt the model of Kirov and Cotterell (2018), henceforth referred to as K&C. This model is based on the encoder-decoder architecture proposed by Bahdanau et al. (2014), with input representation and hyper-parameters taken from Kann and Schütze (2016). The architecture consists of a bidirectional LSTM (BiLSTM) encoder augmented with an attention mechanism and a unidirectional LSTM decoder. The task of the encoder is to map each phonetic (or orthographic) symbol from the input string to a unique embedding and then process that embedding to get a context-sensitive representation of that symbol. The decoder reads the context vector from the final cell of the encoder and generates the output phoneme/grapheme sequence; both encoder and decoder are trained with two hidden layers. For more details on the model, see Bahdanau et al. (2014); Kann and Schütze (2016); Kirov and Cotterell (2018).
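The architecture description above can be made concrete with a small PyTorch sketch. This is our own minimal illustration, not K&C's implementation (which is built on OpenNMT): it keeps the 2-layer BiLSTM encoder, attention, and unidirectional LSTM decoder described in the text, with embedding size 300 and RNN size 100 as in the reported hyperparameter settings, but substitutes plain dot-product attention for the additive attention of Bahdanau et al. (2014) and omits training and beam-search decoding.

```python
import torch
import torch.nn as nn

class Seq2SeqInflector(nn.Module):
    """Sketch of a BiLSTM encoder-decoder with attention for inflection."""

    def __init__(self, vocab_size, emb_size=300, hidden=100, layers=2):
        super().__init__()
        self.hidden = hidden
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.encoder = nn.LSTM(emb_size, hidden, num_layers=layers,
                               bidirectional=True, batch_first=True)
        # decoder input = previous symbol embedding + attention context (2*hidden)
        self.decoder = nn.LSTM(emb_size + 2 * hidden, hidden,
                               num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt_in):
        """src, tgt_in: (batch, seq) symbol ids; teacher-forced decoding."""
        enc, _ = self.encoder(self.embed(src))                        # (B, S, 2H)
        enc_sum = enc[:, :, :self.hidden] + enc[:, :, self.hidden:]   # (B, S, H)
        query = enc.new_zeros(enc.size(0), 1, self.hidden)            # (B, 1, H)
        state, logits = None, []
        for t in range(tgt_in.size(1)):
            # dot-product attention of the decoder state over encoder states
            weights = torch.softmax(
                torch.bmm(enc_sum, query.transpose(1, 2)), dim=1)     # (B, S, 1)
            context = (weights * enc).sum(dim=1, keepdim=True)        # (B, 1, 2H)
            step = torch.cat([self.embed(tgt_in[:, t:t + 1]), context], dim=-1)
            query, state = self.decoder(step, state)                  # (B, 1, H)
            logits.append(self.proj(query))
        return torch.cat(logits, dim=1)                               # (B, T, V)
```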
§ 3 DATASETS

To replicate the results published by K&C, we employ their dataset based on CELEX (Baayen et al., 1993).${}^{2}$ To extend the experiments to Dutch and German and compare the results to English, we build a new dataset containing past tense forms in all three languages.

§ 3.1 K&C ENGLISH DATASET

K&C's CELEX-based dataset contains 4,039 English verb types, including 3,871 regular verbs and 168 irregular verbs. Each verb is associated with an infinitive form and a past tense form, both in the International Phonetic Alphabet (IPA). Moreover, each verb is marked as regular or irregular (Albright and Hayes, 2003).

Note that there are label errors in their dataset. For example, dive-dived, dream-dreamed, and light-lighted are marked as irregular. This is possibly because those verbs have two past tense forms and the other form does not follow the regular inflection (dive-dove, dream-dreamt, light-lit). However, as the past tenses of those verbs in the original dataset align with the regular inflection rule of English, we take those verbs as regular ones and manually correct their labels.

§ 3.2 MULTILINGUAL UNIMORPH-BASED DATASET

We use the morphological annotation dataset UniMorph (McCarthy et al., 2020) as a source of English, Dutch, and German word forms to enable a fair comparison in our multilingual experiments. In this lexicon, each entry consists of the infinitive of the verb, the conjugation, and a tag containing the part-of-speech and inflectional information. An important adjustment has to be made here, because English has only two forms for the present tense (I/you/we/they) and only one for the past. By contrast, Dutch and German distinguish more persons in both present and past tense. To address this, we include for each lemma the first/second/third person singular present forms and the plural form, together with their respective past forms, each as a separate entry (see examples in Figure 1).

(a) English

| present(g) | past(g) | present(p) | past(p) | class |
|---|---|---|---|---|
| accounts | accounted | @k6nts | @k6ntId | reg |
| account | accounted | @k6nt | @k6ntId | reg |
| feels | felt | filz | fElt | irreg |
| feel | felt | fil | fElt | irreg |

(b) Dutch

| present(g) | past(g) | present(p) | past(p) | class |
|---|---|---|---|---|
| slaap | sliep | slap | slip | irreg |
| slaapt | sliep | slapt | slip | irreg |
| slapen | sliepen | slap@ | slip@ | irreg |
| behoef | behoefde | b@huf | b@huvd@ | reg |
| behoeft | behoefde | b@huft | b@huvd@ | reg |
| behoeven | behoefden | b@huv@ | b@huvd@ | reg |

(c) German

| present(g) | past(g) | present(p) | past(p) | class |
|---|---|---|---|---|
| berechne | berechnete | b@rExn@ | b@rExn@t@ | reg |
| berechnest | berechnetest | b@rExn@st | b@rExn@t@st | reg |
| berechnet | berechnete | b@rExn@t | b@rExn@t@ | reg |
| berechnen | berechneten | b@rExn@n | b@rExn@t@n | reg |
| fliehe | floh | flia | flo | irreg |
| fliehst | flohst | flist | flost | irreg |
| flieht | floh | flit | flo | irreg |
| fliehen | flohen | flian | flo@n | irreg |

Figure 1: Excerpt of the newly introduced dataset of English, Dutch and German past tense. Dutch verbs: slapen (to sleep); behoeven (to need). German: berechnen (to calculate); fliehen (to flee).

Specifically, we start by extracting from UniMorph a list of verb lemmas and their corresponding present and past tense forms. A different extraction script is used for each language because of the different number of forms and slightly different POS tags:

* English only has two present tense forms: one for the third person singular and one for the rest; mostly, there is also only one past tense form.
* Most verbs in Dutch have three present tense forms and two past tense forms.

* Most verbs in German have five present tense forms and four past tense forms.

Next, we tag each form as regular or irregular, based on a simple rule-based strategy (illustrated by a sketch in Section 3.3):

* English: if the past tense ends with '-ed', it is considered a regular verb.

* Dutch: if the singular past tense ends with '-de' or '-te', it is considered regular.

* German: if the singular past tense of the first or third person ends with '-te', it is considered regular.

${}^{2}$ Dataset, code and other experimental details are taken from https://github.com/ckirov/RevisitPinkerAndPrince

| Language | Type | train (count / %) | dev (count / %) | test (count / %) | total (count / %) |
|---|---|---|---|---|---|
| English | all | 4,879 / 79.9 | 611 / 10.0 | 614 / 10.1 | 6,104 / 100.0 |
| | regular | 4,601 / 75.4 | 529 / 8.7 | 520 / 8.5 | 5,650 / 92.6 |
| | irregular | 278 / 4.6 | 82 / 1.3 | 94 / 1.5 | 454 / 7.4 |
| Dutch | all | 4,896 / 80.1 | 612 / 10.0 | 607 / 9.9 | 6,115 / 100.0 |
| | regular | 4,383 / 71.7 | 550 / 9.0 | 542 / 8.9 | 5,475 / 89.6 |
| | irregular | 513 / 8.4 | 62 / 1.0 | 65 / 1.0 | 640 / 10.4 |
| German | all | 4,865 / 79.7 | 616 / 10.1 | 620 / 10.2 | 6,101 / 100.0 |
| | regular | 4,299 / 70.5 | 535 / 8.8 | 578 / 9.5 | 5,412 / 88.8 |
| | irregular | 566 / 9.2 | 81 / 1.3 | 42 / 0.7 | 689 / 11.2 |

Table 1: Dataset distributed into train, dev and test sets in each of the three languages. The number of regular and irregular verbs is also reported. Percentages are calculated over the total number of verbs per language.

Finally, the IPA transcriptions of all word forms are retrieved from CELEX for all languages and added to the final dataset. As shown in Figure 1, the resulting dataset is in the same format as K&C's CELEX-based dataset.

Data selection The generated Dutch data only contains 6,106 verb forms, versus 11,489 and 6,975 in English and German respectively. Therefore, to enable a fair comparison among languages, we need to downsample the larger datasets. However, randomly choosing 6K verb forms from the English and German lists may lead to a poor selection, given the long tail of infrequent words. As a solution, we use the word form frequencies provided in the CELEX data and choose all words with a frequency of more than 1 per million, complemented with a random selection of less frequent words, in order to obtain approximately 6,106 verb forms.

After shuffling, the word forms are split into a train set (80%), a development (dev) set (10%) and a test set (10%). The distribution of the data over the three sets and over regular/irregular verbs for each language is reported in Table 1.

§ 3.3 REMARKABLE PROBLEMS

A few problems occurred during data preparation. First, rule-based tagging of lemmas is not as trivial as it seems at first sight. For example, in English, not all past tenses ending with '-ed' are regular. Using the data of K&C, we added a few exceptions that are all irregular words ending with '-ed': bled, bred, led, misled, fled, and forms of fed (including breast-fed, force-fed and bottle-fed).
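Combining the suffix rules from Section 3.2 with these exceptions, the tagging heuristic can be summarized as in the following minimal sketch. The rules and the English exception list follow the paper; the function name, data layout, and everything else are our own illustrative choices rather than the authors' actual extraction script.

```python
# Hypothetical re-implementation of the rule-based regular/irregular tagger.
ED_IRREGULARS = {"bled", "bred", "led", "misled", "fled",
                 "fed", "breast-fed", "force-fed", "bottle-fed"}

def tag_verb(language: str, singular_past: str) -> str:
    """Return 'reg' or 'irreg' for a verb, given its (singular) past tense."""
    if language == "english":
        if singular_past in ED_IRREGULARS:
            return "irreg"                       # irregular despite ending in -ed
        return "reg" if singular_past.endswith("ed") else "irreg"
    if language == "dutch":
        return "reg" if singular_past.endswith(("de", "te")) else "irreg"
    if language == "german":
        # applied to the 1st/3rd person singular past form
        return "reg" if singular_past.endswith("te") else "irreg"
    raise ValueError(f"unsupported language: {language}")

assert tag_verb("english", "accounted") == "reg"
assert tag_verb("english", "bled") == "irreg"
assert tag_verb("dutch", "behoefde") == "reg"
assert tag_verb("german", "floh") == "irreg"
```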
Also, in the original K&C experiment, the model should be able to predict the past tense based on what it learned from other verbs, not from other word forms of the same verb. In morphologically richer languages, a lemma has more word forms, and data splitting becomes problematic. For instance, a model might have learned that work $\rightarrow$ worked and walks $\rightarrow$ walked, and might then predict that works $\rightarrow$ worked. In such a case, it is not possible to know whether the model made the right prediction based on similarities to other lemmas (walks) or to other forms of the same verb (work). To be as comparable as possible to the original setup of K&C, we put all forms of the same verb in the same data split (that is, either training, dev or test). As a result, if the model scores well, we know for sure that it cannot have made its predictions based on other forms of the same verb.

Another issue is that one present tense form normally corresponds to one past tense form. However, German poses two notable exceptions to this:

* The second person singular verb form ends with '-st' and the third person singular ends with '-t'. Those forms coincide if a verb stem already ends with an 's', but there is still a difference between those forms in the past tense. For example, bremst is the present conjugation form of the verb bremsen (to brake) for the pronouns du (you), er (he) and even ihr (you).

* Verbs ending in '-t' can be the third person singular or the second person plural informal. For example, wundert is the present conjugation of the verb wundern (to wonder) for the pronouns ihr (you) and er (he).

In the former case, the model should be able to output multiple solutions, since only context can make clear whether it is the second person or the third person. However, this complicates the evaluation. As a solution, we exclude the third person form if it collides with the second person. As for the latter issue, we choose to remove all second person plural informal forms, since those are far less frequent than the third person singular forms.

§ 4 REPLICATION OF K&C

Before moving to the main multilingual experiments, we replicate the original K&C experiments (single-task only).

§ 4.1 EXPERIMENTAL SETUP

For the replication, we employ K&C's CELEX-based dataset and keep the model architecture and hyper-parameters unchanged, using OpenNMT (Klein et al., 2017).${}^{3}$ See more details in Appendix A. Following K&C, the model is trained on the IPA transcriptions.

We use word form-level accuracy to evaluate model performance. An important remark concerns data splitting: K&C did not release their specific data split, which makes it impossible to replicate their exact results. We therefore create our own splits following K&C's proportions (80/10/10% for training/dev/test). To obtain more reliable results, we train the model three times using different random seeds for initialization and report the averaged accuracies.

To study the micro U-shape learning curve of irregular verbs, we save the model every 10 epochs and use those partially trained models to predict the test set, comparing their prediction results.

§ 4.2 RESULTS

As shown in Table 2, the results on the training set are almost the same as reported in the original paper, which means our replication is largely successful.${}^{4}$
We note that the accuracy for irregular verbs in the dev and test sets is considerably different from that of K&C (dev: 21.1% vs. 53.3%; test: 35.3% vs. 28.6%). Since K&C did not release their specific data split, replicating their exact results on the small portion of irregular verbs is not possible. Given that our results are averaged over three random seeds and reported on all three split sets, we consider them more reliable, which means the model might perform worse at learning the past tense of irregular verbs than K&C reported.

| System | all (train / dev / test) | regular (train / dev / test) | irregular (train / dev / test) |
|---|---|---|---|
| K&C | 99.8 / 97.4 / 95.1 | 99.9 / 99.2 / 98.9 | 97.6 / 53.3 / 28.6 |
| Ours | 99.9 / 95.3 / 96.5 | 99.9 / 98.4 / 99.2 | 98.4 / 21.1 / 35.3 |

Table 2: Mean accuracy of our replication of K&C with 3 random seeds.

§ 4.3 DISCUSSION

We assume the reason for the gap between our results and K&C's is twofold: (i) the number of irregular verbs is much lower than that of regular ones, which makes the accuracy change dramatically even if only a few more or fewer verbs are predicted correctly than in the original experiments; (ii) we corrected the label errors mentioned above, so the number of irregular verbs became smaller than before. This small difference can have a large impact on the accuracy calculation, given that these two sets only contain about 20 irregular verbs each. To test this hypothesis, we conduct 9-fold cross-validation${}^{5}$ and find that the accuracy for irregular verbs varies across dev splits, ranging widely between 9% and 42%.

§ 5 MULTILINGUAL EXPERIMENTS

This section presents the results of our main experiments, aimed at comparing Dutch and German past tense learning patterns to the English ones. It also presents the results of grapheme vs. phoneme sequence learning in all three languages. Because Dutch and German pronunciation is more predictable than English pronunciation, we expect the difference between grapheme and phoneme learning to be smaller in these languages.

${}^{3}$ However, as the epoch option has been deprecated in the latest version of OpenNMT, we converted it to train_steps based on its relationship with steps.

${}^{4}$ Our results are also very close to those of Corkery et al. (2019), who did a similar replication and reported the accuracy averaged over ten runs initialized with different random seeds, but only on the training set.

${}^{5}$ We keep the test set unchanged and validate across the train and dev sets. To make sure the dev set has a comparable number of verbs to the original set, we adopt 9-fold instead of 10-fold cross-validation.
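As an illustration of the fixed-test rotation described in footnote 5, consider the following minimal sketch (our own construction, not the authors' code): the test set is held constant while the remaining data is rotated through nine equal folds, each fold serving once as the dev set.

```python
# Illustrative sketch of 9-fold cross-validation with a fixed test set,
# as described in footnote 5. Names and data layout are hypothetical.
from typing import List, Tuple

def fixed_test_cv(train_and_dev: List[str],
                  k: int = 9) -> List[Tuple[List[str], List[str]]]:
    """Rotate train+dev items through k folds; each fold is the dev set once."""
    fold_size = len(train_and_dev) // k
    splits = []
    for i in range(k):
        dev = train_and_dev[i * fold_size:(i + 1) * fold_size]
        train = (train_and_dev[:i * fold_size]
                 + train_and_dev[(i + 1) * fold_size:])
        splits.append((train, dev))
    return splits

# With roughly 5,490 train+dev verbs (cf. the English split in Table 1),
# each of the 9 dev folds contains about 610 verbs, comparable in size
# to the original dev set.
```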
| Lang | all (train / dev / test) | regular (train / dev / test) | irregular (train / dev / test) |
|---|---|---|---|
| EN | 99.5 / 93.1 / 92.1 | 99.8 / 96.1 / 95.0 | 98.1 / 27.8 / 40.5 |
| NL | 98.9 / 88.4 / 88.4 | 99.2 / 91.4 / 92.2 | 96.5 / 62.4 / 57.9 |
| DE | 98.9 / 85.0 / 92.5 | 99.4 / 92.0 / 95.1 | 96.7 / 38.7 / 57.9 |

(a) Phoneme input

| Lang | all (train / dev / test) | regular (train / dev / test) | irregular (train / dev / test) |
|---|---|---|---|
| EN | 99.1 / 93.6 / 93.8 | 99.8 / 98.2 / 98.1 | 89.0 / 11.1 / 28.1 |
| NL | 99.4 / 88.0 / 89.6 | 99.8 / 91.2 / 93.0 | 97.9 / 58.6 / 61.0 |
| DE | 98.4 / 86.4 / 93.6 | 99.1 / 93.5 / 95.7 | 93.9 / 39.5 / 65.9 |

(b) Grapheme input

Table 3: Past tense inflection accuracy in English, Dutch, and German; all averaged over 3 random seeds.

| epoch | English: hits | Dutch: bestijgt (mounts) | German: gilt (applies) |
|---|---|---|---|
| 10 | hItId / hitted | b@stKGd@ / besteeg | gIlt@ / galte |
| 20 | hItst / hit | b@stex / besteeg | gIlt@ / galt |
| 30 | hItId / hitted | b@stKGd@ / besteeg | g< / galt |
| 40 | hItId / hitted | b@stKGd@ / besteeg | g< / galt |
| 50 | hIt / hitted | b@stKGd@ / besteeg | g< / galt |
| 60 | hItst / hit | b@stex / besteeg | gIIt@ / gilte |
| 70 | hIt / hit | b@stex / bestijgde | g< / galt |
| 80 | hItId / hitted | b@stex / besteeg | g< / galt |
| 90 | hItId / hitted | b@stex / besteeg | g< / galt |
| 100 | hIt / hit | b@stex / besteeg | g< / galt |

Table 4: The oscillating development (micro U-shape) of single verbs in three languages: with phoneme or grapheme inputs, the predicted past phonetic (left) or orthographic (right) forms change as training proceeds, but the final predictions are correct at the last epoch.

For comparability, all experiments in this section use the newly introduced UniMorph-based dataset, which includes a similar amount of training forms in all languages (cf. Table 1). The model architecture and the hyperparameter settings are the same as in the previous experiments. We again run each experiment three times with different random seeds and report the averaged results.

Result overview For the forms seen in training, the model is able to learn both regular and irregular past tense inflection with more than 95% accuracy (Table 3a), and with similar learning curves (Figure 2), which confirms and strengthens the main findings of K&C on two other languages.

Comparing Table 3a to 3b, we find that the overall trends are maintained when the model is trained on graphemes instead of phonemes (the original setup of K&C). However, a notable exception is observed: grapheme learning results in a much lower accuracy on English irregular verbs.

In the following sections, we discuss these results in more detail.

§ 5.1 PAST TENSE LEARNING RESULTS IN ENGLISH, DUTCH, AND GERMAN

Accuracy Looking closer at the results across languages (Table 3a), we notice that inflecting unseen Dutch regular verbs is slightly harder than in German and English.
This might be explained by the fact that in Dutch all voiced consonants become unvoiced at the end of a word; yet to predict whether the past tense suffix is '-de' (for voiced consonants) or '-te' (for unvoiced consonants), we still need the final consonant of the stem, which can be found within the lemma and, most of the time, in the spelling of the word form. Unfortunately, this information is absent from the pronunciation. For example, given the pair lAnt-lAndd@, one cannot know whether the past tense should be lAnd@ or lAnt@ before seeing the orthographic form land. We find that such errors account for about 50% (18/38) of all Dutch regular verb errors. This difference in voiced/unvoiced regular past tense endings only occurs in Dutch.

As for irregular verbs, we find a large difference across languages in the ability to generalize to new forms. Especially in English, while the model has almost perfectly learned to inflect seen verbs, it has a hard time predicting the form of new irregular verbs (dev: 27.8%, test: 40.5%).

[Figure 2 (plot not recoverable from the extraction): Learning curves of the model on the German, English, and Dutch training sets (with random seed 123); panels (a) Phoneme Input and (b) Grapheme Input, accuracy against number of epochs, with separate curves for regular and irregular verbs per language.]

This effect is smaller in Dutch and German, suggesting that the irregular inflection patterns in these languages are more predictable. Surprisingly, the model made more mistakes when predicting the inflections of the irregular verbs in the German dev set than in the test set (dev: 38.7%, test: 57.9%). By inspecting the mistakes, we found that the model incorrectly treated many irregular verbs as regular ones because of their resemblance (high character overlap) to regular verbs. For instance, reitest-*reitetest/rittest (ride) is influenced by the regular conjugation of bereitest-bereitetest (prepare). We found that 23/81 irregular verbs in the dev set are very similar to regular verbs in the training set. Out of these, 8 irregular verbs are identical to regular ones except for a prefix (e.g., reitet (rides) vs. bereitet (prepares) and reitest (ride) vs. verbreitest (spread)), which could be highly confusing for a model that is only based on form, regardless of meaning. By contrast, no such overlap is found between the irregular verbs in the test set and the regular ones in the training set. This distributional discrepancy might explain the lower accuracy on the dev set. It echoes our other finding, discussed in the next section, that irregular verbs might be misled by regular verbs if they share representational similarity.

Errors and learning trajectories Going beyond overall accuracy, we inspect the learning trajectories of individual verbs in our dataset. We find that human-like over-regularization patterns similar to those observed by K&C in English also occur in Dutch and German. For example, in Dutch, after 40 epochs of training, the model changes verscheent to verscheen as the past tense of verschijnt (appears). However, after 50 epochs, the model again generates the wrong form verscheent. After 70 epochs, the correct result is obtained again.
Similar patterns are observed for sink in English and streitet (argues) in German. All wrongly predicted irregular verbs are caused by over-regularization. In other words, no patterns like ated in English or lookte in Dutch are found, which is consistent with humans' learning behaviour (Pinker and Prince, 1988). More examples from English, Dutch and German are listed in Table 4.

Additionally, we find cases where the model generates an irregular form for a regular verb, because of its resemblance to other (irregular) verbs. In Dutch, for example, the regular verb versier-versierde (decorate-decorated) gets incorrectly inflected as *versoor by resemblance to verbs like verlies-verloor (lose-lost). Similar errors also occur in German. For instance, the wrong prediction verfehle-*verfahl/verfehlte (miss-missed) might be misled by the pair befehlen-befahlen (order-ordered), and schweben-*schwoben/schwebten (float-floated) is possibly due to its resemblance to schieben-schoben (push-pushed). Interestingly, this type of error aligns with Ernestus and Baayen (2004)'s experiments with Dutch speakers: phonological similarity, rather than rule-based regularity, influences participants' judgments of the inflection of verbs.

That said, the model also displays error patterns that are not human-like, such as copying the present form or randomly removing phonemes (or letters) from it. Similar cases of non-plausible predictions were also observed at the SIGMORPHON shared task (Kodner and Khalifa, 2022), for instance forgive-*forgaved/forgave or seek-*sougk/sought. As also observed by Wiemerslage et al. (2022), this kind of model prediction contrasts with the behaviour of human speakers, who mostly resort to generating a regular past tense when a verb is unknown.

§ 5.2 PHONEME VS. GRAPHEME INPUT

Undoubtedly, using phoneme input is more principled than grapheme input when simulating human acquisition patterns. However, pronunciation information is not always available, which makes it harder to extend this kind of simulation beyond a small set of widely studied languages. Here, we investigate the usability of grapheme-based input for modeling past tense inflection. We expect German and Dutch to be a good use case for this, given their more transparent orthography compared to English (Marjou, 2021).

The results in Table 3 clearly show that switching to grapheme input for the English simulations is not principled, as it results in a slight increase of regular inflection accuracy (from 99.8/96.1/95.0% to 99.8/98.2/98.1% train/dev/test) as opposed to a large decrease of irregular inflection accuracy (from 98.1/27.8/40.5% to 89.0/11.1/28.1%). The latter effect is particularly marked, suggesting that non-transparent orthography may not be a uniform property of a language but may correlate with less regular word forms within a language. We leave this investigation to future work.

Using grapheme input in Dutch and German seems much safer (differences are overall small, with only a slight increase in almost all cases). Our observations seem to reflect the figures of Marjou (2021), who gives a much higher transparency score to Dutch and German than to English.

In sum, using graphemes to simulate human patterns of morphological acquisition is possible but should be done with caution and only in some languages. A good practice could be to first verify that the orthographic transparency of a language is high (Marjou (2021) presents results for 17 languages).
When that is not possible, grapheme-based results should at least be validated against a small-scale pronunciation dataset.

§ 6 CONCLUSIONS

In this work, we study the plausibility of using sequence-to-sequence neural networks for simulating human patterns of past tense acquisition. More specifically, we replicate the findings of Kirov and Cotterell (2018) and examine their generalizability beyond the specific case of English, using a new dataset of English/Dutch/German (ir)regular verb forms based on UniMorph (McCarthy et al., 2020).

We show that the main findings of K&C also largely hold for Dutch and German, including over-regularization errors and the oscillating (or micro U-shape) learning trajectory of individual verb forms across training epochs. At the same time, we also observe cases of non human-like errors, for instance when the model just keeps the present form unchanged or randomly removes phonemes from it. A notable difference among our studied languages concerns unseen English irregular verbs, which appeared to be much harder to inflect than the Dutch and German ones. We also observe that the orthographic transparency of a language influences and possibly confounds the model's learning performance: more transparent orthography contributes to more reliable and consistent simulation results, but in general this aspect should be seriously considered when setting up new benchmarks of morphological acquisition.

Future work could include the construction of a nonce word benchmark in Dutch and German to enable a multilingual evaluation of this task (Corkery et al., 2019), as well as an in-depth investigation of the differing levels of irregular past tense inflection difficulty in our three languages.

Kirov and Cotterell (2018) provided very promising evidence for the use of modern neural networks to model human language acquisition patterns. Our work confirms the potential of this research direction, but also raises important issues and joins recent follow-up studies (Corkery et al., 2019; Dankers et al., 2021; Kodner and Khalifa, 2022; Wiemerslage et al., 2022) that have warned against over-optimistic conclusions.
\ No newline at end of file
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..94307b00715c74f3f44ee2267e972d638998563c
--- /dev/null
+++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,781 @@

# Parser Evaluation for Analyzing Swedish 19th-20th Century Literature

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract

In this study, we aim to find a parser for accurately identifying different types of subordinate clauses, and related phenomena, in 19th-20th-century Swedish literature.
Since no test set is available for parsing data from this time period, we propose a lightweight annotation scheme for annotating a single relation of interest per sentence. We train a variety of parsers for Swedish and compare evaluations on standard modern test sets and our targeted test set. We find clear trends in which parser types perform best on the standard test set, but performance is considerably more varied on the targeted test set. We believe that our proposed annotation scheme can be useful for complementing standard evaluations, at a low annotation effort.

## 1 Introduction

Dependency parsers can be useful tools for analyzing large text materials, and as such can enable large-scale studies within many scientific disciplines. Modern parsers can achieve very high scores on standard test sets, at least for languages with large treebanks, but these test sets are often limited to only a few domains, typically of publication-level modern language such as news or Wikipedia. For more challenging text types, for instance noisy data like Twitter or historical texts, parsers typically perform considerably worse, even for high-resource languages.

Parsers are typically evaluated on a treebank that is split into training, development, and test sets. This can overestimate parser performance, since parsers are then trained on data that matches the test set in all relevant aspects, such as genre, time period, and annotation style. Furthermore, parser evaluation is typically done using metrics that give a holistic score for the full tree, such as (un)labeled attachment score. In many real-world scenarios, such as ours, we are not interested in the full tree, but in a subset of relations.

This study is part of a larger project with the overall aim to identify and explore language change in Swedish literature during the period 1800-1930. During the 19th century, the Swedish language changed in several respects. This change spans various linguistic levels and also involves lexical aspects. Overall, the changes led to a smaller difference between spoken and written Swedish, since the written language moved closer to the spoken vernacular. The goal of the project is to cover morphological, syntactic, and lexical changes. In this paper, however, we focus only on syntactic aspects. The changes in the 19th century resulted in a less complex language - not least as far as subordinate clauses and related phenomena are concerned. To enable large-scale analysis of subordinate clauses, we require a high-quality parser for our target domain, Swedish literary novels and short stories from 1800-1930. In this paper, we explore whether parsers can be evaluated for this domain without requiring a large manual annotation effort.

To evaluate a parser for a new text type and task, as in our case 19th century literature with a focus mainly on subordinate clauses, we would ideally like to have an annotated treebank for the target text type. However, this is a human annotation task that is time-consuming, and thus costly, and which requires an expert on dependency grammar. For many practical projects, this is not feasible. We propose a lightweight annotation task for our target task, which consists of annotating only one type of phenomenon per sentence, constituting a targeted test set.
We then explore whether this can be an efficient alternative to annotating full trees. We focus on four phenomena related to subordinate clauses, and annotate a small targeted test set for our target text type, which will be publicly released. For comparison, we also evaluate on standard Swedish test sets.

We compare several variants of three generations of parsers trained on different subsets of the Universal Dependencies (UD) treebanks (Nivre et al., 2020), and evaluate them on UD, both with holistic metrics and for a subset of relations of interest, as well as on our targeted test set. On the UD test sets we see clear trends: a modern BERT-based parser is better than BiLSTM- and SVM-based parsers, and it is better to train on several North Germanic languages than only on Swedish. However, on our new targeted test set, the results are more mixed and the trends less clear, which is in line with earlier work for German (Adelmann et al., 2018). We think that our targeted test set is able to give a complementary view to standard evaluations.

In Section 2 we review related work, followed by a description of our project focused on Swedish language change in Section 3. In Section 4 we describe the data and in Section 5 the parsers evaluated, including the multilingual training setup. We summarize the results in Section 6, discuss them in Section 7, and finally conclude in Section 8.

## 2 Related Work

Dependency parsers have developed continuously, from 'old school' parsers like MaltParser (Nivre et al., 2007) and MSTParser (McDonald et al., 2005), based on classical machine learning such as support vector machines, to modern neural parsers. Many of the first strong neural parsers were based on recurrent neural networks, as were most of the best parsers in the CoNLL 2017 shared task on dependency parsing (Zeman et al., 2017). Since then, models based on deep contextualized embeddings have been taking over, and most strong parsers today are based on fine-tuning contextualized models like BERT (Devlin et al., 2019) or XLM-R (Conneau et al., 2020), e.g. Machamp (van der Goot et al., 2021) and Trankit (Nguyen et al., 2021).

The standard way to evaluate dependency parsers is by calculating holistic metrics such as labeled attachment score (LAS), which measures the percentage of words that get both their head word and label correct. There are, however, examples of more detailed evaluations (e.g. McDonald and Nivre, 2007; Kulmizev et al., 2019; Salomoni, 2017), focusing on aspects such as arc and sentence lengths, non-projective dependencies, and scores for specific POS tags and dependency relations. The overall conclusion is typically that different parser types have different strengths. As far as we are aware, there are no datasets and evaluations like our proposal, focused on a single relation per sentence.

Highly relevant to our study is the work of Adelmann et al. (2018), who evaluate a set of six parsers for digital humanities research, focusing on novels and academic texts in German. Like us, they are also interested in specific relations, for instance related to speaker attribution, and not only in holistic evaluation. Unlike us, they perform a full dependency tree annotation effort for three sample texts. In addition, they do not include any neural parsers in their evaluation.
They find that several parsers do well on the holistic metrics, but that the results are considerably worse for several of the specific relations of interest, such as appositions, and that it is not always the overall strongest parser that is the best choice for a specific relation. Salomoni (2017) performed a detailed evaluation of parsing German 17th-century literature, for which he annotated two excerpts of text with full dependency annotations. Again, no neural parsers were included in the study, which found a drop compared to in-domain results, but where the relative performance of the two parsers evaluated was consistent across metrics, possibly because of the large difference in performance between them.

Swedish literary texts from different eras have been analyzed for different purposes before, requiring taggers and/or parsers. Dahllöf (2022) aims to characterize differences between dialogue and narrative in contemporary fiction, whereas Stymne et al. (2018) analyze prose rhythm in a novel from 1940. However, in none of these studies is the choice of tagger and/or parser motivated. There have also been some earlier smaller-scale studies focusing on the transition towards a more colloquial written Swedish. For instance, language development in Swedish literature during the 19th century has been explored, but only on a small scale focusing on individual authors (e.g. Lindstedt, 1922; Von Hofsten, 1935).
| Language | Treebank | Genres | Train | Test |
|---|---|---|---|---|
| Swedish | Talbanken | news, nonfiction | 67K | 20K |
| Swedish | PUD | news, wiki | - | 19K |
| Swedish | LinES-M | fiction, nonfiction, spoken | 18K | 73K |
| Norwegian | Bokmaal | blog, news, nonfiction | 244K | 30K |
| Norwegian | Nynorsk | blog, news, nonfiction | 245K | 25K |
| Norwegian | NynorskLIA | spoken | 35K | 10K |
| Danish | DDT | fiction, news, nonfiction, spoken | 80K | 10K |
| Faroese | FarPaHC | bible | 1.5K | 6.6K |
| Icelandic | Modern | news, nonfiction | 7.5K | 10K |
Table 1: Treebanks used, with information about genres (as defined in UD) and number of tokens in the training and test data. LinES-M refers to our modified version of LinES.

## 3 Language Change in 19th Century Swedish

This study is part of a larger project with the overall aim to identify and explore language change in Swedish literature during the period 1800-1930. In the history of the Swedish language, this period is characterized by modernization in the sense that the written language was influenced by the spoken vernacular. In this process of modernization, fictional prose is of particular interest, since it has been suggested that linguistic change spread from literary dialogue (Engdahl, 1962; Teleman, 2003). By investigating a corpus of literary texts, the project will not only contribute a more detailed account of language change in 19th-century Swedish but also address the question of how linguistic change increased in the community.

The modernization of the Swedish written language during the 19th century affected several linguistic aspects. As for the lexicon, it is well-known that formal function words were replaced by colloquial counterparts. Much attention has also been devoted to the loss of verbal agreement, i.e. the use of the vernacular singular variant in both singular and plural. On the syntactic level, Engdahl (1962) has shown a remarkable change in sentence length during the end of the 19th century. Engdahl's study focuses on non-fictional prose, periodicals from 1878 to 1950, but his results call for a more detailed account of syntactic complexity during the period, and hence we focus on subordinate clauses and phenomena related to them in this paper.

For this study, we have chosen to focus on three types of subordinate clauses, based on UD dependency labels, and one phenomenon related to subordinate clauses: (i) relative clauses (RELCL), (ii) cleft constructions (CLEFT),${}^{1}$ (iii) clausal complements not determined by obligatory control (CCOMP), and (iv) auxiliary drop (NO-AUX). Whereas the first three types can be used to measure syntactic complexity, auxiliary drop has been suggested to mark written style, and hence almost never occurs in spoken language (cf. Wellander, 1939). Since auxiliary drop of finite verbs is restricted to subordinate clauses in Swedish, we have included it as related to subordinate clauses. In this study, we only include auxiliary drop that occurs in clausal complements (CCOMP).

${}^{1}$ In UD, both relative clauses and cleft constructions are subtypes of ACL, clausal modifier of noun, and are denoted ACL:RELCL and ACL:CLEFT. In this paper, we will use shorter names, excluding the prefix.

## 4 Data

In this section, we describe the data used. We first describe the data from UD, including the modified version of the LinES treebank, and then the targeted dataset we constructed for this project.

### 4.1 Universal Dependencies Treebanks

We use data from Universal Dependencies (Nivre et al., 2020), version 2.11 (Zeman et al., 2022), for training our parsers and for the standard evaluation. Besides dependency annotations, UD also contains lemmas, universal and language-specific part-of-speech tags (UPOS/XPOS), and morphological features. Our main focus is on Swedish, for which there are three treebanks, Talbanken, LinES, and PUD, where PUD only contains a test set.
In addition, we use data from the related North Germanic languages: Norwegian (both variants: Bokmål and Nynorsk), Danish, Faroese, and Icelandic. The treebanks used are summarized in Table 1. The intuition behind also using related languages is twofold: first, it has been shown to improve parsers (e.g. Smith et al., 2018a); second, we believe it may make the parser more robust to non-standard Swedish, which differs in many ways from the modern Swedish of the Swedish treebanks. Written Norwegian and Danish, in particular, are very similar to Swedish, and are considered mutually intelligible.
| Relation | Example | Class |
|---|---|---|
| RELCL | Hvad hon beundrar Maurits, som kan *stå* så lugn! | Correct |
| RELCL | Men kan du säga hvar vi *äro*? | Wrong |
| NO-AUX | Jag har fått hvad du i natt *skrifvit* till mig. | Correct |
Table 2: Examples of sentences shown to the annotators, marked as either correct or wrong.

As can be seen in Table 1, the genres, according to the UD specification, of the treebanks used are mixed. To be able, at least to some extent, to investigate whether it would help to have an in-genre test set, we create a modified version, LinES-M, of the LinES treebank (Ahrenberg, 2007), which consists of three genres: literary fiction, Microsoft manuals, and European Parliament proceedings. The literary part contains a set of novels translated from English, published 1977-2017. While this is not a perfect match to our target of novels and short stories written originally in Swedish during an earlier time period, it was the closest we could get to an in-domain test set without any re-annotation. We re-split LinES by merging the data from the training and test sets, moving all literature${}^{2}$ to a new test set and all other texts to a new training set, referred to as LinES-M in Table 1.

For evaluation on the Swedish UD test sets, we report labeled attachment score (LAS). For LinES-M, we also report F1-scores for the three relations in focus for our targeted test set and for AUX, which is relevant for identifying auxiliary drop.

### 4.2 Targeted Literature Dataset

In this section, we describe the sampling and annotation of the targeted literary dataset annotated for this project, as an alternative way of evaluating the performance of parsers on specific phenomena in a specific text type. The targeted dataset will be made publicly available.

#### Sampling and Text Processing

Our target data is literary texts from 1800-1930, focusing on novels and collections of short stories. Such works have been made available by Litteraturbanken.${}^{3}$ We choose to work only with the subset of works that have been proofread after going through OCR, available in an XML format. We extracted all novels and short stories available in this format from the time period of interest. From these texts, we extracted the raw text paragraphs. For another sub-project, we had already extracted a set of novels where quotation marks are used to mark dialogue, and used the quotation marks to separate dialogue from narrative; we use this separation also in this study. This sample consists of 165 novels and collections of short stories.

The selected works were parsed early on in the project, using Swepipe and UUparser${}^{s}$ with Swepipe tags (see Section 5). From the parse trees, we extracted all sentences containing a relation of interest and marked the head word for which that relation occurred (a sketch of this extraction step is given below). For NO-AUX, we also checked that there was no outgoing AUX relation from the head word. It is not uncommon to have several instances of a single relation in a sentence, but we only marked a single occurrence per example, to make the annotation consistent between sentences. From this set, we randomly sampled 200 sentences for each relation type, except CLEFT, for which we only found 74 examples, all of which were included. Table 2 shows examples; these also contain instances of a plural verb form, äro (modern: är, 'are'), and old-fashioned spelling, skrifvit (modern: skrivit, 'written').

${}^{2}$ The literary works are in documents 2, 3, 4, 6, 7, and 8; document 1 contains Microsoft manuals and document 5 contains parliament proceedings. (Lars Ahrenberg, personal communication)

${}^{3}$ https://litteraturbanken.se/
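As referenced above, here is a minimal sketch of how such focus-word candidates could be extracted from parsed sentences. The relation names follow UD and the paper (acl:relcl, acl:cleft, ccomp, aux); the function, its simplified four-field token format (instead of full CoNLL-U), and all other details are our own illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch of extracting candidate focus words from parsed output.
FOCUS_DEPRELS = {"acl:relcl": "RELCL", "acl:cleft": "CLEFT", "ccomp": "CCOMP"}

def extract_candidates(sentence):
    """sentence: list of (token_id, form, head, deprel) tuples for one parse.

    Yields (relation_type, token_id) pairs, at most one per relation type,
    mirroring the single-occurrence-per-sentence marking described above.
    """
    seen = set()
    for tok_id, form, head, deprel in sentence:
        rel = FOCUS_DEPRELS.get(deprel)
        if rel and rel not in seen:
            seen.add(rel)
            yield rel, tok_id
        # NO-AUX: a ccomp head word with no outgoing aux dependent
        if deprel == "ccomp" and "NO-AUX" not in seen:
            has_aux = any(d == "aux" and h == tok_id
                          for _, _, h, d in sentence)
            if not has_aux:
                seen.add("NO-AUX")
                yield "NO-AUX", tok_id
```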
#### 4.2.1 Annotation

The annotation was performed by two experts on Swedish grammar, both native Swedish speakers. The annotators were given the example sentences in Excel, and for each sentence, they were to decide whether the marked head word belonged to the given type or not. For each type, 20 examples were annotated by both annotators, and the remaining examples were split between them. After the first round, there were a few disagreements in the doubly annotated sets, which were discussed by the annotators, followed by a re-annotation of all examples. The initial round of annotation was very quick, roughly 15-30 minutes per 100 examples, with somewhat more time needed for CCOMP. Table 3 shows the number of correct and wrong examples for each class. Note that the dataset is skewed towards positive examples.

---

${}^{2}$ The literary works are in documents 2, 3, 4, 6, 7, and 8; document 1 contains Microsoft manuals and document 5 contains parliament proceedings. (Lars Ahrenberg, personal communication)

${}^{3}$ https://litteraturbanken.se/

---
| Relation | Correct | Wrong |
|----------|---------|-------|
| CLEFT    | 64      | 10    |
| RELCL    | 133     | 67    |
| CCOMP    | 141     | 59    |
| NO-AUX   | 170     | 30    |
Table 3: Class distribution in our annotated dataset.

#### 4.2.2 Evaluation

We evaluate on the targeted dataset by calculating the number of times the parser assigns the correct relation to the focus word, and, for NO-AUX, that there is in addition no aux dependent. We then calculate precision and recall for each relation type. Note that this is different from standard evaluation of dependency parsers, where a full tree is evaluated. In this case, we instead evaluate a single relation of interest for each sentence.
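The scoring itself reduces to per-relation counting. The sketch below assumes each annotated example has already been reduced to a triple of relation type, the annotator's gold judgment, and whether the parser predicted the relation on the focus word (with the extra no-aux check folded into the NO-AUX prediction); the function and variable names are ours, not from the authors' implementation.

```python
from collections import defaultdict

def targeted_scores(examples):
    """Per-relation (precision, recall) over (relation, gold, predicted).

    gold: the annotators judged the marked head word a true instance.
    predicted: the parser assigned the relation to the focus word
    (and, for NO-AUX, attached no aux dependent to it).
    """
    tp = defaultdict(int)
    fp = defaultdict(int)
    fn = defaultdict(int)
    for rel, gold, predicted in examples:
        if gold and predicted:
            tp[rel] += 1
        elif predicted:
            fp[rel] += 1
        elif gold:
            fn[rel] += 1
    return {rel: (tp[rel] / (tp[rel] + fp[rel]) if tp[rel] + fp[rel] else 0.0,
                  tp[rel] / (tp[rel] + fn[rel]) if tp[rel] + fn[rel] else 0.0)
            for rel in set(tp) | set(fp) | set(fn)}

# Toy usage: one correct hit, one miss, one false alarm for RELCL.
examples = [("RELCL", True, True), ("RELCL", True, False),
            ("RELCL", False, True)]
print(targeted_scores(examples))  # {'RELCL': (0.5, 0.5)}
```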
## 5 Parsers

In order to investigate how well the different types of evaluation work, we explore three generations of parsers, with the main focus on dependency parsing. As a baseline, we use the easily accessible Swepipe with its provided model for Swedish. We also use two generations of neural parsers, UUParser and Machamp, for which we also experiment with multilingual parsing. We train each model three times with different random seeds and report average scores.

### 5.1 Swepipe

As a baseline parser, we wanted an easily accessible parser which comes with a trained parsing model and which might be used by non-experts in a digital humanities project. Our choice was the Swedish annotation pipeline Swepipe,${}^{4}$ a pre-trained pipeline covering all steps needed to analyse Swedish texts from scratch, including tokenization, tagging, and parsing. Swepipe is similar to several other systems targeted at this user group, such as the web-based Swegram,${}^{5}$ which uses the same parser and tagger (Megyesi et al., 2019).

Swepipe is pre-neural and uses efselab (Östling, 2018) for tagging and MaltParser (Nivre et al., 2007) trained on Talbanken for parsing. MaltParser is a classical transition-based parser, using a support vector machine for classification, based on a feature vector with words, POS-tags, and already built relations.

---

${}^{4}$ https://github.com/robertostling/efselab

${}^{5}$ https://cl.lingfil.uu.se/swegram/

---

### 5.2 UUParser

UUParser (de Lhoneux et al., 2017; Smith et al., 2018b) is a neural transition-based dependency parser with a BiLSTM feature extractor, based on Kiperwasser and Goldberg (2016). Word representations are fed to a BiLSTM to create contextualized word representations, which are given as input to an MLP classifying the next transition. We use an arc-hybrid transition model (Kuhlmann et al., 2011) with a swap transition and a static-dynamic oracle (de Lhoneux et al., 2017). As input word representations we use word embeddings, character-based word embeddings, UPOS-tag embeddings, and treebank embeddings, which represent the treebank a sentence comes from. All embeddings were initialized randomly at training time. We use the default UUparser settings (Smith et al., 2018b), except for adding drop-out with a rate of 0.33 for the UPOS embeddings, since the parser is trained with gold tags. At test time, we use two different sets of POS-tags, from Swepipe/efselab and from Machamp, and call these variants UUparser ${}^{s}$ and UUparser ${}^{m}$ respectively. To counteract the differing sizes of the training data, we limited the number of sentences used per treebank to 4,300 per iteration.

### 5.3 Machamp

Machamp (van der Goot et al., 2021) is a toolkit for multitask learning covering several NLP tasks, based on fine-tuning a pre-trained contextualized model like BERT (Devlin et al., 2019). In a multitask setup, each task has a separate decoder. The dependency parser is a graph-based parser using deep biaffine attention (Dozat and Manning, 2018) to score word pairs, and the Chu-Liu/Edmonds algorithm (Chu and Liu, 1965; Edmonds, 1967) to extract trees. For tagging, a greedy decoder with a softmax output layer is used.

In this work, we use Machamp in a multi-task setup, jointly learning UPOS, XPOS, and morphological feature tagging together with dependency parsing.
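To illustrate the graph-based scoring used in Machamp's parsing decoder, the following is a small numpy sketch of deep biaffine arc scoring in the spirit of Dozat and Manning; the dimensions, the random weights, and the greedy head selection at the end are toy simplifications (Machamp extracts a well-formed tree with Chu-Liu/Edmonds rather than a greedy argmax).

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 6, 8, 4            # tokens, encoder dim, biaffine dim (toy sizes)
H = rng.normal(size=(n, d))  # contextualized token vectors from the encoder

# Separate projections for a token acting as dependent vs. as head.
W_dep = rng.normal(size=(d, k))
W_head = rng.normal(size=(d, k))
D = np.tanh(H @ W_dep)       # (n, k) dependent representations
E = np.tanh(H @ W_head)      # (n, k) head representations

# Biaffine scoring: S[i, j] = D[i] @ U @ E[j] + E[j] . b
U = rng.normal(size=(k, k))
b = rng.normal(size=(k,))
S = D @ U @ E.T + E @ b      # (n, n) arc score matrix

heads = S.argmax(axis=1)     # greedy head per token; a real parser would
print(heads)                 # run Chu-Liu/Edmonds on S for a valid tree
```

The head-only bias term (E @ b) lets the scorer express that some words are generally good heads, independently of the dependent under consideration.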
| Group   | Included treebanks/languages |
|---------|------------------------------|
| Talbank | Swedish-Talbanken            |
| Swedish | Talbank + Swedish-LinES-M    |
| SweNor  | Swedish + Norwegian (×3)     |
| Scand   | SweNor + Danish              |
| NorthG  | Scand + Faroese + Icelandic  |
Table 4: Groups of languages/treebanks used for multilingual training.

We experiment with two sets of language models: multilingual BERT (mBERT; Devlin et al., 2019),${}^{6}$ trained on 104 languages including all languages used in our study except Faroese, and the Swedish model KB-BERT (Malmsten et al., 2020), trained only on Swedish. We will call these systems Machamp ${}^{m}$ and Machamp ${}^{k}$ respectively. For both models, we used the cased version. KB-BERT has been shown to improve Swedish named entity recognition and POS-tagging (Malmsten et al., 2020), but as far as we are aware, it has not previously been used in multilingual dependency parsing models. We use the default parameters of Machamp. To counteract the differing sizes of the training data, we applied sampling smoothing set to 0.5.

### 5.4 Multilingual Training

For UUParser and Machamp, we explore multilingual training. We limit ourselves to the North Germanic languages, all relatively closely related to Swedish. We train two Swedish models, one on Talbanken only, to be comparable with Swepipe, and one also including LinES-M. In addition, we train three models with different subsets of the other North Germanic languages. For our multilingual models, we first combine Swedish with Norwegian, which has three treebanks covering both variants of Norwegian. We then add Danish, to train a Scandinavian model. The reason for adding Norwegian first, despite the fact that Danish is considered a closer relative of Swedish, is the availability of more Norwegian data, with variability across language variants. Our final model, NorthG, also adds Faroese and Icelandic, which are more distant from Swedish and not mutually intelligible with it. The language groups are summarized in Table 4.

## 6 Results

Tables 5 and 6 show results from the standard and targeted evaluations for Swepipe, UUparser ${}^{m}$ with Machamp ${}^{k}$ POS-tags, and Machamp ${}^{k}$ trained with KB-BERT. In all tables, we mark the three best results for each metric in bold.

Table 5 shows results on the UD test sets. We see no obvious differences between LAS on the in-genre LinES-M and the other two Swedish test sets, indicating that time period might play a bigger role than genre in our scenario. Swepipe has overall the lowest scores, followed by UUparser ${}^{m}$ and then Machamp ${}^{k}$. For the two Swedish models, the differences between using only Talbanken and adding the small LinES-M training set are typically small, but sometimes with a positive effect for UUparser ${}^{m}$ and a negative effect for Machamp ${}^{k}$. Adding Norwegian leads to improvements in nearly all scores, often quite substantial, whereas adding further languages has a smaller impact. The difference between parsers varies for the different relation types. Swepipe does not find any CLEFTs, and falls behind UUparser ${}^{m}$ on all other relation types, especially for AUX. Machamp ${}^{k}$ improves considerably over UUparser ${}^{m}$ for all explored relations except AUX, where both neural parsers perform well, possibly since they both use the POS-tags of Machamp ${}^{k}$.

The results in Table 6 for our targeted test set show a partially different picture. First, we note that Swepipe has a very high recall for all relation types except CLEFT, which it never predicts.
We think this is mainly an artifact of the sampling procedure for this test set: the annotated sentences were sampled from Swepipe and UUparser ${}^{s}$, with Swepipe POS-tags, which means that they were mostly predicted as correct by Swepipe. The other parsers do not have this advantage, and thus have a lower recall, which we believe is more predictive of real performance. Swepipe has considerably lower precision than the other parsers for all relation types. We believe that the evaluation should still be fair in comparing UUparser ${}^{m}$ and Machamp ${}^{k}$, from which no samples were taken. Compared to the standard evaluation, where Machamp ${}^{k}$ was clearly better than UUparser ${}^{m}$, we now see a more mixed picture, where there is no clear overall advantage of Machamp ${}^{k}$ over UUparser ${}^{m}$, and the results are mixed across relation types and precision/recall. The trends between training languages are also less clear, with some combinations standing out in performance for some relation types. Machamp ${}^{k}$ trained with Scand and NorthG has a considerably higher recall on RELCL than the other models, with only a small drop in precision.

---

${}^{6}$ https://github.com/google-research/bert/blob/master/multilingual.md

---
| Model | LAS: LinES-M | LAS: TB | LAS: PUD | F1: CLEFT | F1: RELCL | F1: CCOMP | F1: AUX |
|-------|--------------|---------|----------|-----------|-----------|-----------|---------|
| Swepipe-Talbank | 71.75 | 79.69 | 78.82 | - | 61.31 | 54.98 | 88.45 |
| UUparser ${}^{m}$ -Talbank | 72.10 | 83.75 | 76.66 | 26.82 | 64.67 | 59.62 | 93.99 |
| UUparser ${}^{m}$ -Swedish | 75.51 | 83.76 | 77.50 | 29.12 | 67.37 | 61.65 | 94.21 |
| UUparser ${}^{m}$ -NorSwe | 79.69 | 85.60 | 81.50 | 39.92 | 74.34 | 66.79 | 94.35 |
| UUparser ${}^{m}$ -Scand | 79.74 | 85.43 | 81.34 | 41.74 | 73.03 | 64.93 | 94.20 |
| UUparser ${}^{m}$ -NorthG | 79.33 | 85.35 | 81.27 | 41.71 | 72.82 | 64.70 | 94.27 |
| Machamp ${}^{k}$ -Talbank | 80.54 | 92.24 | 86.05 | 56.73 | 79.07 | 74.59 | 95.44 |
| Machamp ${}^{k}$ -Swedish | 80.26 | 90.72 | 86.83 | 49.67 | 75.84 | 71.29 | 93.94 |
| Machamp ${}^{k}$ -NorSwe | 83.13 | 91.63 | 86.79 | 55.42 | 81.29 | 75.32 | 95.29 |
| Machamp ${}^{k}$ -Scand | 83.16 | 92.31 | 87.21 | 55.54 | 81.21 | 74.27 | 95.97 |
| Machamp ${}^{k}$ -NorthG | 83.03 | 92.35 | 87.17 | 56.00 | 82.27 | 74.78 | 95.85 |
Table 5: Results on standard Swedish UD test sets. LAS for all three Swedish test sets, and F1-scores for four relations of interest for LinES-M.
| Model | P: CLEFT | P: RELCL | P: CCOMP | P: NO-AUX | R: CLEFT | R: RELCL | R: CCOMP | R: NO-AUX |
|-------|----------|----------|----------|-----------|----------|----------|----------|-----------|
| Swepipe-Talbank | - | 66.33 | 70.41 | 84.62 | 0.00 | 99.25 | 98.57 | 97.06 |
| UUparser ${}^{m}$ -Talbank | 92.46 | 93.32 | 94.11 | 98.14 | 50.35 | 82.37 | 63.97 | 51.44 |
| UUparser ${}^{m}$ -Swedish | 92.49 | 93.45 | 95.84 | 97.60 | 69.79 | 81.45 | 65.95 | 50.85 |
| UUparser ${}^{m}$ -NorSwe | 92.12 | 94.65 | 97.39 | 98.30 | 84.55 | 81.20 | 70.87 | 56.21 |
| UUparser ${}^{m}$ -Scand | 94.64 | 95.69 | 96.73 | 98.72 | 84.20 | 79.62 | 70.48 | 61.05 |
| UUparser ${}^{m}$ -NorthG | 93.31 | 95.55 | 96.06 | 99.05 | 75.00 | 79.37 | 74.13 | 61.57 |
| Machamp ${}^{k}$ -Talbank | 94.12 | 95.16 | 94.63 | 98.52 | 59.90 | 83.46 | 75.48 | 65.69 |
| Machamp ${}^{k}$ -Swedish | 94.92 | 96.19 | 95.09 | 98.81 | 53.12 | 82.21 | 73.81 | 65.10 |
| Machamp ${}^{k}$ -NorSwe | 95.38 | 96.71 | 94.77 | 99.13 | 72.92 | 79.70 | 73.33 | 67.25 |
| Machamp ${}^{k}$ -Scand | 96.61 | 95.11 | 94.29 | 99.01 | 59.38 | 87.47 | 66.90 | 58.82 |
| Machamp ${}^{k}$ -NorthG | 95.38 | 93.83 | 93.46 | 99.00 | 64.06 | 87.72 | 68.10 | 58.04 |
Table 6: Precision (P) and recall (R) for our targeted test set.

On CCOMP and NO-AUX, on the other hand, these two models instead have a low recall, without gaining much on precision. We do not see this pattern for UUparser ${}^{m}$, where the Scand model is overall strong.

In Table 7 we show a summary of results for both variants of UUparser and Machamp, showing only precision for the targeted test set, since recall is biased towards Swepipe and UUparser ${}^{s}$ due to the sampling.${}^{7}$ We can see that UUparser ${}^{s}$ does not consistently improve on LAS over Swepipe when trained on the same Talbanken data, but that adding the Scandinavian treebanks improves the results considerably, both for the UD evaluations and on the targeted test set. When we compare the two variants of UUparser and Machamp, we see that UUparser ${}^{m}$ and Machamp ${}^{k}$ beat their counterparts consistently on the UD evaluation, and in most cases on the targeted test set. We also see that training on Scand is better than training on Talbanken in the majority of cases, both for UD and for precision on the targeted test set; however, from Table 6, we know that Scand is sometimes not as strong on recall.

## 7 Discussion

An important question is whether the parser performance on our target task is good enough to use for our study of change in the Swedish written language. Overall, both Machamp and UUparser have good precision for all our relations of interest, always scoring above 90, and reaching scores above 96 for some parsers for each relation type. The recall, however, is considerably lower. This means that the instances of each relation type the parser finds are mostly good, but it does miss a substantial part of the relevant instances. The recall is highest for RELCL, where it is well above 80 for several of the Machamp models, with UUparser also above 80. This approaches a level that is usable for our end project of finding syntactic features in 19th-20th-century literature and tracking them over time. Other relation types show a more mixed performance, such as CLEFT, for which UUparser ${}^{m}$ trained on NorSwe and Scand performs very well, with a recall of over 84, but where other models perform considerably worse.

---

${}^{7}$ To save space, we only show results for two training language groups. The other groups exhibit largely the same trends.

---
| Model | LAS: LinES-M | LAS: TB | LAS: PUD | F1: CLEFT | F1: RELCL | F1: CCOMP | F1: AUX | P: CLEFT | P: RELCL | P: CCOMP | P: NO-AUX |
|-------|--------------|---------|----------|-----------|-----------|-----------|---------|----------|----------|----------|-----------|
| Swepipe-Talbank | 71.75 | 79.69 | 78.82 | - | 61.31 | 54.98 | 88.45 | - | 79.52 | 82.14 | 90.41 |
| UUparser ${}^{s}$ -Talbank | 70.80 | 82.35 | 75.78 | 26.08 | 63.01 | 58.39 | 91.31 | 92.80 | 92.52 | 93.05 | 96.50 |
| UUparser ${}^{s}$ -Scand | 77.63 | 83.39 | 80.25 | 30.77 | 70.55 | 62.22 | 90.82 | 93.86 | 94.07 | 94.66 | 97.95 |
| UUparser ${}^{m}$ -Talbank | 72.10 | 83.75 | 76.66 | 26.82 | 64.67 | 59.62 | 93.99 | 92.46 | 93.32 | 94.11 | 98.14 |
| UUparser ${}^{m}$ -Scand | 79.74 | 85.43 | 81.34 | 41.74 | 73.03 | 64.93 | 94.20 | 94.64 | 95.69 | 96.73 | 98.72 |
| Machamp ${}^{m}$ -Talbank | 77.20 | 89.35 | 84.21 | 38.47 | 72.87 | 69.09 | 92.91 | 92.94 | 96.13 | 93.00 | 98.23 |
| Machamp ${}^{m}$ -Scand | 80.13 | 89.50 | 85.79 | 43.09 | 77.67 | 71.18 | 93.49 | 93.41 | 96.98 | 92.47 | 99.08 |
| Machamp ${}^{k}$ -Talbank | 80.54 | 92.24 | 86.05 | 56.73 | 79.07 | 74.59 | 95.44 | 94.12 | 95.16 | 94.63 | 98.52 |
| Machamp ${}^{k}$ -Scand | 83.16 | 92.31 | 87.21 | 55.54 | 81.21 | 74.27 | 95.97 | 96.61 | 95.11 | 94.29 | 99.01 |
Table 7: Comparison of parser variants on standard test sets and our targeted test set: LAS on the Swedish UD test sets, F1 for relations of interest on LinES-M, and precision (P) on the targeted literature test set.

The recall of CCOMP, and especially of NO-AUX, is lower, and we would need to improve parser performance for those relation types, possibly by using domain adaptation techniques, before they would reach a useful level. The varying performance of parsers for different relation types is in line with the results for German by Adelmann et al. (2018), who recommend choosing different parsers for different end goals.

On the standard evaluation, Machamp is clearly overall better than UUparser, training on Scand is better than training only on Swedish, KB-BERT is better than mBERT for Machamp, and UUparser is better with Machamp tags than with Swepipe tags. For our targeted test sets, however, we see fewer clear trends, and there is much more variation among the systems. Machamp ${}^{k}$ and UUparser ${}^{m}$ tend to perform better than their counterparts, and the multilingual models may have a small advantage over the Swedish-only models. Swepipe clearly seems to fall behind the other parsers on precision, whereas its high recall can be explained by the sampling procedure. A side effect of our study is that we have found that Machamp ${}^{k}$ trained on Scand or NorthG is a very strong parser for modern Swedish, as measured by the UD test sets.

Our targeted test set does suffer from an issue with sampling from only two parsers, which affects its recall mainly for Swepipe, but also for UUparser ${}^{s}$. We believe UUparser ${}^{m}$ is less affected, since it relies on a different set of POS-tags. The dataset is also relatively small, especially for the CLEFT relation. However, we think it still contributes to showing that when selecting a parser for a particular target task and text type, we cannot rely solely on evaluation scores on standard test sets, as also shown by Adelmann et al. (2018). Even if we focus on the F1-score for the relations of interest, rather than on the full tree, we see no clear similarity between the parser ranking there and the evaluation of the same relation types in our targeted test set. To further investigate whether this type of test set can indeed be useful, we would need to perform further analysis. It would be interesting to learn more about where the main improvements shown in the UD evaluation for a parser like Machamp ${}^{k}$ actually occur. We also think it would be useful to reconsider the sampling for the test set, specifically whether it is worth the effort to also annotate some raw text, in order to find instances not identified by any of our parsers. Another issue that we have not yet explored is whether parsing performance varies over the time period in question.

## 8 Conclusion

We describe a study of Swedish dependency parsers with the goal of tracking changes in the use of certain types of subordinate clauses and related phenomena in Swedish literature from 1800-1930. Since standard test sets do not cover this time period or genre, and we did not have the resources to perform a full annotation of dependency trees, we propose a smaller-scale annotation task, focusing on single relation types. We evaluated a set of parsers on UD and on our targeted test set.
While there was a clear and relatively consistent order between the parsers on the UD evaluation, the performance was more mixed on our targeted test set, without a clear overall best parser across relation types. We believe that our proposed annotation scheme can be useful in complementing standard evaluations, with a low annotation effort, but that more analysis is needed.

## References

Benedikt Adelmann, Wolfgang Menzel, Melanie Andresen, and Heike Zinsmeister. 2018. Evaluation of out-of-domain dependency parsing for its application in a digital humanities project. In Proceedings of the 14th Conference on Natural Language Processing (KONVENS 2018), pages 121-135, Vienna, Austria.

Lars Ahrenberg. 2007. LinES: An English-Swedish parallel treebank. In Proceedings of the 16th Nordic Conference of Computational Linguistics (NODALIDA 2007), pages 270-273, Tartu, Estonia. University of Tartu, Estonia.

Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On the shortest arborescence of a directed graph. Scientia Sinica, 14:1396-1400.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.

Mats Dahllöf. 2022. Quotation and narration in contemporary popular fiction in Swedish - stylometric explorations. In Proceedings of the 6th Digital Humanities in the Nordic and Baltic Countries Conference (DHNB 2022), pages 203-211, Uppsala, Sweden.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 484-490, Melbourne, Australia. Association for Computational Linguistics.

Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards B, 71(4):233-240.

Sven Engdahl. 1962. Studier i nusvensk sakprosa. Några utvecklingslinjer. Skrifter utgivna av Institutionen för nordiska språk vid Uppsala universitet, Uppsala.

Rob van der Goot, Ahmet Üstün, Alan Ramponi, Ibrahim Sharaf, and Barbara Plank. 2021. Massive choice, ample tasks (MaChAmp): A toolkit for multi-task learning in NLP. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 176-197, Online. Association for Computational Linguistics.

Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.

Marco Kuhlmann, Carlos Gómez-Rodríguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 673-682, Portland, Oregon, USA. Association for Computational Linguistics.
Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, and Joakim Nivre. 2019. Deep contextualized word embeddings in transition-based and graph-based dependency parsing - a tale of two parsers revisited. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2755-2768, Hong Kong, China. Association for Computational Linguistics.

Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2017. Arc-hybrid non-projective dependency parsing with a static-dynamic oracle. In Proceedings of the 15th International Conference on Parsing Technologies, pages 99-104, Pisa, Italy. Association for Computational Linguistics.

Torvald Lindstedt. 1922. Studier över stilen i Gösta Berlings saga. Nysvenska studier, 2:31-77.

Martin Malmsten, Love Börjeson, and Chris Haffenden. 2020. Playing with words at the National Library of Sweden - making a Swedish BERT. CoRR, abs/2007.01658.

Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 122-131, Prague, Czech Republic. Association for Computational Linguistics.

Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajič. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523-530, Vancouver, British Columbia, Canada. Association for Computational Linguistics.

Beáta Megyesi, Anne Palmér, and Jesper Näsman. 2019. SWEGRAM - Annotering och analys av svenska texter. Technical report, Department of Linguistics and Philology, Uppsala University.

Minh Van Nguyen, Viet Dac Lai, Amir Pouran Ben Veyseh, and Thien Huu Nguyen. 2021. Trankit: A light-weight transformer-based toolkit for multilingual natural language processing. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 80-90, Online. Association for Computational Linguistics.

Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, Gülşen Eryiğit, Sandra Kübler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.

Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France. European Language Resources Association.

Robert Östling. 2018. Part of speech tagging: Shallow or deep learning? Northern European Journal of Language Technology, 5:1-15.
Alessio Salomoni. 2017. Dependency parsing on late-18th-century German aesthetic writings: A preliminary inquiry into Schiller and F. Schlegel. In Proceedings of the 2nd International Conference on Digital Access to Textual Cultural Heritage, DATeCH 2017, pages 47-52, New York, NY, USA. Association for Computing Machinery.

Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018a. 82 treebanks, 34 models: Universal Dependency parsing with multi-treebank models. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 113-123, Brussels, Belgium. Association for Computational Linguistics.

Aaron Smith, Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2018b. An investigation of the interactions between pre-trained word embeddings, character models and POS tags in dependency parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2711-2720, Brussels, Belgium. Association for Computational Linguistics.

Sara Stymne, Johan Svedjedal, and Carin Östman. 2018. Språklig rytm i skönlitterär prosa. En fallstudie i Karin Boyes Kallocain. Samlaren. Tidskrift för forskning om svensk och annan nordisk litteratur, 139:128-161.

Ulf Teleman. 2003. Tradis och funkis: svensk språkvård och språkpolitik efter 1800, 1st edition. Norstedts ordbok, Stockholm, Sweden.

Louise Von Hofsten. 1935. Några stildrag hos Selma Lagerlöf med utgångspunkt från Charlotte Löwenskiöld. Nysvenska studier, 15:150-183.

Erik Wellander. 1939. Riktig svenska: en handledning i svenska språkets vård. Norstedt, Stockholm, Sweden.

Daniel Zeman, Joakim Nivre, Mitchell Abrams, et al. 2022. Universal Dependencies 2.11. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.

Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajič jr., Jaroslava Hlaváčová, Václava Kettnerová, Zdeňka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-19, Vancouver, Canada. Association for Computational Linguistics.
1063 Association for Computational Linguistics. 1064 + +1065 + +1066 + +1067 + +1068 + +1069 + +1070 + +1071 + +1072 1073 1074 + +1075 1076 1077 1078 1079 \ No newline at end of file diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_tex/Initial_manuscript.tex b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..5b61d1a801f8a81b0ddd0f031551eac5cad07983 --- /dev/null +++ b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/wbQd_esbJC/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,728 @@ +000 054 + +§ PARSER EVALUATION FOR ANALYZING SWEDISH 19TH-20TH CENTURY LITERATURE + +001 055 + +002 056 + +003 Anonymous Author + +Affiliation / Address line 1 + +006 Affiliation / Address line 2 Affiliation / Address line 3 + +email@domain + +Anonymouser Author + +Affiliation / Address line 1 + +Affiliation / Address line 2 + +Affiliation / Address line 3 + +email@domain + +Anonymousest Author 057 + +Affiliation / Address line 1 058 + +Affiliation / Address line 2 059 060 Affiliation / Address line 3 061 email@domain 062 + +063 + +§ ABSTRACT + +013 In this study, we aim to find a parser for accurately identifying different types of subordinate clauses, and related phenomena, + +016 in 19th-20th-century Swedish literature. Since no test set is available for parsing + +018 data from this time period, we propose a lightweight annotation scheme for annotating a single relation of interest per sen- + +021 tence. We train a variety of parsers for Swedish and compare evaluations on stan- + +023 dard modern test sets and our targeted test set. We find clear trends in which parser + +026 types perform best on the standard test set, but that performance is considerably more + +028 varied on the targeted test set. We believe that our proposed annotation scheme can be useful for complementing standard + +031 evaluations, with a low annotation effort. + +033 + +§ 1 INTRODUCTION + +Dependency parsers can be useful tools for analyzing large text materials, and as such can en- + +036 able large-scale studies within many scientific disciplines. Modern parsers can achieve very high + +038 scores on standard test sets, at least for languages with large treebanks, but these test sets are often limited to only a few domains, and typically on publication-level modern language, such as news or Wikipedia. For more challenging text types, for instance, noisy data like Twitter or historical texts, parsers typically perform considerably worse even for high-resource languages. + +Parsers are typically evaluated on a treebank that is split into training, development, and test sets. This can overestimate the parser performance, since parsers are then trained on data that matches its test set in all relevant aspects, such as genre, time period, and annotation style. Fur- + +053 thermore, parser evaluation is typically done using + +metrics that give a holistic score for the full tree, 065 such as (un)labeled attachment score. In many + +real-world scenarios, such as ours, we are not in- 067 terested in the full tree, but in a subset of relations. + +This study is part of a larger project with 070 the overall aim to identify and explore language + +change in Swedish literature during the period 072 1800-1930. During the 19th century, the Swedish language changed in several aspects. This change + +includes various linguistic levels and involve also 075 lexical aspects. 
Overall, the changes led to a + +smaller difference between spoken and written 077 Swedish since the written language moved closer to the spoken vernacular. The goal of the project + +is to cover morphological, syntactical, and lexical 080 changes. In this paper, however, we focus only on + +syntactic aspects. The changes in the 19th century 082 resulted in a less complex language - not least as far as subordinate clauses and related phenom- + +ena are concerned. To enable large-scale analysis 085 of subordinate clauses, we require a high-quality + +parser for our target domain, Swedish literary nov- 087 els and short stories from 1800-1930. In this paper, we explore whether parsers can be evaluated + +for this domain, without requiring a large manual 090 annotation effort. + +To evaluate a parser for a new text type and task, 092 as in our case 19th century literature with a focus mainly on subordinate clauses, we would ideally like to have an annotated treebank for the target + +text type. However, this is a human annotation 097 task that is time-consuming, and thus costly, and which requires an expert on dependency grammar. For many practical projects, this is not feasible. We propose a lightweight annotation task for our target task, which consists of only annotating one type of phenomenon per sentence, constituting a targeted test set. We then explore whether this could be an efficient option to annotating full trees. The focus is on four phenomena related to subor- + +dinate clauses, and annotate a small targeted test 107 set for our target text type, which will be publicly released. For comparison, we also evaluate on standard Swedish test sets. + +We compare several variants of three generations of parsers trained on different subsets of the Universal Dependencies (UD) treebanks (Nivre et al., 2020), and evaluate them on UD, both with holistic metrics and for a subset of relations of interest, as well as on our targeted test set. On the UD test sets we see clear trends that a modern BERT-based parser is better than BiLSTM- and SVM-based parsers, and that it is better to train on several North Germanic languages than only on Swedish. However, on our new targeted test set, the results are more mixed, and we see less clear trends, which is in line with earlier work for German (Adelmann et al., 2018). We think that our targeted test set is able to give a complementary view to standard evaluations. + +In Section 2 we review related work, followed by a description of our project focused on Swedish language change in Section 3. In Section 4 we describe the data and in Section 5 we describe the parsers evaluated, including the multilingual training setup. We summarize the results in Section 6, discuss them in Section 7, and finally we conclude in Section 8. + +§ 2 RELATED WORK + +Dependency parsers have continuously developed, from 'old school' parsers like MaltParser (Nivre et al., 2007) and MSTparser (McDonald et al., 2005) based on classical machine learning, like support vector machines, to modern neural parsers. Many of the first strong neural parsers were based on recurrent neural networks, as most of the best parsers in the CoNLL 2017 shared task on dependency parsing (Zeman et al., 2017). Next, models based on deep contextualized em-beddings have been taking over, and most strong parsers today are based on fine-tuning contextu-alized models like BERT (Devlin et al., 2019) or XLM-R (Conneau et al., 2020), e.g. Machamp (van der Goot et al., 2021) and Trankit (Nguyen et al., 2021). 
+ +The standard way to evaluate dependency parsers is by calculating holistic metrics such as labeled attachment score (LAS), which measures the percentage of words which gets both their head word and label correct. There are, however, examples of more detailed evaluations (e.g. McDonald + +and Nivre, 2007; Kulmizev et al., 2019; Salomoni, 162 + +2017), focusing on aspects such as arc and sen- 163 tence lengths, non-projective dependencies, and scores for specific POS-tags and dependency relations. The overall conclusion is typically that different parser types have different strengths. As far + +as we are aware, there are no datasets and evalua- 168 tions like our proposal, focused on a single relation per sentence. + +Highly relevant to our study is the work of Adelmann et al. (2018), who evaluate a set of six parsers for digital humanities research, focusing on novels and academic texts for German. Like us, they are also interested in specific relations, for in- + +stance, related to speaker attribution, and not only 178 in holistic evaluation. Unlike us, they perform + +a full dependency tree annotation effort for three 180 sample texts. In addition, they do not include any neural parsers in their evaluation. They find that several parsers do well on the holistic metrics, but that the results are considerably worse for several of the specific relations of interest, such as appositions, and that it is not always the overall strongest parser that is the best choice for a specific relation. Salomoni (2017) performed a detailed evaluation on parsing German 17th-century literature, for which he annotated two excerpts of text with full dependency annotations. Again, no neural parsers were included in the study, which found a drop compared to in-domain results, but where the relative performance of the two parsers evaluated was consistent on different metrics, possibly because of the large difference in performance between them. + +Swedish literary texts from different eras have 200 been analyzed for different purposes before, requiring taggers and/or parsers. Dahllöf (2022) aims to characterize differences between dialogue and narrative in contemporary fiction, whereas (Stymne et al., 2018) analyze prose rhythm in a novel from 1940. However, in none of these studies, the choice of tagger and/or parser is motivated. There have also been some earlier smaller-scale studies focusing on the transition towards a more colloquial written Swedish. For instance, language development in Swedish literature during the 19th century has been explored, but only on a small scale focusing on individual authors (e.g. + +Lindstedt, 1922; Von Hofsten, 1935). 215 + +216 270 + +max width= + +Language Treebank Genres Train Test + +1-5 +3*Swedish Talbanken news, nonfiction 67K 20K + +2-5 + PUD news, wiki - 19K + +2-5 + LinES-M fiction, nonfiction, spoken 18K 73K + +1-5 +3*Norwegian Bokmaal blog, news, nonfiction 244K 30K + +2-5 + Nynorsk blog, news, nonfiction 245K 25K + +2-5 + NynorskLIA spoken 35K 10K + +1-5 +Danish DDT fiction, news, nonfiction, spoken 80K 10K + +1-5 +Faroese FarPaHC bible 1.5K 6.6K + +1-5 +Icelandic Modern news, nonfiction 7.5K 10K + +1-5 + +Table 1: Treebanks used, with info about genres (as defined in UD) and number of tokens in test and training data. LinES-M refers to our modified version of LinES. 
+ +277 + +217 271 + +218 272 + +219 273 + +220 274 + +221 275 + +222 276 + +278 + +227 + +§ 3 LANGUAGE CHANGE IN 19TH CENTURY SWEDISH + +229 + +This study is part of a larger project with the over- + +232 all aim to identify and explore language change in Swedish literature during the period 1800-1930. + +234 In the history of the Swedish language, this period is characterized by modernization in the sense that the written language was influenced by the spoken vernacular. In this process of modernization, fictional prose is of certain interest since it + +239 has been suggested that linguistic change spread from literary dialogue (Engdahl, 1962; Teleman, 2003). By investigating a corpus of literary texts the project will not only contribute with a more detailed account of language change in 19th-century Swedish but also address the question of how linguistic change increased in the community. + +The modernization of the Swedish written language during the 19th century affected several lin- + +249 guistic aspects. As for the lexicon, it is wellknown that formal functions words were replaced by colloquial counterparts. Much attention has also been devoted to the loss of verbal agreement, i.e. the use of the vernacular singular variant in + +254 both singular and plural. On the syntactic level, Engdahl (1962) has shown a remarkable change in sentence length during the end of the 19th century. Engdahl's study focuses on non-fictional prose, periodicals from 1878 to 1950, but his re- + +259 sults call for a more detailed account of syntactic complexity during the period, and hence we will focus on subordinate clauses and phenomena related to them in this paper. + +For this study, we have chosen to focus on three types of subordinate clauses, based on UD dependency labels, and one phenomenon related to subordinate clauses: (i) relative clauses (RELCL), (ii) cleft constructions (CLEFT),[1 (iii) clausal + +269 + +281 complements not determined by obligatory con- + +trol (CCOMP), and (iv) auxiliary drop (NO-AUX). 283 Whereas the first three types can be used in order to measure syntactic complexity, auxiliary drop + +has been suggested to mark written style, and 286 hence almost never occur in spoken language (cf. + +Wellander, 1939). Since auxiliary drop of fi- 288 nite verbs is restricted to subordinate clauses in Swedish, we have included it as related to sub- + +ordinate clauses. In this study, we only include 291 + +auxiliary drop that occurs in clausal complements 293 CCOMP. + +§ 4 DATA + +296 + +In this section, we will describe the data used. We + +will first describe the data from UD, including the 298 modified version of the LinES treebank, and then describe the targeted dataset we constructed for + +this project 301 + +§ 4.1 UNIVERSAL DEPENDENCIES TREEBANKS + +303 + +We use data from Universal Dependencies Nivre + +et al. (2020) version 2.11 (Zeman et al., 2022) for 306 training our parsers and for the standard evalua- + +tion. Besides dependency annotations, UD also 308 contains lemmas, universal and language-specific part-of-speech tags (UPOS/XPOS), and morphological features. Our main focus is on Swedish, for which there are three treebanks, Talbanken, + +LinES, and PUD, where PUD only contains a test 313 set. In addition, we use data from related north Germanic languages: Norwegian (both variants: Bokmål and Nynorsk), Danish, Faroese, and Icelandic. The treebanks used are summarized in Ta- + +ble [1]. 
The intuition behind also using related lan- 318 guages is twofold, first, it has been shown to improve parsers (e.g. Smith et al., 2018a), second, + +323 + +subtypes of ACL, clausal modifier of noun, and are denoted ACL:RELCL and ACL:CLEFT. In this paper, we will use shorter names, excluding the prefix. + +${}^{1}$ In UD, both relative clauses and cleft constructions are + +324 378 + +max width= + +Relation Example Class + +1-3 +RELCL Hvad hon beundrar Maurits, som kan *stâ* så lugn ! Correct + +1-3 +RELCL Men kan du säga hvar vi *äro* ? False + +1-3 +NO-AUX Jag har fätt hvad du i natt *skrifvit* till mig . Correct + +1-3 + +Table 2: Examples of sentences shown to the annotators, marked as either correct or wrong. + +380 + +325 379 + +381 + +382 + +383 + +330 we believe it may make the parser more robust to non-standard Swedish, which has many differences from the modern Swedish of the Swedish treebanks. Written Norwegian and Danish, in particular, are very similar to Swedish, and are considered mutually intelligible. + +As can be seen in Table 1, the genres, according to the UD specification, of the treebanks used are mixed. To be able, to at least some extent, investigate whether it would help to have an in-genre test set, we create a modified version, LinES-M, of the LinES treebank (Ahrenberg, 2007) which consists of three genres: literary fiction, Microsoft manuals, and European parliament proceedings. The literary part contains a set of novels translated from English, published 1977-2017. While this is not a perfect match to our target of novels and short stories written originally in Swedish during an earlier time period, this was the closest we could get to an in-domain test set, without any re-annotations. We re-split LinES by merging the data from the training and test sets, and moving all literature [2] to a new test set, and all other texts to a new training set, referred to as LinES-M in Table 1. + +For evaluation on the Swedish UD test sets, we report labeled attachment score (LAS). For LinES-M, we also report F1-scores for the three relations in focus for our targeted test set and AUX, which is relevant to identify auxiliary drop. + +§ 4.2 TARGETED LITERATURE DATASET + +In this section, we will describe the sampling and annotation of the targeted literary dataset annotated for this project as an alternative way of evaluating the performance of parsers on specific phenomena in a specific text type. The targeted dataset will be made publicly available. + +§ SAMPLING AND TEXT PROCESSING + +Our target data is literary texts from 1800-1930, focusing on novels and collections of short stories. Such works have been made available by + +Litteraturbanken. ${}^{3}$ We choose to work only with 384 + +the subset of works that have been proofread after 385 386 going through OCR, available in an XML format. We extracted all novels and short stories available + +in this format from the time period of interest. 389 From these texts, we extracted the raw text para- + +graphs. For another sub-project, we had already 391 extracted a set of novels where quotations are used to mark dialog, and used quotation marks to sep- + +arate dialogue and narrative, which we use also in 394 this study. This sample consists of 165 novels and + +collections of short stories. 396 + +The selected works were parsed early on in the project, using Swepipe and UUparser ${}^{s}$ with + +Swepipe tags (see 5). 
From the parse trees, we 399 extracted all sentences containing a relation of interest and marked the head word for which that relation occurred. For NO-AUX, we also checked that there was no outgoing AUX relation from the head word. It is not uncommon to have several instances of a single relation in a sentence, but we only marked a single occurrence per example, to make the annotation consistent between sentences. From this set, we randomly sampled 200 sentences for each relation type, except CLEFT, for which we only found 74 examples, which were all included. Table 2 shows examples, also containing examples of plural verb forms äro (modern: är, 'are') and old-fashioned spelling 'skrifvit' (modern: skrivit, 'written'). + +§ 4.2.1 ANNOTATION + +416 + +The annotation was performed by two experts on Swedish grammar, both native Swedish speakers. The annotators were given the example sentences in Excel, and for each sentence, they were to decide whether the marked head word belonged to the given type or not. For each type, 20 examples were annotated by both annotators, and the remaining examples were split between them. Af- + +ter the first round, there were a few disagreements 426 in the doubly annotated sets, which were discussed by the annotators, followed by a re-annotation of all examples. The initial round of annotation + +431 + +${}^{2}$ The literary works are in documents2,3,4,6,7, and 8 ; document 1 contains Microsoft manuals and document 5 contains parliament proceedings. (Lars Ahrenberg, personal communication) + +https://litteraturbanken.se/ + +433 was very quick, roughly between 15-30 minutes per 100 examples, with a somewhat longer time needed for CCOMP. Table 3 shows the number of correct and wrong examples for each class. Note that the dataset is skewed towards positive examples. + +max width= + +Relation Correct Wrong + +1-3 +CLEFT 64 10 + +1-3 +RELCL 133 67 + +1-3 +CCOMP 141 59 + +1-3 +NO-AUX 170 30 + +1-3 + +Table 3: Class distribution in our annotated dataset + +§ 4.2.2 EVALUATION + +We evaluate on the targeted dataset by calculating the number of times the parser assigns the correct relation to the focus word, and for NO-AUX, that there in addition is no aux-dependent. We then calculate precision and recall for each relation type. Note that this is different from standard evaluation of dependency parsers where we evaluate a full tree. In this case, we instead evaluate a single relation of interest for each sentence. + +§ 5 PARSERS + +In order to investigate how well the different types of evaluation work, we explore three generations of parsers. While the main focus is on dependency parsing. As a baseline, we use the easily accessible Swepipe with its provided model for Swedish. We also use two generations of neural parsers, UUParser and Machamp, for which we also experiment with multilingual parsing. We train each model three times with different random seeds and report average scores. + +§ 5.1 SWEPIPE + +As a baseline parser, we wanted an easily accessible parser, which comes with a trained parsing model, and which might be used by non-experts in a digital humanities project. Our choice was to use the Swedish annotation pipeline, Swepipe. 4, a pre-trained model covering all steps needed to analyse Swedish texts from scratch, including tok-enization, tagging, and parsing. Swepipe is similar to several other systems targeted at this user group, such as the web-based Swegram 5, which uses the same parser and tagger (Megyesi et al., 2019). 
+ +Swepipe is pre-neural and uses efselab (Östling, 486 + +2018) for tagging and MaltParser (Nivre et al., 487 2007) trained on Talbanken for parsing. Malt-Parser is a classical transition-based parser, using a support vector machine for classification, based on a feature vector with words, POS-tags, and already built relations. + +§ 5.2 UUPARSER + +UUParser (de Lhoneux et al., 2017; Smith et al., 2018b) is a neural transition-based dependency parser with a BiLSTM feature extractor, based on + +Kiperwasser and Goldberg (2016). Word repre- 499 sentations are fed to a BiLSTM, to create contex-tualized word representations, which are given as + +input to an MLP classifying the next transition. 502 We use an arc-hybrid transition model (Kuhlmann + +et al., 2011) with a swap transition and a static- 504 dynamic oracle (de Lhoneux et al., 2017). As input word representation we use word embeddings, character-based word embeddings, UPOS-tag em-beddings, and treebank embeddings, which represent the treebank of a sentence. All embeddings were initialized randomaly at training time. We use the default UUparser settings (Smith et al., 2018b), except for adding drop-out with a rate of 0.33 for UPOS-embeddings, since the parser is trained with gold tags. At test time, we use two different sets of POS-tags, from Swepipe/efselab and from Machamp. We will call these variants UUparser ${}^{s}$ and UUparser ${}^{m}$ respectively. To counteract the differing sizes of the training data, we limited the number of sentences used per treebank to 4,300 per iteration. + +522 + +§ 5.3 MACHAMP + +Machamp (van der Goot et al., 2021) is a toolkit 524 for multitask learning covering several NLP tasks, based on fine-tuning a pre-trained contextualized model, like BERT (Devlin et al., 2019). In a multitask setup, each task has a separate decoder. The dependency parser is a graph-based parser using deep biaffine attention (Dozat and Manning, 2018) to score word pairs, and the CLU algorithm (Chu and Liu, 1965; Edmonds, 1967) to extract trees. For tagging, a greedy decoder, with a softmax output layer is used. + +In this work we use Machamp in a multi-task setup, to jointly learn tagging of UPOS, XPOS and morphological features, and dependency parsing. + +We experiment with two sets of language models, 539 + +4https://github.com/robertostling/ efselab + +${}^{5}$ https://cl.lingfil.uu.se/swegram/ + +540 + +max width= + +Group Included treebanks/languages + +1-2 +Talbank Swedish-talbanken + +1-2 +Swedish Talbank+ Swedish-LinES-M + +1-2 +SweNor Swedish + Norwegian (*3) + +1-2 +Scand SweNor + Danish + +1-2 +NorthG Scand + Faroese + Icelandic + +1-2 + +Table 4: Groups of languages/treebanks used for multilingual training. + +541 + +542 + +543 + +546 multilingual BERT (mBERT Devlin et al., 2019) ${}^{6}$ , trained on 104 languages including all languages used in our study except Faroese, and the Swedish model KB-BERT (Malmsten et al., 2020), trained only on Swedish. We will call these systems Machamp ${}^{m}$ and Macahmp ${}^{k}$ respectively. For both models, we used the cased version. KB-BERT + +556 has been shown to improve Swedish named entity recognition and POS-tagging (Malmsten et al., + +558 2020), but as far as we are aware, it has not been used in multilingual dependency parsing models. We use the default parameters of Machamp. To counteract the differing sizes of the training data, we applied sampling smoothing set to 0.5 . + +§ 5.4 MULTILINGUAL TRAINING + +For UUParser and Machamp, we explore multilingual training. 
We limit ourselves to the North-Germanic languages, all relatively closely related to Swedish. We train two Swedish models, on Talbanken only, to be comparable with Swepipe, and also with LinES-M. In addition, we train three models with different subsets of the other North Germanic Languages. For our multilingual models, we first combine Swedish with Norwegian, which has three treebanks covering both variants of Norwegian. We then add Danish, to train a Scandinavian model. The reason for adding Norwegian first, despite the fact that Danish is considered a closer relative to Swedish, is the availability of more data for Norwegian with variability in language variants. Our final model, NorthG, also adds Faroese and Icelandic, which are more distant from Swedish, and not mutually intelligible. The language groups are summarized in Table 4. + +§ 6 RESULTS + +Tables 5 and 6 show results from the standard and targeted evaluations for Swepipe, UUparser ${}^{m}$ with Machamp ${}^{k}$ POS-tags and Machamp ${}^{k}$ trained with + +KB-BERT. In all tables, we mark the three best 594 + +results for each metric in bold. 595 + +596 + +Table 5 shows results on UD test sets. We see 597 + +no obvious differences between LAS on the in- 598 genre LinES-M and the other two Swedish test + +sets, indicating that time period might play a big- 600 ger role than genre in our scenario. Swepipe has overall the lowest scores, followed by UUparser ${}^{m}$ , and then ${\operatorname{Machamp}}^{k}$ . For the two Swedish models, the differences between using only Talbanken + +and adding the small LinES-M training set are 605 typically small, but sometimes with a positive + +effect for UUparser ${}^{m}$ and a negative effect for 607 Machamp ${}^{k}$ . Adding Norwegian leads to improvements in nearly all scores, often quite substan- + +tial, whereas adding additional languages has a 610 smaller impact. The difference between parsers varies for the different relation types. Swepipe does not find any CLEFTs, and falls behind UUparser ${}^{m}$ on all other relation types, especially for AUX. Machamp ${}^{k}$ improves considerably over UUparser ${}^{m}$ for all explored relations, except AUX, where both neural parsers perform well, possibly since they both use the POS-tags of Machamp ${}^{k}$ . + +The results in Table 6 for our targeted test set 620 show a partially different picture. First, we note + +that Swepipe has a very high recall for all re- 622 lation types except CLEFT, which it never predicts. We think this is mainly an artifact of the + +sampling procedure for this test set, where the 625 annotated sentences were sampled from Swepipe + +and UUparser ${}^{s}$ , with Swepipe POS-tags, which 627 means that they were mostly predicted as correct by Swepipe. The other parsers do not have this advantage, and thus have a lower recall, which we believe is more predictive of real performance. + +Swepipe has considerably lower precision than the 632 other parsers for all relation types. We believe that the evaluation should still be fair in comparing ${\text{ UUparser }}^{m}$ and Machamp ${}^{k}$ , from which + +no samples were taken. Compared to the stan- 637 dard evaluation where Machamp ${}^{k}$ was clearly better than UUparser ${}^{m}$ , we now see a more mixed picture, where there is no clear overall advantage of Machamp ${}^{k}$ over ${\mathrm{{UUparser}}}^{m}$ , and the results are mixed across relation types and precision/recall. 
The trends between training languages are also less clear, with some combinations standing out in performance for some relation types. Machamp ${}^{k}$ trained with Scand and NorthG has a considerably higher recall on RELCL than the other models, with only a small drop in precision. On CCOMP and NO-AUX, on the other hand, these two models instead have a low recall, without gaining much on precision. We do not see this pattern for UUparser ${}^{m}$, where the Scand model is overall strong.

${}^{6}$ https://github.com/google-research/bert/blob/master/multilingual.md

| Model | LinES-M (LAS) | TB (LAS) | PUD (LAS) | CLEFT (F1) | RELCL (F1) | CCOMP (F1) | AUX (F1) |
|---|---|---|---|---|---|---|---|
| Swepipe-Talbank | 71.75 | 79.69 | 78.82 | - | 61.31 | 54.98 | 88.45 |
| UUparser ${}^{m}$-Talbank | 72.10 | 83.75 | 76.66 | 26.82 | 64.67 | 59.62 | 93.99 |
| UUparser ${}^{m}$-Swedish | 75.51 | 83.76 | 77.50 | 29.12 | 67.37 | 61.65 | 94.21 |
| UUparser ${}^{m}$-NorSwe | 79.69 | 85.60 | 81.50 | 39.92 | 74.34 | 66.79 | 94.35 |
| UUparser ${}^{m}$-Scand | 79.74 | 85.43 | 81.34 | 41.74 | 73.03 | 64.93 | 94.20 |
| UUparser ${}^{m}$-NorthG | 79.33 | 85.35 | 81.27 | 41.71 | 72.82 | 64.70 | 94.27 |
| Machamp ${}^{k}$-Talbank | 80.54 | 92.24 | 86.05 | 56.73 | 79.07 | 74.59 | 95.44 |
| Machamp ${}^{k}$-Swedish | 80.26 | 90.72 | 86.83 | 49.67 | 75.84 | 71.29 | 93.94 |
| Machamp ${}^{k}$-NorSwe | 83.13 | 91.63 | 86.79 | 55.42 | 81.29 | 75.32 | 95.29 |
| Machamp ${}^{k}$-Scand | 83.16 | 92.31 | 87.21 | 55.54 | 81.21 | 74.27 | 95.97 |
| Machamp ${}^{k}$-NorthG | 83.03 | 92.35 | 87.17 | 56.00 | 82.27 | 74.78 | 95.85 |

Table 5: Results on standard Swedish UD test sets. LAS for all three Swedish test sets, and F1-scores for four relations of interest for LinES-M.

| Model | CLEFT (P) | RELCL (P) | CCOMP (P) | NO-AUX (P) | CLEFT (R) | RELCL (R) | CCOMP (R) | NO-AUX (R) |
|---|---|---|---|---|---|---|---|---|
| Swepipe-Talbank | - | 66.33 | 70.41 | 84.62 | 0.00 | 99.25 | 98.57 | 97.06 |
| UUparser ${}^{m}$-Talbank | 92.46 | 93.32 | 94.11 | 98.14 | 50.35 | 82.37 | 63.97 | 51.44 |
| UUparser ${}^{m}$-Swedish | 92.49 | 93.45 | 95.84 | 97.60 | 69.79 | 81.45 | 65.95 | 50.85 |
| UUparser ${}^{m}$-NorSwe | 92.12 | 94.65 | 97.39 | 98.30 | 84.55 | 81.20 | 70.87 | 56.21 |
| UUparser ${}^{m}$-Scand | 94.64 | 95.69 | 96.73 | 98.72 | 84.20 | 79.62 | 70.48 | 61.05 |
| UUparser ${}^{m}$-NorthG | 93.31 | 95.55 | 96.06 | 99.05 | 75.00 | 79.37 | 74.13 | 61.57 |
| Machamp ${}^{k}$-Talbank | 94.12 | 95.16 | 94.63 | 98.52 | 59.90 | 83.46 | 75.48 | 65.69 |
| Machamp ${}^{k}$-Swedish | 94.92 | 96.19 | 95.09 | 98.81 | 53.12 | 82.21 | 73.81 | 65.10 |
| Machamp ${}^{k}$-NorSwe | 95.38 | 96.71 | 94.77 | 99.13 | 72.92 | 79.70 | 73.33 | 67.25 |
| Machamp ${}^{k}$-Scand | 96.61 | 95.11 | 94.29 | 99.01 | 59.38 | 87.47 | 66.90 | 58.82 |
| Machamp ${}^{k}$-NorthG | 95.38 | 93.83 | 93.46 | 99.00 | 64.06 | 87.72 | 68.10 | 58.04 |

Table 6: Precision (P) and recall (R) for our targeted test set.

In Table 7 we show a summary of results for both variants of UUparser and Machamp, showing only precision for the targeted test set, since recall is biased towards Swepipe and UUparser ${}^{s}$ due to the sampling.${}^{7}$ We can see that UUparser ${}^{s}$ does not consistently improve on LAS over Swepipe when trained on the same Talbanken data, but that adding the Scandinavian treebanks improves the results considerably, both for the UD evaluations and on the targeted test set.

${}^{7}$ To save space, we only show results for two training language groups. The other groups exhibit largely the same trends.
When we compare the two variants of UUparser and Machamp, we see that UUparser ${}^{m}$ and Machamp ${}^{k}$ consistently beat their counterparts on the UD evaluation, and in most cases on the targeted test set. We also see that training on Scand is better than training on Talbanken in the majority of cases, both for UD and on precision for the targeted test set; however, from Table 6, we know that Scand is sometimes not as strong on recall.

| Model | LinES-M (LAS) | TB (LAS) | PUD (LAS) | CLEFT (F1) | RELCL (F1) | CCOMP (F1) | AUX (F1) | CLEFT (P) | RELCL (P) | CCOMP (P) | NO-AUX (P) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Swepipe-Talbank | 71.75 | 79.69 | 78.82 | - | 61.31 | 54.98 | 88.45 | - | 79.52 | 82.14 | 90.41 |
| UUparser ${}^{s}$-Talbank | 70.80 | 82.35 | 75.78 | 26.08 | 63.01 | 58.39 | 91.31 | 92.80 | 92.52 | 93.05 | 96.50 |
| UUparser ${}^{s}$-Scand | 77.63 | 83.39 | 80.25 | 30.77 | 70.55 | 62.22 | 90.82 | 93.86 | 94.07 | 94.66 | 97.95 |
| UUparser ${}^{m}$-Talbank | 72.10 | 83.75 | 76.66 | 26.82 | 64.67 | 59.62 | 93.99 | 92.46 | 93.32 | 94.11 | 98.14 |
| UUparser ${}^{m}$-Scand | 79.74 | 85.43 | 81.34 | 41.74 | 73.03 | 64.93 | 94.20 | 94.64 | 95.69 | 96.73 | 98.72 |
| Machamp ${}^{m}$-Talbank | 77.20 | 89.35 | 84.21 | 38.47 | 72.87 | 69.09 | 92.91 | 92.94 | 96.13 | 93.00 | 98.23 |
| Machamp ${}^{m}$-Scand | 80.13 | 89.50 | 85.79 | 43.09 | 77.67 | 71.18 | 93.49 | 93.41 | 96.98 | 92.47 | 99.08 |
| Machamp ${}^{k}$-Talbank | 80.54 | 92.24 | 86.05 | 56.73 | 79.07 | 74.59 | 95.44 | 94.12 | 95.16 | 94.63 | 98.52 |
| Machamp ${}^{k}$-Scand | 83.16 | 92.31 | 87.21 | 55.54 | 81.21 | 74.27 | 95.97 | 96.61 | 95.11 | 94.29 | 99.01 |

Table 7: Comparison of parser variants, on standard test sets and our targeted test set: LAS on the three UD test sets, F1 for four relations on UD LinES-M, and precision (P) on the targeted literature test set.

§ 7 DISCUSSION

An important question is whether the parser performance on our target task is good enough to use for our study of change in the Swedish written language. Overall, both Machamp and UUparser have good precision for all our relations of interest, always scoring above 90, and reaching scores above 96 for some parsers for each relation type. The recall, however, is considerably lower. This means that the instances of each relation type the parser finds are mostly good, but it does miss a substantial part of relevant instances. The recall is highest for RELCL, where it is well above 80 for several of the Machamp models, with UUparser also above 80. This approaches a level that is usable for our end project of finding syntactic features in 18th-19th-century literature and tracking them over time. Other relation types show a more mixed performance, such as CLEFT, for which UUparser ${}^{m}$ trained on NorSwe and Scand performs very well, with a recall of over 84, but where other models perform considerably worse. The recall of CCOMP, and especially of NO-AUX, is lower, and we would need to improve parser performance for those relation types, possibly by using domain adaptation techniques, before they reach a useful level. The varying performance of parsers for different relation types is in line with the results for German by Adelmann et al. (2018), who recommend choosing different parsers for different end goals.

On the standard evaluation, Machamp is clearly overall better than UUparser, training on Scand is better than training only on Swedish, KB-BERT is better than mBERT for Machamp, and UUparser is better with Machamp tags than with Swepipe tags.
For our targeted test sets, however, we see fewer clear trends, and there is much more variation among the systems. Machamp ${}^{k}$ and UUparser ${}^{m}$ tend to perform better than their counterparts, and the multilingual models may have a small advantage over the Swedish-only models. Swepipe clearly seems to fall behind the other parsers on precision, whereas its high recall can be explained by the sampling procedure. A side-effect of our study is that we have found that Machamp ${}^{k}$ trained on Scand or NorthG is a very strong parser for modern Swedish, as measured by the UD test sets.

Our targeted test set does suffer from an issue with sampling from only two parsers, which affects its recall mainly for Swepipe, but also for UUparser ${}^{s}$. We believe UUparser ${}^{m}$ is less affected, since it relies on a different set of POS-tags. The dataset is also relatively small, especially for the CLEFT relation. However, we think it still contributes to showing that when selecting a parser for a particular target task and text type, we cannot rely solely on evaluation scores on standard test sets, as also shown by Adelmann et al. (2018). Even if we focus on the F1-score for the relations of interest, rather than on the full tree, we see no clear similarity of parser ranking to the evaluation of the same relation types in our targeted test set. To further investigate whether this type of test set can indeed be useful, we would need to perform further analysis. It would be interesting to learn more about where the main improvements shown on UD evaluation for a parser like Machamp ${}^{k}$ actually occur. We also think it would be useful to reconsider the sampling for the test set, specifically whether it is worth the effort to also annotate some raw text, in order to find instances not identified by any of our parsers. Another issue that we have not yet explored is whether parsing performance varies over the time period in question.

§ 8 CONCLUSION

We describe a study of Swedish dependency parsers with the goal of tracking changes in the use of certain types of subordinate clauses and related phenomena in Swedish literature from 1800-1930. Since standard test sets do not cover this time period or genre, and we did not have the resources to perform a full annotation of dependency trees, we propose a smaller-scale annotation task, focusing on single relation types. We evaluated a set of parsers on UD and on our targeted test set. While there was a clear and relatively consistent order between the parsers on the UD evaluation, the performance was more mixed on our targeted test set, without a clear overall best parser across relation types. We believe that our proposed annotation scheme can be useful in complementing standard evaluations, with a low annotation effort, but that more analysis is needed.
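As an illustration of the targeted, single-relation evaluation used above, the sketch below shows one way per-relation precision and recall can be computed from CoNLL-U files. This is a minimal sketch under stated assumptions, not the tooling used in the study: the file names are placeholders, RELCL is assumed to correspond to the Universal Dependencies label `acl:relcl`, and gold and predicted files are assumed to share the same tokenization.

```python
# Illustrative per-relation precision/recall over CoNLL-U files.
# Assumes gold and predicted files share the same tokenization;
# file names and the default relation label are placeholders.
from collections import namedtuple

Arc = namedtuple("Arc", "sent token head deprel")

def read_arcs(path):
    """Collect one (sentence, token, head, deprel) arc per token."""
    arcs, sent = [], 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                sent += 1          # blank line ends a sentence
                continue
            if line.startswith("#"):
                continue           # skip comment lines
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue           # skip multiword tokens / empty nodes
            arcs.append(Arc(sent, cols[0], cols[6], cols[7]))
    return arcs

def relation_pr(gold_path, pred_path, deprel="acl:relcl"):
    """Precision/recall for one relation: an arc counts as correct
    only if both the head and the label match the gold arc."""
    gold = {a for a in read_arcs(gold_path) if a.deprel == deprel}
    pred = {a for a in read_arcs(pred_path) if a.deprel == deprel}
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

p, r = relation_pr("targeted_gold.conllu", "parser_output.conllu")
print(f"precision={p:.2f} recall={r:.2f}")
```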
diff --git a/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/xbPTfBIUby/Initial_manuscript_md/Initial_manuscript.md b/NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/xbPTfBIUby/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..41a53b2be21db04c7d39c4cab2954aeef8fdfee3

# Automatic Transcription for Estonian Children's Speech

Anonymous Author
Affiliation / Address line 1
email@domain

Anonymouser Author
Affiliation / Address line 1
email@domain

Anonymousest Author
Affiliation / Address line 1
email@domain

## Abstract

We evaluate the impact of recent improvements in Automatic Speech Recognition (ASR) on transcribing Estonian children's speech. Our research focuses on fine-tuning large ASR models with a 10-hour Estonian children's speech dataset to create accurate transcriptions. Our results show that large pre-trained models hold great potential when fine-tuned first with a more substantial Estonian adult speech corpus and then further trained with children's speech.

## 1 Introduction

Automatic Speech Recognition (ASR) continues to face challenges in accurately transcribing children's speech. Research efforts are underway to adapt adult ASR models to better handle the unique pronunciation variations and limited vocabulary that are characteristic of children's speech (Thienpondt and Demuynck, 2022; Dutta et al., 2022). These adaptations are necessary due to the limitations of current ASR systems, which often lack adequate representation of children's speech and struggle to generalize to new examples.

Recent advancements in ASR technology, including the use of large transformer-based models and unsupervised pre-training techniques, have resulted in improved performance for adult speech recognition, with the ability to train on a diverse range of data without human annotations (Baevski et al., 2020; Radford et al., 2022; Hsu et al., 2021). These models demonstrate greater robustness and generalization compared to previous systems. However, the effectiveness of these advanced ASR models for children's speech, especially in low-resource languages like Estonian, remains untested.

In this paper, we investigate two multilingual speech models - Facebook's Wav2Vec2-XLS-R (Babu et al., 2021) and OpenAI's Whisper (Radford et al., 2022) - as potential starting points for building an ASR system transcribing Estonian children's speech. Our objective is to determine the potential of these models in creating low-effort ASR systems for children speaking a low-resource language like Estonian, for which no children's speech ASR systems exist. To accomplish this, we fine-tune the XLS-R and Whisper models from scratch using children's speech data. We also fine-tune pre-existing models for the Estonian language with additional children's speech recordings. Furthermore, we compare against a pre-made Estonian ASR system provided by Microsoft Azure and explore its fine-tuning capabilities.
Our research indicates that XLS-R models and Whisper models can serve as effective starting points for building an ASR system using only 10 hours of children's speech. However, for optimal performance, these models should first be fine-tuned with Estonian adult speech. We achieve the best word error rate, of around 15, using an XLS-R model that was fine-tuned with Estonian ASR datasets and further trained with children's speech. Furthermore, our results show that the Azure speech-to-text model performs similarly to the Estonian XLS-R model, but not as well as the fine-tuned public models.

In the next sections, we describe the data we used for evaluation and training, the models we used and how we fine-tuned them, and, last but not least, we present and analyse the results.

## 2 Dataset and evaluation

The children's ASR dataset used in this work consists of speech recordings from 53 children aged 6 to 13. The data was collected by the Children's Clinic of Tartu University Hospital and contains a mix of both boys and girls speaking about various topics such as answering questions, describing pictures, talking about their family and friends, and more. The dataset is divided into three subsets - test, dev, and train - with no overlap in speakers or texts.

The test set contains all age and gender groups and has a total recording duration of 278 minutes (approximately 4.6 hours). The development set is missing some speakers and has a total recording duration of 182 minutes (approximately 3 hours). The training set is also missing some speakers and has a total recording duration of 613 minutes (approximately 10 hours). A breakdown of the total recording duration for the test set by age and gender of the speakers is shown in Table 1.
| Age | Girls (min) | Boys (min) | Total (min) |
|---|---|---|---|
| 6 | 17 | 21 | 38 |
| 7 | 14 | 16 | 30 |
| 8 | 17 | 14 | 31 |
| 9 | 22 | 18 | 40 |
| 10 | 15 | 17 | 32 |
| 11 | 20 | 17 | 37 |
| 12 | 16 | 22 | 38 |
| 13 | 19 | 13 | 32 |
| Total | 140 | 138 | 278 |

Table 1: Total recording duration in minutes for the Estonian children ASR test set, broken down by age and gender of the speakers.
The children in the dataset speak about a wide range of topics, covering everything from answering questions and describing pictures to discussing their family and friends. The recordings also include children reading fairytales, reciting poems, and saying specific sentences. The utterances in the dataset thus vary in their level of spontaneity - some are unscripted expressions of thoughts, while others feature children reading.

We evaluate the performance of our speech recognition models using the standard measure of word error rate (WER). This involves converting all text to lowercase and removing punctuation, but not standardizing different spelling variations. Our reference transcriptions reflect the pronunciation of children, including any errors they may make. However, the line between correct and incorrect pronunciation is often blurry, and some children's speech can be difficult to comprehend. We do not consider the ambiguity in human transcriptions and simply compare the models' output to our reference transcription, which could lead to increased WERs.

## 3 Models and training

We use both public large speech models and a private black-box speech service. In the case of public models, we also searched for models already fine-tuned with Estonian speech data. We fine-tune the selection of these models with the children's speech dataset described in the last section.

For public models, we use two multilingual ones: Facebook's XLS-R and OpenAI's Whisper (Radford et al., 2022). The XLS-R model is trained with a speech modelling objective, not ASR, but it can be fine-tuned for ASR with the Connectionist Temporal Classification (CTC) algorithm (Graves et al., 2006). Whisper, on the other hand, is a multipurpose model that contains both transformer encoder and decoder blocks and has been trained on several speech-processing tasks, like multilingual speech recognition, speech translation and voice activity detection (Radford et al., 2022).

The available XLS-R models have 300 million, 1 billion and 2 billion parameters; we use the two smaller ones in this work. The Whisper model comes in six different sizes; we use medium and large-v2, since the Estonian error rates for the other ones are relatively high. There is one Estonian-specific fine-tuned model available for the 300-million-parameter version, trained with over 700 hours of Estonian speech data (Alumäe and Olev, 2022). There are several Estonian Whisper models available on HuggingFace, but these are trained with fewer data examples. We use the best available medium and large-v2 ones.${}^{1,2}$

We use standard fine-tuning procedures. For training XLS-R-based ASR models from scratch, we use a learning rate of $3\mathrm{e}{-4}$ and a 400-step warmup, and train the models for 60 epochs with the children's speech dataset, which is less than 4000 steps. When further fine-tuning the Estonian XLS-R model with children's speech, we use a learning rate of $2\mathrm{e}{-5}$ and 200 warmup steps. We fine-tune all the Whisper models with a warmup of 10% of the steps and a learning rate of $1\mathrm{e}{-5}$.
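As a concrete illustration of the Whisper settings just described, a minimal sketch using the HuggingFace Trainer API is shown below; only the learning rate, warmup ratio, and step budget follow the text, while the batch size and output path are assumptions.

```python
# A minimal sketch of the Whisper fine-tuning settings above, using
# the HuggingFace Trainer API. Learning rate, warmup ratio, and step
# budget follow the text; batch size and output path are assumptions.
from transformers import (Seq2SeqTrainingArguments,
                          WhisperForConditionalGeneration)

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-medium")  # or a checkpoint already tuned on Estonian

args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-et-children",  # placeholder path
    learning_rate=1e-5,
    warmup_ratio=0.1,                # warmup over 10% of the steps
    max_steps=5000,                  # for out-of-the-box models
    per_device_train_batch_size=16,  # assumption, not stated in the paper
    predict_with_generate=True,
)
# A Seq2SeqTrainer would then combine these arguments with the model,
# a data collator, and the children's speech dataset.
```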
When fine-tuning the out-of-the-box Whisper models, we train these for 5000 steps, or around 40 epochs; when fine-tuning models already trained with Estonian adult speech, we train the large model for 2000 steps (over 16 epochs) and the medium model for 1000 steps (eight epochs).

${}^{1}$ https://huggingface.co/agnesluhtaru/whisper-medium-et-ERR2020
${}^{2}$ https://huggingface.co/agnesluhtaru/whisper-large-et-ERR2020-v2

For the private model, we use Microsoft Azure Speech service's speech-to-text${}^{3}$, which requires an Azure subscription and a Speech resource. The transcription services can be accessed by making REST requests.

${}^{3}$ https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-to-text

Microsoft Azure offers the option to fine-tune the model with custom datasets. This process involves uploading data to train the models, followed by deploying the trained models. Since audio-based fine-tuning is not available for Estonian, we use text-based tuning for our work, with the texts from the children's speech dataset.

## 4 Results

In this section, we describe the results of all the models based on Facebook's XLS-R, OpenAI's Whisper and Microsoft Azure speech-to-text.

### 4.1 XLS-R

Table 2 shows the word error rate (WER) scores of XLS-R models fine-tuned using only 10 hours of Estonian children's speech data, the fine-tuned Estonian model (Alumäe and Olev, 2022), and the Estonian model further trained with children's speech. We can see that the limited amount of data for fine-tuning XLS-R from scratch results in a high WER of over 30 for both the 300-million and one-billion-parameter models. Training an ASR model using only 10 hours of speech data can be challenging, especially when the speech is in a low-resource language and from children.

The results show that the pre-trained Estonian ASR model has a WER of around 20, while further fine-tuning the model with children's speech data leads to even better results, with a WER of less than 15. Based on the lower WER score for the fine-tuned one-billion-parameter model, we suggest that a larger model fine-tuned with Estonian data first and then further trained on children's speech could lead to even better results.
| Model | Test | Dev |
|---|---|---|
| xls-r-300M-children | 36.3 | 34.58 |
| xls-r-1B-children | 30.89 | 31.06 |
| xls-r-300M-et | 20.62 | 19.15 |
| xls-r-300M-et-children | 15.31 | 14.30 |

Table 2: Comparison of WER scores for Facebook's Wav2Vec2 XLS-R (Babu et al., 2021) based models fine-tuned with only Estonian children's speech, only Estonian adult speech (Alumäe and Olev, 2022), and first fine-tuned to Estonian and further trained with children's speech.
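For concreteness, a minimal sketch of the from-scratch CTC fine-tuning behind the first two rows of Table 2 is shown below, using HuggingFace Transformers; everything not stated in the paper (vocabulary size, batch size, output path) is an assumption.

```python
# A minimal sketch of from-scratch CTC fine-tuning: an XLS-R
# checkpoint gets a fresh CTC head sized to an Estonian character
# vocabulary. Vocabulary size, batch size, and output path are
# assumptions; learning rate, warmup, and epochs follow Section 3.
from transformers import TrainingArguments, Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    vocab_size=34,              # assumed size of the character set
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()  # common practice with little data

args = TrainingArguments(
    output_dir="xls-r-300M-children",  # placeholder path
    learning_rate=3e-4,
    warmup_steps=400,
    num_train_epochs=60,
    per_device_train_batch_size=8,     # assumption
)
# A Trainer would then combine these arguments with the model, a CTC
# data collator, and a tokenizer built from the training transcripts.
```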
The results indicate that fine-tuning the Estonian ASR model using children's speech data improves performance across all age groups (see Figure 1). Younger speakers tend to have a higher word error rate (WER) than older speakers, although this relationship is not always straightforward. There are some exceptions, such as the recognition performance for 13-year-olds being worse than that of younger age groups. This highlights that speaker variability plays a role in the WER results. Nevertheless, fine-tuning the ASR model using children's speech data reduces the differences in recognition performance across age groups, resulting in improved overall performance.

![0196413b-142f-78b6-9a49-d9a8fa3f17f7_2_845_696_614_429_0.jpg](images/0196413b-142f-78b6-9a49-d9a8fa3f17f7_2_845_696_614_429_0.jpg)

Figure 1: Performance comparison of Estonian XLS-R ASR and children's speech fine-tuned models across age groups.

### 4.2 Whisper

The performance of the out-of-the-box Whisper models on the children's dataset (see Table 3) is comparable to the scores reported by Radford et al. (2022) on Estonian Common Voice 9 (Ardila et al., 2020). All models have a WER of at least 35. So, although we can use Whisper without fine-tuning, it does not transcribe Estonian speech well, and therefore does not give great transcriptions for Estonian children's speech either.

When only fine-tuning the model with 10 hours of children's speech, we already get better results, with the large-v2 model reaching a WER around 20. This is significantly better than the Whisper models fine-tuned only with Estonian adult speech, the opposite of what we observed for XLS-R, where the adult-speech model performed much better. However, we do not have enough evidence to say that the XLS-R models are inherently better, since the Whisper models are not optimal.
| Model | Test | Dev |
|---|---|---|
| Whisper-medium | 46.11 | 43.21 |
| Whisper-large-v2 | 36.01 | 35.06 |
| Whisper-medium-children | 25.08 | 24.29 |
| Whisper-large-v2-children | 20.38 | 20.58 |
| Whisper-medium-et | 28.78 | 26.83 |
| Whisper-large-v2-et | 29.2 | 28.13 |
| Whisper-medium-et-children | 18.66 | 17.49 |
| Whisper-large-v2-et-children | 16.02 | 15.73 |

Table 3: Comparison of WER scores for OpenAI Whisper (Radford et al., 2022) models and Whisper models fine-tuned with only Estonian children's speech, only Estonian adult speech, and first fine-tuned to Estonian and further trained with children's speech.
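For reference, transcribing a recording with one of the fine-tuned Estonian Whisper checkpoints cited in footnote 2 might look like the minimal sketch below; the audio file name is a placeholder.

```python
# A minimal sketch of transcribing one recording with the fine-tuned
# Estonian Whisper checkpoint from footnote 2 via the HuggingFace
# ASR pipeline; the audio file name is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="agnesluhtaru/whisper-large-et-ERR2020-v2",
)
print(asr("child_recording.wav")["text"])  # placeholder file
```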
Despite the Estonian Whisper models being fine-tuned with fewer audio-text pairs than the XLS-R model, the large Whisper model, when trained further with children's speech, achieved a WER similar to the doubly fine-tuned smaller XLS-R model.

### 4.3 Azure

The results from our evaluation on the children's speech dataset show that the out-of-the-box Azure speech-to-text model performs similarly to or better than the fine-tuned Estonian XLS-R model (Alumäe and Olev, 2022). As indicated in Table 4, the Microsoft Azure speech-to-text scores are around 20 or below.
| Model | Test | Dev |
|---|---|---|
| Microsoft Azure | 18.93 | 20.18 |
| Azure text-tuned | 20.31 | 21.21 |

Table 4: WER scores for Microsoft Azure speech-to-text and its custom text-tuned version.
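The Azure figures above were obtained through the hosted service; a hedged sketch of a short-audio REST request is shown below. The region, key, and file name are placeholders, and the endpoint shape follows Azure's public short-audio API, so it should be checked against the current documentation.

```python
# A hedged sketch of a short-audio REST request against the Azure
# speech-to-text service. Region, key, and file name are placeholders;
# the endpoint shape follows Azure's public short-audio API and should
# be checked against the current documentation.
import requests

region = "westeurope"               # placeholder region
key = "YOUR_SPEECH_RESOURCE_KEY"    # placeholder subscription key
url = (f"https://{region}.stt.speech.microsoft.com/speech/recognition/"
       "conversation/cognitiveservices/v1?language=et-EE")

with open("child_recording.wav", "rb") as f:  # 16 kHz mono WAV assumed
    resp = requests.post(
        url,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
        },
        data=f,
    )
print(resp.json().get("DisplayText"))
```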
However, the experiment also shows that text-tuning is not the best approach for this particular dataset. The dataset mostly contains simple vocabulary and little terminology, most likely leading to quick overfitting with text-tuning. Currently, text-tuning is the only option available for the Estonian language, but it might not be the best fit for children's speech datasets.

## 5 Discussion

Our experiments show that children's speech recognition continues to be a hard problem, but large speech models look promising. It is possible to build an ASR system for Estonian children's speech, without any bells and whistles, using only 10 hours of data, and get output that is decent and might be good enough for use in chatbots. However, when it comes to six-year-olds, whose speech is difficult to understand even for the human ear, the system still struggles.

We evaluate different models, and it appears that both OpenAI's Whisper and Facebook's XLS-R are viable options for developing a speech recognition model for Estonian children's speech. The current best word error rate is around 15, with XLS-R. However, it remains unclear whether this pre-trained model is optimal for children's speech, or whether a lower error rate could be achieved with Whisper after fine-tuning with a similar amount of Estonian adult speech. Additionally, we do not obtain comparable results with the Azure service, as it does not permit fine-tuning with audio data.

Our findings suggest that the results could be improved by using a larger XLS-R model as the base or by fine-tuning Whisper models with more data. Additionally, we do not use a separate language model, which is possible with both Whisper and XLS-R models and could potentially enhance their performance.

## 6 Conclusion

We test the performance of two speech recognition models, XLS-R and Whisper, on transcribing Estonian children's speech. We fine-tune the models with children's speech data and compare them to an off-the-shelf system from Microsoft Azure. Both models fine-tuned with children's speech outperform Microsoft Azure, which does not allow fine-tuning with audio for Estonian, and are promising for a children's ASR system.

## References

Tanel Alumäe and Aivo Olev. 2022. Estonian speech recognition and transcription editing service. Volume 10, pages 409-421.

Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. Common Voice: A massively-multilingual speech corpus. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4218-4222, Marseille, France. European Language Resources Association. https://aclanthology.org/2020.lrec-1.520

Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021. XLS-R: Self-supervised cross-lingual speech representation learning at scale. https://doi.org/10.48550/ARXIV.2111.09296

Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
In Advances in Neural Information Processing Systems, volume 33, pages 12449-12460. Curran Associates, Inc.

Satwik Dutta, Sarah Anne Tao, Jacob C. Reyna, Rebecca Elizabeth Hacker, Dwight W. Irvin, Jay F. Buzhardt, and John H.L. Hansen. 2022. Challenges remain in building ASR for spontaneous preschool children speech in naturalistic educational environments. In Proc. Interspeech 2022, pages 4322-4326. https://doi.org/10.21437/Interspeech.2022-555

Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 369-376, New York, NY, USA. Association for Computing Machinery. https://doi.org/10.1145/1143844.1143891

Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451-3460. https://doi.org/10.1109/TASLP.2021.3122291

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale weak supervision. https://doi.org/10.48550/ARXIV.2212.04356

Jenthe Thienpondt and Kris Demuynck. 2022. Transfer learning for robust low-resource children's speech ASR with transformers and source-filter warping. In Proc. Interspeech 2022, pages 2213-2217. https://doi.org/10.21437/Interspeech.2022-10964
diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/-eCgVcWbnzE/Initial_manuscript_md/Initial_manuscript.md b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/-eCgVcWbnzE/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..89666331eb3cffff7a3575802651ae79b2d135e4

# AI BASED AUTOMATIC MARK ENTRY SYSTEM

R. Subasri*, R. Meenakumari*

*Professor, Kongu Engineering College, Perundurai, India

## Abstract

An automatic mark entry system is a computer-based system that automatically captures marks or grades from various sources and stores them in a database. The system is designed to automate the process of entering marks or grades for students, eliminating the need for manual entry and reducing the chances of errors. OCR-based systems use image processing techniques to automatically recognize and extract marks or grades from scanned documents, such as exam answer sheets or report cards. By automating the process of entering marks or grades, teachers and administrators can focus on other important tasks, such as teaching and providing feedback to students. A webcam is used to capture the marks in the answer sheets of all the students, and the data is transferred into an Excel sheet automatically.
Automatic mark entry systems not only save time and reduce errors but also provide real-time access to the data, allowing teachers and administrators to quickly analyze and evaluate the performance of students.

## 1 Introduction

Traditionally, marks or grades are entered manually by teachers or administrators, which can be a time-consuming process and may lead to errors. OCR-based systems use image processing techniques to automatically recognize and extract marks or grades from scanned documents, such as exam answer sheets or report cards. A webcam is used to capture the answer sheets of all the students; the numbers are detected, and the data is transferred into an Excel sheet automatically. This OCR technology is commonly used in number plate recognition systems. Many researchers have recommended accurate vehicle detection systems for traffic control. A system which recognises a vehicle's number plate from video using video processing and OCR technology, and stores the detected number plates in a database, was proposed in [1][3]. Further, to overcome the inaccuracy in recognizing the number plates of high-speed vehicles, an automatic vehicle recognition and identification system using EasyOCR was recommended [2]. The effectiveness of EasyOCR has also been validated against Tesseract OCR for automated license plate recognition using a deep learning algorithm [4].

In this paper, EasyOCR is applied to recognize the handwritten marks on the front page of answer sheets, for individual questions and the total, and an Excel data sheet is created automatically. The front page of the answer sheet is printed with details such as the name of the institution, the name of the student and the course name, along with a tabular column for entering the marks of individual questions. The image of the tabular column filled with marks is scanned and given as input to the EasyOCR algorithm for automatic creation of the database. This system ensures ${100}\%$ accuracy in the mark entry process for database creation to publish results in educational institutions.

The automatic mark entry system, shown in Fig 1, is built around a key algorithm, EasyOCR. EasyOCR is used for the number recognition, a webcam is used for scanning the exam paper, the detected image is displayed, and the output is automatically converted into an Excel sheet for data storage.

![019640dd-9078-735a-b661-ebcd536ae669_1_308_172_1048_312_0.jpg](images/019640dd-9078-735a-b661-ebcd536ae669_1_308_172_1048_312_0.jpg)

Fig 1. Block diagram of the AI based automatic mark entry system

In order to recognize numbers using EasyOCR, the library uses a combination of machine learning and image processing techniques. The library is pre-trained on a large dataset of images containing various types of text, including numbers. During training, the library learns to identify the patterns and features that are characteristic of different types of text, and uses this knowledge to recognize text in new images. When recognizing numbers, EasyOCR first identifies regions of an image that contain text, using image processing techniques. Once the text regions have been identified, EasyOCR applies its machine learning models to recognize the individual characters within them. EasyOCR is designed to recognize numbers in a wide range of formats, including handwritten numbers, numbers with unusual fonts or styles, and numbers that appear against complex backgrounds.
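A minimal sketch of the recognition-and-export pipeline described in this and the following paragraphs is shown below; the webcam index, the digit allowlist, the confidence threshold, and the file names are illustrative assumptions, not the exact implementation.

```python
# A minimal sketch of the pipeline: capture a frame, read digits with
# EasyOCR, and write them to an Excel sheet with pandas. The device
# index, confidence threshold, and file names are assumptions.
import cv2
import easyocr
import pandas as pd

cap = cv2.VideoCapture(0)            # default webcam
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not read a frame from the webcam")

reader = easyocr.Reader(['en'])
# Restrict recognition to digits, since only marks are of interest.
results = reader.readtext(frame, allowlist='0123456789')

# Each result is (bounding_box, text, confidence).
marks = [int(text) for _, text, conf in results if conf > 0.5]

pd.DataFrame({'Marks': marks}).to_excel('marks.xlsx', index=False)
```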
Detecting numbers in a webcam image involves using image processing techniques to identify and extract numerical characters from the image. The process begins with image acquisition, where the image is captured using the webcam; the webcam captures the live video stream and sends it to the computer. Next, the captured image is preprocessed to improve its quality and prepare it for analysis; this may involve operations like resizing, cropping, color correction, and noise reduction. Image segmentation then divides the image into regions of interest (ROIs) where numbers are likely to be located; this may involve identifying features such as edges or corners that indicate the presence of a number. Once the ROIs have been identified, the next step is to recognize the individual characters within them. This can be done using techniques like template matching, feature extraction, or machine learning algorithms. Finally, the recognized numbers can be output to a display or another application.

The numbers identified with the EasyOCR library can be written to an Excel spreadsheet using a variety of programming languages and libraries; here, the popular Pandas library in Python is used. After importing the necessary libraries in the Python script, the image is loaded using OpenCV. The numbers alone are extracted from the image using EasyOCR, which returns the results as a list of records, each containing information about the recognized characters. The extracted numbers are stored in a Pandas DataFrame and, using the to_excel method, the data is written to an Excel sheet.

## 2 Performance Evaluation and Testing Results

After installing the necessary files and libraries, the user is first asked to enter the course code and name and the number of students, as in Fig 2. After completing this step, the mark sheet is placed for image capture, as in Fig 3.

![019640dd-9078-735a-b661-ebcd536ae669_1_285_1847_1192_230_0.jpg](images/019640dd-9078-735a-b661-ebcd536ae669_1_285_1847_1192_230_0.jpg)

Fig 2. User interface

![019640dd-9078-735a-b661-ebcd536ae669_2_293_204_1161_346_0.jpg](images/019640dd-9078-735a-b661-ebcd536ae669_2_293_204_1161_346_0.jpg)

Fig 3. Input image of sample mark sheet

After the image is captured using the webcam, EasyOCR detects and displays the numbers, and the displayed outputs are finally stored automatically in the Excel sheet, as in Fig 4.

![019640dd-9078-735a-b661-ebcd536ae669_2_282_871_1188_320_0.jpg](images/019640dd-9078-735a-b661-ebcd536ae669_2_282_871_1188_320_0.jpg)

Fig 4. Excel sheet with marks

From the Excel sheets, it is evident that the accuracy in transferring the marks entered on the grade sheets to Excel is ${100}\%$. By automating the grading process, educators no longer need to spend a significant amount of time and effort manually grading exams, which can significantly reduce the workload and manpower required for grading.

## ACKNOWLEDGEMENT

This work has been completed by utilizing the resources of the Centre of Excellence on IIoT laboratory, in collaboration with ALAI Labs Pvt Ltd, Singapore, in the Department of Electronics and Instrumentation Engineering of Kongu Engineering College, Erode, Tamil Nadu, India. The authors would like to thank the technical team of ALAI Labs Pvt Ltd for their constant support and guidance in the completion of this task.
## References

[1] Vishwanath Burkpalli, Abhishek Joshi, Abhishek B Warad, and Akash Patil. Automatic number plate recognition using TensorFlow and EasyOCR. International Research Journal of Modernization in Engineering Technology and Science, 04(09), 493-501, September 2022.

[2] Amit Kochale, Ashutosh Khemariya, and Aditi Tiwari. Real time automatic vehicle (license) recognition identification system with the help of OpenCV & EasyOCR model. International Journal of Research, Science, Technology & Management, 24(3), 2455-2240, September 2021.

[3] S. Ranjan et al. OCR based automated number plate text detection and extraction. 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 2022, pp. 621-627. doi: 10.23919/INDIACom54597.2022.9763248.

[4] D. R. Vedhaviyassh, R. Sudhan, G. Saranya, M. Safa, and D. Arun. Comparative analysis of EasyOCR and TesseractOCR for automatic license plate recognition using deep learning algorithm. 2022 6th International Conference on Electronics, Communication and Aerospace Technology, Coimbatore, India, 2022, pp. 966-971. doi: 10.1109/ICECA55336.2022.10009215.

[5] Venkata Naga Sai Rakesh Kamisetty et al. Digitization of data from invoice using OCR. 6th International Conference on Computing Methodologies and Communication (ICCMC), IEEE, 2022, 1-10.

[6] Azka Gilani et al. Table detection using deep learning. 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), IEEE, 2017, 771-776.

[7] Adam Jatowt et al. Deep statistical analysis of OCR errors for effective post-OCR processing. ACM/IEEE Joint Conference on Digital Libraries (JCDL), IEEE, 2019, 29-38.

[8] D. Yadav, S. Sánchez-Cuadrado, and J. Morato. Optical character recognition for Hindi language using a neural-network approach. Journal of Information Processing Systems, 9(1), 117-140, 2013.

[9] I. K. Pathan, A. A. Ali, and R. J. Ramteke. Recognition of offline handwritten isolated Urdu characters. Advances in Computational Research, 4(1), 117-121, 2012.

[10] S. Mori, H. Nishida, and H. Yamada. Optical Character Recognition. Wiley Series in Microwave and Optical Engineering, USA, 1999. ISBN 047308196.

[11] J. Ravagli, Z. Ziran, and S. Marinai. Text recognition and classification in floor plan images. International Conference on Document Analysis and Recognition Workshops (ICDARW), 1-6, September 2019.

[12] J. Liang, D. Doermann, and H. Li. Camera-based analysis of text and documents: a survey. IJDAR 7, 84-104, 2005.

[13] Lingqian Yang, Daji Ergu, Ying Cai, Fangyao Liu, and Bo Ma. A review of natural scene text detection methods. The 8th International Conference on Information Technology and Quantitative Management (ITQM 2020 & 2021), Procedia Computer Science 199, 1458-1465, 2022.
\ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/-eCgVcWbnzE/Initial_manuscript_tex/Initial_manuscript.tex b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/-eCgVcWbnzE/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..00ba51affde69718da81fb57b992142ab8d310d6 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/-eCgVcWbnzE/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,51 @@ +§ AI BASED AUTOMATIC MARK ENTRY SYSTEM

R.Subasri*, R.Meenakumari*

*Professor, Kongu Engineering college, Perundurai, India

§ ABSTRACT

An automatic mark entry system is a computer-based system that automatically captures marks or grades from various sources and stores them in a database. The system is designed to automate the process of entering marks or grades for students, eliminating the need for manual entry and reducing the chances of errors. OCR-based systems use image processing techniques to automatically recognize and extract marks or grades from scanned documents, such as exam answer sheets or report cards. By automating the process of entering marks or grades, teachers and administrators can focus on other important tasks, such as teaching and providing feedback to students. A webcam is used to capture the marks in the answer sheets of all the students, and the data is transferred into an Excel sheet automatically. Automatic mark entry systems not only save time and reduce errors but also provide real-time access to the data, allowing teachers and administrators to quickly analyze and evaluate the performance of students.

§ 1 INTRODUCTION

Traditionally, marks or grades are entered manually by teachers or administrators, which can be a time-consuming process and may lead to errors. OCR-based systems use image processing techniques to automatically recognize and extract marks or grades from scanned documents, such as exam answer sheets or report cards. A webcam is used to capture the answer sheets of all the students in the examination. The numbers are detected, and the data is transferred into an Excel sheet automatically. Generally, this OCR technology is used in number plate recognition systems. An accurate vehicle detection system for traffic control is recommended by many researchers. A system which recognises a vehicle's number plate from video using video processing and OCR technology, and stores the detected number plates of vehicles in a database, was proposed [1][3]. Further, to overcome the drawback of inaccuracy in recognizing the number plates of high-speed vehicles, an automatic vehicle recognition and identification system using EasyOCR is recommended [2]. The effectiveness of EasyOCR has also been validated in comparison with Tesseract OCR for automatic license plate recognition using a deep learning algorithm [4].

In this paper, EasyOCR is applied to recognize the handwritten marks on the front page of answer sheets, for individual questions and the total, and an Excel data sheet is created automatically. The front page of the answer sheet is printed with details such as the name of the institution, the name of the student, and the course name, along with a tabular column for entering the marks of individual questions. The image of the tabular column filled with marks is scanned and given as input to the EasyOCR algorithm for automatic creation of the database.
This system ensures 100% accuracy in the mark entry process for database creation to publish results in educational institutions.

The automatic mark entry system shown in Fig 1 is built around a key algorithm, namely EasyOCR. EasyOCR is used for number recognition, a webcam is used for scanning the exam paper, the detected image is displayed, and the output is automatically converted into an Excel sheet for data storage.

 < g r a p h i c s >

Fig 1 Block Diagram for the AI based automatic mark entry system

In order to recognize numbers using EasyOCR, the library uses a combination of machine learning and image processing techniques. The library is pre-trained on a large dataset of images containing various types of text, including numbers. During training, the library learns to identify the patterns and features that are characteristic of different types of text, and uses this knowledge to recognize text in new images. When recognizing numbers, EasyOCR first identifies regions of an image that contain text using image processing techniques. Once the text regions have been identified, EasyOCR applies its machine learning models to recognize the individual characters within the text regions. EasyOCR is designed to be able to recognize numbers in a wide range of formats, including handwritten numbers, numbers with unusual fonts or styles, and numbers that appear against complex backgrounds.

Detecting numbers in a webcam image involves using image processing techniques to identify and extract numerical characters from the image. The process can be broken down into an initial step of image acquisition, where the image is captured using the webcam. The webcam captures the live video stream and sends it to the computer. Secondly, in image preprocessing, the captured image is preprocessed to improve its quality and prepare it for analysis. This may involve operations like resizing, cropping, color correction, and noise reduction. Finally, image segmentation divides the image into regions of interest (ROIs) where numbers are likely to be located. This may involve identifying features such as edges or corners that indicate the presence of a number. Once the ROIs have been identified, the next step is to recognize the individual characters within them. This can be done using techniques like template matching, feature extraction, or machine learning algorithms. Finally, the recognized numbers can be output to a display or another application.

The numbers identified by the EasyOCR library can be written into an Excel spreadsheet using a variety of programming languages and libraries; here, the popular Pandas library in Python is used. After importing the necessary libraries in the Python script, the image is loaded using OpenCV. The numbers alone are extracted from the image using EasyOCR, which returns a list of results, where each entry contains information about the recognized characters. The extracted numbers are stored in a Pandas DataFrame and written out to an Excel sheet with the DataFrame's Excel export method.

§ 2 PERFORMANCE EVALUATION AND TESTING RESULTS

After installation of the necessary files and libraries, as a first step the user is asked to enter the course code and name and to give the number of students, as in Fig 2. After completing this task, the mark sheet is positioned for image capturing, as in Fig 3.

 < g r a p h i c s >

Fig 2.
User Interface

 < g r a p h i c s >

Fig 3 Input image of sample mark sheet

After the image is captured using the webcam, EasyOCR detects and displays the numbers, and the displayed outputs are automatically stored in the Excel sheet, as in Fig 4.

 < g r a p h i c s >

Fig 4 Excel sheet with marks

From the Excel sheets, it is evident that the accuracy in transferring the marks entered in the grade sheets to Excel is 100%. By automating the grading process, educators no longer need to spend a significant amount of time and effort manually grading exams, which can significantly reduce the workload and manpower required for grading.

§ ACKNOWLEDGEMENT

This work has been completed by utilizing the resources of the Centre of Excellence on IIoT laboratory, in collaboration with ALAI labs Pte Ltd, Singapore, in the Department of Electronics and Instrumentation Engineering of Kongu Engineering College, Erode, Tamil Nadu, India. The authors would like to thank the technical team of ALAI labs Pte Ltd for their incessant support and guidance in the completion of this task. \ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_md/Initial_manuscript.md b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..552180b76684480f4e01cc5f9b859dde8ddadce7 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,197 @@ +# Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals

Puneet Kumar and Xiaobai Li*

CMVS, University of Oulu, Finland.

\{puneet.kumar, xiaobai.li\}@oulu.fi

## Abstract

This paper aims to demonstrate the importance and feasibility of fusing multimodal information for emotion recognition. It introduces a multimodal framework for emotion understanding by fusing the information from visual facial features and rPPG signals extracted from the input videos. An interpretability technique based on permutation feature importance analysis has also been implemented to compute the contributions of the rPPG and visual modalities toward classifying a given input video into a particular emotion class. The experiments on the IEMOCAP dataset demonstrate that the emotion classification performance improves by combining the complementary information from multiple modalities.

Keywords: Affective Computing, Interpretable & Deployable AI, Multimodal Analysis, rPPG, Facial Features.

## 1 Introduction

Emotions, characterized by a rich and complex mix of physiological and cognitive states, hold significant importance across multiple fields such as psychology, human-computer interaction, affective computing, and even extending to broader domains such as virtual reality, user experience design, healthcare, and education [1]. Understanding and accurately interpreting emotions is essential in human communication and social interactions [2]. With the surge in the development and accessibility of multimodal sensing technologies, researchers can explore multiple modalities to enhance the accuracy and robustness of emotion recognition systems [3]. The current research trend focuses on building Artificial Intelligence (AI) systems that can be deployed for real-life applications [4].
Two such modalities, facial expressions and physiological signals, have garnered significant attention due to the rich information they offer and their non-invasive nature [5]. Facial expressions, direct and non-invasive indicators of emotion, have been thoroughly investigated [6]. Various techniques involving the extraction of facial landmarks, local descriptors, or holistic representations have been proposed to capture nuanced variations in facial muscle movements that reflect different emotional states [7]. Physiological signals, such as remote photoplethysmography (rPPG) signals, provide another layer of emotional cues. These signals, obtained through non-contact video-based techniques, offer insights into physiological changes associated with emotional responses [5]. The interplay of these two modalities offers a more holistic understanding of emotions, thus enhancing the robustness of emotion recognition systems [8].

Emotion classification through audio-visual information is a well-established research task [9, 10, 11]. However, recognizing emotion using the physiological context along with the audio-visual information leaves scope for further exploration [5]. Furthermore, despite the significant advancements, many multimodal emotion recognition models do not provide meaningful interpretations for their predictions [12, 13]. Most existing interpretability techniques have been implemented for the visual modality and have yet to be fully explored for multimodal analysis [14, 15, 6].

This paper proposes an interpretable multimodal emotion recognition framework that extracts rPPG signals and facial features from the input videos and uses their combined context for emotion detection. The Haar cascades classifier [16] has been implemented to extract the rPPG signals, whereas a pre-trained ResNet-34-based network extracts the visual features. Further, early and late fusion approaches that integrate the static facial expression features and dynamic rPPG signals to capture both spatial and temporal aspects of emotions have been incorporated.

An interpretability technique based on permutation feature importance (PFI) [17] has also been incorporated that computes the contribution of the rPPG and visual modalities towards classifying a given input video into a particular emotion class. The experiments performed on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset [18] have resulted in an accuracy of 54.61% while classifying the input videos into ten emotion classes ('neutral', 'happy', 'sad', 'angry', 'excited', 'frustrated', 'fearful', 'surprised', 'distressed' and 'other'). The improvement of the multimodal context over the individual accuracies of the rPPG and visual modalities alone advocates the importance of leveraging the multimodal context for emotion understanding. The average contributions of the rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively.

---

*Corresponding Author: xiaobai.li@oulu.fi

---

The contributions of this paper can be summarized as follows:

- A multimodal emotion recognition framework has been proposed to classify a given video into discrete emotion classes. It extracts the dynamic rPPG signals from the input videos and combines them with static facial expressions using early and late fusion approaches.
- An interpretability technique has been incorporated that computes the contribution of the rPPG and visual modalities towards emotion classification using the PFI algorithm.

- Extensive experiments have been performed on the IEMOCAP dataset, and the results have been presented in terms of accuracy, precision, recall, F1 score, and modality-wise contributions toward emotion classification.

## 2 Proposed Method

The proposed framework is diagrammatically depicted in Figure 1 and described in the following sections.

![019640d7-3fac-7185-8716-12d975d53381_1_434_868_922_407_0.jpg](images/019640d7-3fac-7185-8716-12d975d53381_1_434_868_922_407_0.jpg)

Figure 1: Schematic illustration of the proposed framework.

### 2.1 Preprocessing and Feature Extraction

The video files are loaded and processed frame by frame using the OpenCV (cv2) library ${}^{1}$ to extract rPPG signals and facial features.

i) rPPG Signals Extraction: Face detection within each video frame during the rPPG signal extraction process is accomplished using Haar cascades [16]. The region of interest (ROI), predominantly the facial region, is isolated from each frame, after which the mean intensity is computed to generate the rPPG signal for each video. The calculation of the mean intensity within the ROI $(\bar{I}_c)$ is represented in Eq. 1.

$$
\bar{I}_c = \frac{1}{N}\sum_{x=1}^{W}\sum_{y=1}^{H} I_{x,y,c} \tag{1}
$$

Where $I_{x,y,c}$ is the intensity of the pixel at location $(x, y)$ for color channel $c$ in the ROI, $N$ is the total number of pixels in the ROI, $W$ and $H$ represent the width and height of the ROI, respectively, and $c \in \{R, G, B\}$.

ii) Facial Features Extraction: Facial feature extraction employs Dlib's shape predictor [19], a version of ResNet-34 trained on the Face Scrub dataset [20]. As per Eq. 2, it identifies 68 facial landmarks for each detected face within every frame, distinguishing unique facial characteristics.

$$
P = D\left( F, \{L_i\} \right) \tag{2}
$$

$$
F = \left[ f_1, f_2, \ldots, f_n \right]
$$

---

${}^{1}$ https://opencv.org/

---

Where $F$ represents the face detected in a frame, $P$ represents the predicted points on the face, $D\left( F, \{L_i\} \right)$ is the function for predicting points on the face, and $L_i$ is the $i^{th}$ landmark point. As signals from different videos might differ in length, it becomes crucial to standardize the input for the neural network model. This standardization is achieved by zero-padding $\bar{I}_c$ and $P$ to match the maximum signal length.
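A minimal Python sketch of the per-frame computation behind Eq. 1, assuming OpenCV's bundled frontal-face Haar cascade; the handling of frames without a detected face and all variable names are illustrative choices rather than the authors' exact implementation.

```python
import cv2
import numpy as np

# OpenCV ships a pre-trained frontal-face Haar cascade.
face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def rppg_signal(video_path: str) -> np.ndarray:
    """Return one mean-intensity triple per frame (Eq. 1), one row per frame."""
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_det.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # illustrative choice: skip frames without a detected face
        x, y, w, h = faces[0]            # ROI = first detected face
        roi = frame[y:y + h, x:x + w]
        # Mean over all W*H ROI pixels per channel; OpenCV orders channels B, G, R.
        signal.append(roi.reshape(-1, 3).mean(axis=0))
    cap.release()
    return np.asarray(signal)
```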
+ +$$ +{I}^{\prime } = \text{concatenate}\left( {\bar{I}c, P}\right) +$$ + +$$ +{I}^{\prime \prime } = \operatorname{flatten}\left( {I}^{\prime }\right) \tag{3} +$$ + +$$ +{F}_{\text{early }} = \operatorname{NNet}\left( {{I}^{\prime \prime }, C}\right) +$$ + +Where $I$ is the input shape, $C$ denotes the number of classes, $\bar{I}c$ is the mean intensity within the ROI from the rPPG signals, $P$ represents the facial features, ${NNet}$ represents the early fusion network and ${F}_{\text{early }}$ is the output of the early fusion. + +ii) Late Fusion: In the late fusion approach, the rPPG and visual models are trained separately, and their outputs are combined using a weighted average. Eq. 4 represents a late fusion approach where the models are trained separately, and their outputs are combined in the final output ${F}_{\text{late }}$ . + +$$ +{F}_{\text{late }} = {w}_{1} \cdot {M}_{\mathrm{{rPPG}}}\left( {\bar{I}c}\right) + {w}_{2} \cdot {M}_{\text{facial }}\left( P\right) \tag{4} +$$ + +Where ${M}_{\mathrm{{rPPG}}}\left( {\bar{I}c}\right)$ and ${M}_{\text{facial }}\left( P\right)$ represent the outputs of the rPPG model and the visual model, respectively, and ${w}_{1}$ and ${w}_{2}$ are the weights assigned to each model’s output in the final fusion. + +### 2.3 Emotion Classification + +This study employs three separate models for emotion classification. Two of these models operate independently, utilizing rPPG signals and facial features. The third model operates via 'early fusion,' exploiting the combined context of data from the rPPG and visual models. The outputs of these individual models are then collaboratively integrated through a 'late fusion' approach that uses a weighted addition technique. The individual models, based on rPPG signals and facial features, are constructed as follows. + +i) rPPG Model: This model utilizes a Deep Convolutional Neural Network (CNN) with two hidden layers. It incorporates Rectified Linear Unit (ReLU) activation functions for emotion classification derived from rPPG signals. + +ii) Visual Model: This model, built on facial features, employs a ResNet-based Deep CNN with two hidden layers and ReLU activation functions. + +### 2.4 Interpretability + +An explainability method based on permutation feature importance (PFI) [17] is implemented, which is used to estimate the importance of features by permuting the values of each feature and measuring the resulting impact on model performance. The PFI of feature $j$ is the decrease in the model score when values of feature $j$ are randomly permuted. PFI for a feature $j$ is the difference in the model score when the values of feature $j$ are randomly permuted. Eq. 5 mathematically represents the concept of permutation feature importance. 
+ +$$ +{PFI}\left( j\right) = {E}_{\pi }\left\lbrack {f\left( {X}^{\left( i\right) }\right) }\right\rbrack - {E}_{\pi }\left\lbrack {f\left( {X}_{{\pi }_{j}}^{\left( i\right) }\right) }\right\rbrack \tag{5} +$$ + +Where $\operatorname{PFI}\left( j\right)$ is the permutation feature importance of feature $j,{E}_{\pi }\left\lbrack {f\left( {X}^{\left( i\right) }\right) }\right\rbrack$ is the expected value of the model score over all samples in the dataset when the model is scored normally, ${E}_{\pi }\left\lbrack {f\left( {X}_{{\pi }_{j}}^{\left( i\right) }\right) }\right\rbrack$ is the expected value of the model score when the values of feature $j$ are permuted according to some permutation $\pi$ , and ${X}_{{\pi }_{j}}^{\left( i\right) }$ denotes the dataset ${X}^{\left( i\right) }$ with the values of feature $j$ permuted according to $\pi$ . + +## 3 Results and Discussion + +### 3.1 Experimental Setup + +The emotion classification experiments have been performed on the IEMOCAP dataset [18] consisting of 10,039 videos labeled with ten discrete emotion labels ('neutral," happy, 'sad," angry, 'excited, 'frustrated,' 'fearful,' 'surprised,' 'distressed' and 'other'). The model training has been trained on NVIDIA RTX 4090 GPU for 50 epochs with a batch size of 32 and a learning rate of 0.001 . The performance has been evaluated using accuracy, precision, recall, and F1 score metrics. + +### 3.2 Results + +Table 1 summarizes the accuracy of the individual and fusion models, whereas the average contributions of rPPG and visual modalities towards emotion recognition in the early fusion setup are presented in Table 2. The proposed framework has demonstrated an emotion classification accuracy of 54.61%, and the average contributions of rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively. + +Table 1: Detailed performance of the individual and fusion models. + +
## 3 Results and Discussion

### 3.1 Experimental Setup

The emotion classification experiments have been performed on the IEMOCAP dataset [18], consisting of 10,039 videos labeled with ten discrete emotion labels ('neutral', 'happy', 'sad', 'angry', 'excited', 'frustrated', 'fearful', 'surprised', 'distressed' and 'other'). The model has been trained on an NVIDIA RTX 4090 GPU for 50 epochs with a batch size of 32 and a learning rate of 0.001. The performance has been evaluated using accuracy, precision, recall, and F1 score metrics.

### 3.2 Results

Table 1 summarizes the accuracy of the individual and fusion models, whereas the average contributions of the rPPG and visual modalities towards emotion recognition in the early fusion setup are presented in Table 2. The proposed framework has demonstrated an emotion classification accuracy of 54.61%, and the average contributions of the rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively.

Table 1: Detailed performance of the individual and fusion models.

| Model | Accuracy | Precision | Recall | F1 Score |
| --- | --- | --- | --- | --- |
| rPPG | 37.45% | 0.37 | 0.38 | 0.38 |
| Facial Features | 46.42% | 0.49 | 0.49 | 0.49 |
| Late Fusion | 41.17% | 0.43 | 0.42 | 0.42 |
| Early Fusion | 54.61% | 0.56 | 0.58 | 0.57 |
Table 2: Average contribution of each modality towards emotion recognition.
| Modality | Contribution |
| --- | --- |
| rPPG | 37.67% |
| Visual | 62.33% |
Table 1 shows that both of the individual models performed reasonably well. However, the fusion model outperformed the individual models, demonstrating the advantage of combining rPPG signals and facial feature information for emotion recognition.

### 3.3 Discussion

This paper presents a compelling case for including multimodal context in emotion recognition. While the models trained on individual modalities show moderate performance, their fusion significantly improves emotion recognition accuracy. This emphasizes the complementarity of these modalities in capturing emotional states. However, the late fusion of modalities underperforms compared to the early fusion approach, indicating that integrating modalities at an earlier stage allows for more effective learning of emotional states.

However, the proposed work has a few limitations. The IEMOCAP dataset, while widely used, may limit the generalizability of the findings. Cross-dataset experiments on larger and more diverse datasets could further strengthen the results. Moreover, more modalities such as audio, text, and other physiological signals can also be incorporated for emotion recognition. Finally, a more in-depth interpretability mechanism can be developed to explain the role of individual features in emotion detection.

## 4 Conclusion

This work presents a multimodal emotion recognition framework using rPPG signals and facial features. It paves the way for practical applications where transparent and interpretable emotion understanding is important. The results highlight the benefits of integrating multiple modalities for emotion recognition, with an early fusion approach yielding the highest accuracy. While there are limitations and potential improvements, our study provides a promising direction for future research in emotion recognition, emphasizing the importance of multimodal data and fusion techniques.

## References

[1] Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. A Review of Affective Computing: From Unimodal Analysis to Multimodal Fusion. Elsevier Information Fusion Journal, 37:98-125, 2017.

[2] Yucel Cimtay, Erhan Ekmekcioglu, and Seyma Caglar-Ozhan. Cross Subject Multimodal Emotion Recognition Based on Hybrid Fusion. IEEE Access, 8:168865-168878, 2020.

[3] Tadas Baltrušaitis, Chaitanya Ahuja, and Louis Morency. Multimodal Machine Learning: A Survey and Taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 41(2):423-443, 2018.

[4] Andrei Paleyes, Raoul-Gabriel Urma, and Neil D Lawrence. Challenges in Deploying Machine Learning: A Survey of Case Studies. ACM Computing Surveys, 55(6):1-29, 2022.

[5] Zitong Yu, Xiaobai Li, and Guoying Zhao. Facial Video-based Physiological Signal Measurement: Recent Advances and Affective Applications. Signal Processing Magazine, 38(6):50-58, 2021.

[6] Sarthak Malik, Puneet Kumar, and Balasubramanian Raman. Towards Interpretable Facial Emotion Recognition. In The 12th Indian Conference on Computer Vision, Graphics and Image Processing, pages 1-9, 2021.

[7] Nannan Wang, Xinbo Gao, Dacheng Tao, Heng Yang, and Xuelong Li. Facial Feature Point Detection: A Comprehensive Survey. Neurocomputing, 275:50-65, 2018.

[8] Zhihong Zeng, Maja Pantic, Glenn I Roisman, and Thomas S Huang. A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 31(1):39-58, 2009.

[9] Tianrong Rao, Xiaoxu Li, and Min Xu.
Learning Multi-level Deep Representations for Image Emotion Classification. Neural Processing Letters, pp. 1-19, 2019. + +[10] M Xu, F Zhang, and S Khan. Improve Accuracy of Speech Emotion Recognition with Attention Head Fusion. In IEEE Annual Computing and Communication Workshop and Conference (CCWC), pages 1058-1064, 2020. + +[11] Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. DialogueRNN: An Attentive RNN for Emotion Detection in Conversations. In The 31st AAAI Conference on Artificial Intelligence (AAAI), volume 33, pages 6818-6825, 2019. + +[12] W James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. Definitions, Methods, and Applications in Interpretable Machine Learning. Proceedings of the National Academy of Sciences, 116(44):22071-22080, 2019. + +[13] Luca Longo et al. Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions. In The Springer International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), pages 1-16, 2020. + +[14] Marco Tulio Ribeiro, S. Singh, and C. Guestrin. Why Should I Trust You? Explaining Predictions of Any Classifier. In International Conference on Knowledge Discovery & Data mining (KDD), pages 1135-1144, 2016. + +[15] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. In The IEEE/CVF International Conference on Computer Vision (ICCV), pages 618-626, 2017. + +[16] Sander Soo. Object Detection using Haar Cascade Classifier. Institute of Computer Science, University of Tartu, 2(3):1-12, 2014. + +[17] André Altmann, Laura Tolosi, Oliver Sander, and Thomas Lengauer. Permutation Importance: A Corrected Feature Importance Measure. Bioinformatics, 26(10):1340-1347, 2010. + +[18] Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. IEMOCAP: Interactive Emotional dyadic MOtion CAPture data. Language Resources and Evaluation, 42(4), 2008. + +[19] Davis E. King. DLIB Models. https://github.com/davisking/dlib-models, 2016. Accessed on 21.05.2023. + +[20] Hong-Wei Ng and Stefan Winkler. A Data Driven Approach to Cleaning Large Face Datasets. In IEEE International Conference on Image Processing (ICIP), pages 343-347. IEEE, 2014. \ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_tex/Initial_manuscript.tex b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..ef3c5b348a0e0b704d7f465a8fc5fbb36ce6961c --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/1w8vMnVeJB/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,171 @@ +§ INTERPRETABLE MULTIMODAL EMOTION RECOGNITION USING FACIAL FEATURES AND PHYSIOLOGICAL SIGNALS + +Puneet Kumar and Xiaobai Li* + +CMVS, University of Oulu, Finland. + +{puneet.kumar, xiaobai.li}@oulu.fi + +§ ABSTRACT + +This paper aims to demonstrate the importance and feasibility of fusing multimodal information for emotion recognition. It introduces a multimodal framework for emotion understanding by fusing the information from visual facial features and rPPG signals extracted from the input videos. 
An interpretability technique based on permutation feature importance analysis has also been implemented to compute the contributions of the rPPG and visual modalities toward classifying a given input video into a particular emotion class. The experiments on the IEMOCAP dataset demonstrate that the emotion classification performance improves by combining the complementary information from multiple modalities.

Keywords: Affective Computing, Interpretable & Deployable AI, Multimodal Analysis, rPPG, Facial Features.

§ 1 INTRODUCTION

Emotions, characterized by a rich and complex mix of physiological and cognitive states, hold significant importance across multiple fields such as psychology, human-computer interaction, affective computing, and even extending to broader domains such as virtual reality, user experience design, healthcare, and education [1]. Understanding and accurately interpreting emotions is essential in human communication and social interactions [2]. With the surge in the development and accessibility of multimodal sensing technologies, researchers can explore multiple modalities to enhance the accuracy and robustness of emotion recognition systems [3]. The current research trend focuses on building Artificial Intelligence (AI) systems that can be deployed for real-life applications [4].

Two such modalities, facial expressions and physiological signals, have garnered significant attention due to the rich information they offer and their non-invasive nature [5]. Facial expressions, direct and non-invasive indicators of emotion, have been thoroughly investigated [6]. Various techniques involving the extraction of facial landmarks, local descriptors, or holistic representations have been proposed to capture nuanced variations in facial muscle movements that reflect different emotional states [7]. Physiological signals, such as remote photoplethysmography (rPPG) signals, provide another layer of emotional cues. These signals, obtained through non-contact video-based techniques, offer insights into physiological changes associated with emotional responses [5]. The interplay of these two modalities offers a more holistic understanding of emotions, thus enhancing the robustness of emotion recognition systems [8].

Emotion classification through audio-visual information is a well-established research task [9, 10, 11]. However, recognizing emotion using the physiological context along with the audio-visual information leaves scope for further exploration [5]. Furthermore, despite the significant advancements, many multimodal emotion recognition models do not provide meaningful interpretations for their predictions [12, 13]. Most existing interpretability techniques have been implemented for the visual modality and have yet to be fully explored for multimodal analysis [14, 15, 6].

This paper proposes an interpretable multimodal emotion recognition framework that extracts rPPG signals and facial features from the input videos and uses their combined context for emotion detection. The Haar cascades classifier [16] has been implemented to extract the rPPG signals, whereas a pre-trained ResNet-34-based network extracts the visual features. Further, early and late fusion approaches that integrate the static facial expression features and dynamic rPPG signals to capture both spatial and temporal aspects of emotions have been incorporated.
An interpretability technique based on permutation feature importance (PFI) [17] has also been incorporated that computes the contribution of the rPPG and visual modalities towards classifying a given input video into a particular emotion class. The experiments performed on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset [18] have resulted in an accuracy of 54.61% while classifying the input videos into ten emotion classes ('neutral', 'happy', 'sad', 'angry', 'excited', 'frustrated', 'fearful', 'surprised', 'distressed' and 'other'). The improvement of the multimodal context over the individual accuracies of the rPPG and visual modalities alone advocates the importance of leveraging the multimodal context for emotion understanding. The average contributions of the rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively.

*Corresponding Author: xiaobai.li@oulu.fi

The contributions of this paper can be summarized as follows:

 * A multimodal emotion recognition framework has been proposed to classify a given video into discrete emotion classes. It extracts the dynamic rPPG signals from the input videos and combines them with static facial expressions using early and late fusion approaches.

 * An interpretability technique has been incorporated that computes the contribution of the rPPG and visual modalities towards emotion classification using the PFI algorithm.

 * Extensive experiments have been performed on the IEMOCAP dataset, and the results have been presented in terms of accuracy, precision, recall, F1 score, and modality-wise contributions toward emotion classification.

§ 2 PROPOSED METHOD

The proposed framework is diagrammatically depicted in Figure 1 and described in the following sections.

 < g r a p h i c s >

Figure 1: Schematic illustration of the proposed framework.

§ 2.1 PREPROCESSING AND FEATURE EXTRACTION

The video files are loaded and processed frame by frame using the OpenCV (cv2) library ${}^{1}$ to extract rPPG signals and facial features.

i) rPPG Signals Extraction: Face detection within each video frame during the rPPG signal extraction process is accomplished using Haar cascades [16]. The region of interest (ROI), predominantly the facial region, is isolated from each frame, after which the mean intensity is computed to generate the rPPG signal for each video. The calculation of the mean intensity within the ROI $(\bar{I}_c)$ is represented in Eq. 1.

$$
\bar{I}_c = \frac{1}{N}\sum_{x=1}^{W}\sum_{y=1}^{H} I_{x,y,c} \tag{1}
$$

Where $I_{x,y,c}$ is the intensity of the pixel at location $(x, y)$ for color channel $c$ in the ROI, $N$ is the total number of pixels in the ROI, $W$ and $H$ represent the width and height of the ROI, respectively, and $c \in \{R, G, B\}$.

ii) Facial Features Extraction: Facial feature extraction employs Dlib's shape predictor [19], a version of ResNet-34 trained on the Face Scrub dataset [20]. As per Eq. 2, it identifies 68 facial landmarks for each detected face within every frame, distinguishing unique facial characteristics.
+ +$$ +P = D\left( {F,\left\{ {L}_{i}\right\} }\right) \tag{2} +$$ + +$$ +F = \left\lbrack {{f}_{1},{f}_{2},\ldots ,{f}_{n}}\right\rbrack +$$ + +${}^{1}$ https://opencv.org/ + +Where $F$ represents the face detected in a frame, $P$ represents the predicted points on the face, $D\left( {F,\left\{ {L}_{i}\right\} }\right)$ is the function for predicting points on the face, and ${L}_{ - }i$ is the set of landmark points for the ${i}^{th}$ point. As signals from different videos might differ in length, it becomes crucial to standardize the input for the neural network model. This standardization is achieved by zero-padding $\bar{I}$ and $P$ to match the maximum signal length. + +§ 2.2 MULTIMODAL FEATURE FUSION + +Early fusion and late fusion approaches are used to combine the rPPG signals and facial features. + +i) Early Fusion: In the early fusion approach, the rPPG signals and facial features are concatenated before being fed into the model. The fused data are then passed through a neural network comprising a flatten layer, followed by CNN layers of dimensions 512 and 256, and the final layer of size equal to the number of classes. The flatten layer transforms the 3D input tensor into a 1D tensor, and the subsequent CNN layers functions perform the classification task. The model structure is represented as per Eq. 3. + +$$ +{I}^{\prime } = \text{ concatenate }\left( {\bar{I}c,P}\right) +$$ + +$$ +{I}^{\prime \prime } = \operatorname{flatten}\left( {I}^{\prime }\right) \tag{3} +$$ + +$$ +{F}_{\text{ early }} = \operatorname{NNet}\left( {{I}^{\prime \prime },C}\right) +$$ + +Where $I$ is the input shape, $C$ denotes the number of classes, $\bar{I}c$ is the mean intensity within the ROI from the rPPG signals, $P$ represents the facial features, ${NNet}$ represents the early fusion network and ${F}_{\text{ early }}$ is the output of the early fusion. + +ii) Late Fusion: In the late fusion approach, the rPPG and visual models are trained separately, and their outputs are combined using a weighted average. Eq. 4 represents a late fusion approach where the models are trained separately, and their outputs are combined in the final output ${F}_{\text{ late }}$ . + +$$ +{F}_{\text{ late }} = {w}_{1} \cdot {M}_{\mathrm{{rPPG}}}\left( {\bar{I}c}\right) + {w}_{2} \cdot {M}_{\text{ facial }}\left( P\right) \tag{4} +$$ + +Where ${M}_{\mathrm{{rPPG}}}\left( {\bar{I}c}\right)$ and ${M}_{\text{ facial }}\left( P\right)$ represent the outputs of the rPPG model and the visual model, respectively, and ${w}_{1}$ and ${w}_{2}$ are the weights assigned to each model’s output in the final fusion. + +§ 2.3 EMOTION CLASSIFICATION + +This study employs three separate models for emotion classification. Two of these models operate independently, utilizing rPPG signals and facial features. The third model operates via 'early fusion,' exploiting the combined context of data from the rPPG and visual models. The outputs of these individual models are then collaboratively integrated through a 'late fusion' approach that uses a weighted addition technique. The individual models, based on rPPG signals and facial features, are constructed as follows. + +i) rPPG Model: This model utilizes a Deep Convolutional Neural Network (CNN) with two hidden layers. It incorporates Rectified Linear Unit (ReLU) activation functions for emotion classification derived from rPPG signals. + +ii) Visual Model: This model, built on facial features, employs a ResNet-based Deep CNN with two hidden layers and ReLU activation functions. 
§ 2.4 INTERPRETABILITY

An explainability method based on permutation feature importance (PFI) [17] is implemented, which estimates the importance of a feature by permuting its values and measuring the resulting impact on model performance. The PFI of feature $j$ is the decrease in the model score when the values of feature $j$ are randomly permuted. Eq. 5 mathematically represents this concept.

$$
\mathrm{PFI}(j) = E_{\pi}\left[ f\left( X^{(i)} \right) \right] - E_{\pi}\left[ f\left( X_{\pi_j}^{(i)} \right) \right] \tag{5}
$$

Where $\mathrm{PFI}(j)$ is the permutation feature importance of feature $j$, $E_{\pi}\left[ f\left( X^{(i)} \right) \right]$ is the expected value of the model score over all samples in the dataset when the model is scored normally, $E_{\pi}\left[ f\left( X_{\pi_j}^{(i)} \right) \right]$ is the expected value of the model score when the values of feature $j$ are permuted according to some permutation $\pi$, and $X_{\pi_j}^{(i)}$ denotes the dataset $X^{(i)}$ with the values of feature $j$ permuted according to $\pi$.

§ 3 RESULTS AND DISCUSSION

§ 3.1 EXPERIMENTAL SETUP

The emotion classification experiments have been performed on the IEMOCAP dataset [18], consisting of 10,039 videos labeled with ten discrete emotion labels ('neutral', 'happy', 'sad', 'angry', 'excited', 'frustrated', 'fearful', 'surprised', 'distressed' and 'other'). The model has been trained on an NVIDIA RTX 4090 GPU for 50 epochs with a batch size of 32 and a learning rate of 0.001. The performance has been evaluated using accuracy, precision, recall, and F1 score metrics.

§ 3.2 RESULTS

Table 1 summarizes the accuracy of the individual and fusion models, whereas the average contributions of the rPPG and visual modalities towards emotion recognition in the early fusion setup are presented in Table 2. The proposed framework has demonstrated an emotion classification accuracy of 54.61%, and the average contributions of the rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively.

Table 1: Detailed performance of the individual and fusion models.

Model  Accuracy  Precision  Recall  F1 Score

rPPG  37.45%  0.37  0.38  0.38

Facial Features  46.42%  0.49  0.49  0.49

Late Fusion  41.17%  0.43  0.42  0.42

Early Fusion  54.61%  0.56  0.58  0.57

Table 2: Average contribution of each modality towards emotion recognition.

Modality  Contribution

rPPG  37.67%

Visual  62.33%

Table 1 shows that both of the individual models performed reasonably well. However, the fusion model outperformed the individual models, demonstrating the advantage of combining rPPG signals and facial feature information for emotion recognition.

§ 3.3 DISCUSSION

This paper presents a compelling case for including multimodal context in emotion recognition. While the models trained on individual modalities show moderate performance, their fusion significantly improves emotion recognition accuracy. This emphasizes the complementarity of these modalities in capturing emotional states.
However, the late fusion of modalities underperforms compared to the early fusion approach, indicating that integrating modalities at an earlier stage allows for more effective learning of emotional states.

However, the proposed work has a few limitations. The IEMOCAP dataset, while widely used, may limit the generalizability of the findings. Cross-dataset experiments on larger and more diverse datasets could further strengthen the results. Moreover, more modalities such as audio, text, and other physiological signals can also be incorporated for emotion recognition. Finally, a more in-depth interpretability mechanism can be developed to explain the role of individual features in emotion detection.

§ 4 CONCLUSION

This work presents a multimodal emotion recognition framework using rPPG signals and facial features. It paves the way for practical applications where transparent and interpretable emotion understanding is important. The results highlight the benefits of integrating multiple modalities for emotion recognition, with an early fusion approach yielding the highest accuracy. While there are limitations and potential improvements, our study provides a promising direction for future research in emotion recognition, emphasizing the importance of multimodal data and fusion techniques. \ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_md/Initial_manuscript.md b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..5997efc765188d9c99f26241d9e27b989e9158a0 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,105 @@ +# Chemically Interpretable Molecular Representation for Property Prediction

M S B Roshan ${}^{+ \dagger * }$, Nirav Bhatt ${}^{+ \dagger * }$

${}^{ + }$ BioSystems Engineering and Control Group, Department of Biotechnology, IIT Madras

${}^{ \dagger }$ Robert Bosch Centre for Data Science and Artificial Intelligence (RBCDSAI), IIT Madras

*Centre for Integrative Biology and Systems medicinE (IBSE), IIT Madras

## Abstract

Molecular property prediction using a molecule's structure is a crucial step in drug and novel material discovery, as computational screening approaches rely on predicted properties to refine the existing design of molecules. Although the problem has existed for decades, it has recently gained attention due to the advent of big data and deep learning. On average, one FDA drug is approved for 250 compounds entering the preclinical research stage, requiring the screening of chemical libraries containing more than 20000 compounds. In-silico property prediction approaches using learnable representations increase the pace of development and reduce the cost of discovery. We propose developing molecule representations using functional groups in chemistry to address the problem of deciphering the relationship between a molecule's structure and property. Functional groups are substructures in a molecule with distinctive chemical properties that influence its chemical characteristics. These substructures are found by (i) curating functional groups annotated by chemists and (ii) mining a large corpus of molecules to extract frequent substructures using a pattern-mining algorithm.
We show that the Functional Group Representation (FGR) framework outperforms state-of-the-art models on several benchmark datasets while ensuring explainability between the predicted property and the molecular structure for experimentalists.

## 1 Introduction

Molecular property prediction is a task that finds applications in drug discovery, quantum mechanical attribute prediction of molecules, hydrophobicity prediction, material design and drug toxicity prediction. In the field of drug discovery and novel material discovery, computational approaches for predicting molecular properties can boost the processes of finding better drug candidates and materials [1, 2]. Characterising and predicting molecular properties is one of the most crucial problems in drug discovery. Numerous strategies are being used globally to enhance efficiency and improve the success of the drug discovery and development process. These strategies use a wide range of data such as genomics and proteomics, drug molecule structures and properties, and methods such as pharmaceutical modelling and artificial intelligence [3]. On average, the US FDA approves one drug for every five compounds entering clinical trials; those five are, in turn, the result of thorough preclinical testing of 250 compounds, themselves selected by screening 5000-10000 compounds [4]. Experimentally testing many such compounds is both time- and resource-consuming. In recent years, the use of computational methods has increased significantly in the drug discovery domain [3]. The traditional computational approaches for in-silico molecular property prediction have relied on extracting fingerprints or hand-engineered features. Since these features are typically designed based on the property prediction task, they capture only features relevant to the particular task.
For novel molecule discovery and drug repurposing applications, chemically interpretable molecular representation is essential for testing the generated molecules via wet-lab experiments by chemists. Hence, a chemistry-inspired representation of molecules can be vital in achieving interpretability and improved predictive performance of these models. + +In this work, we propose a molecular representation learning framework that uses the concept of function groups in chemistry. The functional groups are substructures in a molecule that are attributed to the chemical properties of the molecule, including its reactivity. This work proposes a functional group representation (FGR) framework that allows embedding molecules based on their substructures. Firstly, we introduce two approaches for the generation of the functional group vocabulary, namely, functional groups (FG) curated from the OCHEM database [15] and mined functional groups (MFG) from the PubChem database [16]. Then, we develop four different latent feature encodings using the FG- and MFG-based vocabulary generated in the first step for property prediction tasks. Further, we investigate the effect of pretraining using unlabelled molecules in the PubChem database on the property prediction tasks. We perform experiments on several benchmark datasets in the available literature and compare the results of the proposed FGR framework in this work with other state-of-the-art methods. We demonstrate that the FGR framework outperforms several property prediction tasks or provides comparable results on several other tasks compared to the state-of-the-art methods while providing interpretability to chemists and practitioners. + +## 2 Objectives + +O1 Generate a functional group vocabulary characterised by chemists and extract frequent sub-structures from a large chemical corpus. + +O2 Learn functions ${f}_{{\mathbf{x}}_{G}} : {\mathbf{x}}_{G} \rightarrow {\mathbf{z}}_{G}$ using autoencoders [17] where ${\mathbf{x}}_{G}$ is a multi-hot vector of appropriate dimension (say $p$ ) depending on the input representation and ${\mathbf{z}}_{G} \in {\mathbb{R}}^{l}$ is the learnt latent vector. + +O3 Decode the predicted property and molecular structure relationship using gradient-based model agnostic interpretability methods. + +## 3 Methodology + +In this work, a set of SMILES strings for $n$ molecules, $\mathcal{S} = \left\{ {{S}_{1},{S}_{2},\ldots ,{S}_{n}}\right\}$ which might be associated with a property $y$ is considered. Furthermore, we also incorporate 2D global molecular descriptors to augment the learnt representation (FGR-Desc) and increase the performance of downstream property prediction tasks. The methods are summarised in Figure 1. + +- Generation of Functional Group vocabulary: In this study, we use the OCHEM [15] database, which has a collection of 2786 functional groups (FG) characterised by chemists and frequent sub-structures are recognised using a sequential pattern mining algorithm applied on $\mathcal{S}$ from the PubChem database $(n > {114}$ million). Based on the frequency threshold $\eta ,{3000}$ mined functional groups are identified (MFG). Then, any molecule ${S}_{i} \in \mathcal{S}$ can be represented by a multi-one-hot encoded vector, ${\left\lbrack {x}_{1},{x}_{2},\ldots ,{x}_{b}\right\rbrack }^{T}$ where ${x}_{i} = 1$ if ${FG}{R}_{i} \in {S}_{i}$ and ${x}_{i} = 0$ , if ${FG}{R}_{i} \notin {S}_{i}$ . 
- Pretraining and Property Prediction: Pretraining is decoupled from the downstream property prediction to develop a global representation of the chemical space that can be applied to any task. For the pretraining step, the autoencoder is trained separately from the downstream property prediction task, and the reconstruction loss is minimized over all the molecules in the database. One of the preliminary challenges of encoder-decoder pretraining is the determination of the dimension of the latent feature vector; hyper-parameter optimization is performed to obtain the dimension of the latent feature vectors for all four types of encodings. A fully connected neural network is used to compute a probability score $p(\mathbf{x}_G) \in [0, 1]$ based on $\mathbf{z}_G$ (the latent feature vector) for property prediction.

- Interpretability: We evaluate each input feature's contribution to the model's output using primary attribution methods like feature permutation, integrated gradients and gradient SHAP [18, 19]. The goodness of explanations is quantified using infidelity and sensitivity metrics. A visualisation tool is also developed to highlight essential substructures that contribute to predicting desired properties, as shown in Figure 2.

![019640df-e814-7c6c-9b2c-e22fd8b77c14_2_212_155_1380_730_0.jpg](images/019640df-e814-7c6c-9b2c-e22fd8b77c14_2_212_155_1380_730_0.jpg)

Figure 1: Overview of the Proposed Methodology: A) FG Representation, B) MFG Representation, C) Descriptor Representation, D) Latent Representation for FGR and Property Prediction Module

![019640df-e814-7c6c-9b2c-e22fd8b77c14_2_253_1009_1291_260_0.jpg](images/019640df-e814-7c6c-9b2c-e22fd8b77c14_2_253_1009_1291_260_0.jpg)

Figure 2: Overview of Interpretability Analysis: For any given property, attribution scores for input features are calculated and the substructures can be visualised overlapped with the scores
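As referenced in the first step above, the following is a minimal RDKit sketch of the multi-hot functional-group encoding $\mathbf{x}_G$; the three SMARTS patterns are illustrative stand-ins, not the actual 2786-entry FG or 3000-entry MFG vocabulary.

```python
from rdkit import Chem
import numpy as np

# Illustrative stand-ins for the FG/MFG vocabulary, written as SMARTS patterns.
FG_VOCAB = [
    "[CX3](=O)[OX2H1]",  # carboxylic acid
    "[NX3;H2]",          # primary amine
    "[OX2H]",            # hydroxyl
]

def fg_multi_hot(smiles: str) -> np.ndarray:
    """Return x_G with x_i = 1 if the i-th functional group occurs in the molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    x = np.zeros(len(FG_VOCAB), dtype=np.float32)
    for i, smarts in enumerate(FG_VOCAB):
        if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts)):
            x[i] = 1.0
    return x

print(fg_multi_hot("CC(=O)O"))  # acetic acid -> [1. 0. 1.]
```

Vectors of this kind are what the autoencoder of the second step compresses into the latent representation $\mathbf{z}_G$.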
## 4 Results

Extensive evaluation of the model was done for robustness and generalizability on classification and regression tasks using five-fold random and scaffold splits. The results are summarized in Table 1 and Table 2.

**Scaffold Split Classification (ROC-AUC)** $\uparrow$

| Dataset | **FGR** | DMPNN | GEM |
| --- | --- | --- | --- |
| BACE | $0.89 \pm 0.01$ | $0.86 \pm 0.05$ | $0.86 \pm 0.01$ |
| BBBP | $\mathbf{0.96 \pm 0.008}$ | $0.92 \pm 0.02$ | $0.72 \pm 0.00$ |
| Tox21 | $0.71 \pm 0.01$ | $0.69 \pm 0.01$ | $0.78 \pm 0.001$ |
| ClinTox | $\mathbf{0.99 \pm 0.002}$ | $0.88 \pm 0.03$ | $0.90 \pm 0.01$ |
| SIDER | $0.72 \pm 0.07$ | $0.63 \pm 0.03$ | $0.67 \pm 0.004$ |
Table 1: Comparison of ROC-AUC scores for FGR, DMPNN [6], and GEM [8]
**Scaffold Split Regression (RMSE) $\downarrow$**

| Dataset | **FGR** | DMPNN | GEM |
|---------|---------|-------|-----|
| ESOL | 0.62 ± 0.06 | 1.05 ± 0.008 | 0.79 ± 0.02 |
| FreeSolv | 0.78 ± 0.19 | 2.08 ± 0.082 | 1.87 ± 0.094 |
| Lipo | 0.64 ± 0.035 | 0.68 ± 0.016 | 0.66 ± 0.008 |
Table 2: Comparison of RMSE scores for FGR, DMPNN [6], and GEM [8]

## 5 Conclusion

This work presents a functional group representation (FGR) framework that uses functional groups from chemistry for molecular representation learning. The framework allows four types of molecular representations: FG, MFG, combined FG-MFG, and FG-MFG with descriptors. The proposed FGR-based molecular embeddings have been evaluated on several benchmark datasets. The proposed framework performs on par with, and sometimes better than, the state-of-the-art algorithms on classification tasks. The FGR framework also provides a chemically interpretable encoding, as it is grounded in the rules of chemistry to maintain explainability. In the proposed framework, autoencoders are used to learn latent representations. We also demonstrated that pretraining in the FGR framework is possible due to the decoupling between the latent representation learning task and the property prediction task. It is envisaged to extend the FGR framework to build pre-trained models with explainability using self-supervised learning on large-scale molecular data.

## References

[1] W Patrick Walters and Regina Barzilay. Applications of deep learning in molecule generation and molecular property prediction. Accounts of Chemical Research, 54(2):263-270, 2020.

[2] Oliver Wieder, Stefan Kohlbacher, Mélaine Kuenemann, Arthur Garon, Pierre Ducrot, Thomas Seidel, and Thierry Langer. A compact review of molecular property prediction with graph neural networks. Drug Discovery Today: Technologies, 37:1-12, 2020.

[3] Geoffrey Kabue Kiriiri, Peter Mbugua Njogu, and Alex Njoroge Mwangi. Exploring different approaches to improve the success of drug discovery and development projects: a review. Future Journal of Pharmaceutical Sciences, 6(1):1-12, 2020.

[4] Jie Shen and Christos A Nicolaou. Molecular property prediction: recent trends in the era of artificial intelligence. Drug Discovery Today: Technologies, 32:29-36, 2019.

[5] Andreas Mayr, Günter Klambauer, Thomas Unterthiner, Marvin Steijaert, Jörg K Wegner, Hugo Ceulemans, Djork-Arné Clevert, and Sepp Hochreiter. Large-scale comparison of machine learning methods for drug target prediction on ChEMBL. Chemical Science, 9(24):5441-5451, 2018.

[6] Kevin Yang, Kyle Swanson, Wengong Jin, Connor Coley, Philipp Eiden, Hua Gao, Angel Guzman-Perez, Timothy Hopper, Brian Kelley, Miriam Mathea, Andrew Palmer, Volker Settels, Tommi Jaakkola, Klavs Jensen, and Regina Barzilay. Analyzing learned molecular representations for property prediction. Journal of Chemical Information and Modeling, 59(8):3370-3388, 2019.

[7] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pages 1263-1272. PMLR, 2017.

[8] Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. Geometry-enhanced molecular representation learning for property prediction. Nature Machine Intelligence, 4(2):127-134, 2022.

[9] Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. Advances in Neural Information Processing Systems, 33:12559-12571, 2020.

[10] Fan-Yun Sun, Jordan Hoffman, Vikas Verma, and Jian Tang. InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In International Conference on Learning Representations, 2019.

[11] Chengqiang Lu, Qi Liu, Chao Wang, Zhenya Huang, Peize Lin, and Lixin He. Molecular property prediction: A multilevel quantum interactions modeling perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 1052-1060, 2019.

[12] Jonathan M Stokes, Kevin Yang, Kyle Swanson, Wengong Jin, Andres Cubillos-Ruiz, Nina M Donghia, Craig R MacNair, Shawn French, Lindsey A Carfrae, Zohar Bloom-Ackermann, et al. A deep learning approach to antibiotic discovery. Cell, 180(4):688-702, 2020.

[13] Seyone Chithrananda, Gabriel Grand, and Bharath Ramsundar. ChemBERTa: Large-scale self-supervised pretraining for molecular property prediction. arXiv preprint arXiv:2010.09885, 2020.

[14] Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Chee-Kong Lee.
Motif-based graph self-supervised learning for molecular property prediction. Advances in Neural Information Processing Systems, 34:15870-15882, 2021.

[15] Iurii Sushko, Sergii Novotarskyi, Robert Körner, Anil Kumar Pandey, Matthias Rupp, Wolfram Teetz, Stefan Brandmaier, Ahmed Abdelaziz, Volodymyr V Prokopenko, Vsevolod Y Tanchuk, et al. Online chemical modeling environment (OCHEM): Web platform for data storage, model development and publishing of chemical information. Journal of Computer-Aided Molecular Design, 25:533-554, 2011.

[16] Sunghwan Kim, Paul A Thiessen, Evan E Bolton, Jie Chen, Gang Fu, Asta Gindulyte, Lianyi Han, Jane He, Siqian He, Benjamin A Shoemaker, et al. PubChem substance and compound databases. Nucleic Acids Research, 44(D1):D1202-D1213, 2016.

[17] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.

[18] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319-3328. PMLR, 2017.

[19] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 2017.

\ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_tex/Initial_manuscript.tex b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..f43e53bb539068b50b708663add5804f8e536715 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/2w4CsrCUXq/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,103 @@

§ CHEMICALLY INTERPRETABLE MOLECULAR REPRESENTATION FOR PROPERTY PREDICTION

M S B Roshan ${}^{+ \dagger *}$, Nirav Bhatt ${}^{+ \dagger *}$

${}^{+}$ BioSystems Engineering and Control Group, Department of Biotechnology, IIT Madras; ${}^{\dagger}$ Robert Bosch Centre for Data Science and Artificial Intelligence (RBCDSAI), IIT Madras; * Centre for Integrative Biology and Systems medicinE (IBSE), IIT Madras

§ ABSTRACT

Molecular property prediction using a molecule's structure is a crucial step in drug and novel material discovery, as computational screening approaches rely on predicted properties to refine the existing design of molecules. Although the problem has existed for decades, it has recently gained attention due to the advent of big data and deep learning. On average, one FDA drug is approved for every 250 compounds entering the preclinical research stage, which in turn requires screening chemical libraries containing more than 20000 compounds. In-silico property prediction approaches using learnable representations increase the pace of development and reduce the cost of discovery. We propose developing molecular representations using functional groups in chemistry to address the problem of deciphering the relationship between a molecule's structure and its properties. Functional groups are substructures in a molecule with distinctive chemical properties that influence its chemical characteristics. These substructures are found by (i) curating functional groups annotated by chemists and (ii) mining a large corpus of molecules to extract frequent substructures using a pattern-mining algorithm.
We show that the Functional Group Representation (FGR) framework outperforms state-of-the-art models on several benchmark datasets while providing experimentalists with explainability that links the predicted property to the molecular structure.

§ 1 INTRODUCTION

Molecular property prediction is a task that finds applications in drug discovery, quantum mechanical attribute prediction of molecules, hydrophobicity prediction, material design and drug toxicity prediction. In the fields of drug discovery and novel material discovery, computational approaches for predicting molecular properties can accelerate the processes of finding better drug candidates and materials [1, 2]. Characterising and predicting molecular properties is one of the most crucial problems in drug discovery. Numerous strategies are being used globally to enhance efficiency and improve the success of the drug discovery and development process. These strategies use a wide range of data, such as genomics and proteomics, drug molecule structures and properties, and methods such as pharmaceutical modelling and artificial intelligence [3]. On average, the US FDA approves one drug for every five compounds entering clinical trials; these compounds are, in turn, the result of thorough preclinical testing of 250 compounds, themselves selected by screening 5000-10000 compounds [4]. Experimentally testing so many compounds is both time- and resource-consuming. In recent years, the use of computational methods in the drug discovery domain has increased significantly [3]. Traditional computational approaches for in-silico molecular property prediction have relied on extracting fingerprints or hand-engineered features. Since these features are typically designed for a specific property prediction task, they capture only information relevant to that task.

In contrast to traditional computational approaches, deep learning (DL) approaches can automatically learn features directly from molecules for the task at hand and hence can reduce the time and cost of property prediction [5, 6]. Instantaneous molecular property prediction using deep learning algorithms can help generate novel molecules with desired profiles and engineer artificial synthesis pathways faster and more cheaply. Graph neural networks (GNNs) and their variants have been widely used for molecular property prediction tasks due to their ability to generate better molecular representations [7, 6, 8, 9, 10, 11, 12]. These approaches use information on the atoms, bonds, topology, interactions and molecular geometry (3D spatial structure) of molecules for learning molecular representations. However, GNN-based approaches require a large amount of labelled data for a particular task, and for many applications it is impossible to generate such large amounts of labelled data. Several graph-based self-supervised learning approaches have been proposed to learn molecular representations from unlabelled molecular data to handle the problem of limited labelled data [9, 13, 14].

Although GNNs and self-supervised learning models have provided promising results on several property prediction tasks, the relationships between properties and molecular structures are challenging for chemists to interpret due to the complex molecular representations these methods generate.
For novel molecule discovery and drug repurposing applications, a chemically interpretable molecular representation is essential so that chemists can test the generated molecules via wet-lab experiments. Hence, a chemistry-inspired representation of molecules can be vital in achieving both interpretability and improved predictive performance of these models.

In this work, we propose a molecular representation learning framework that uses the concept of functional groups in chemistry. Functional groups are substructures of a molecule that give rise to its chemical properties, including its reactivity. This work proposes a functional group representation (FGR) framework that allows embedding molecules based on their substructures. Firstly, we introduce two approaches for generating the functional group vocabulary, namely, functional groups (FG) curated from the OCHEM database [15] and mined functional groups (MFG) from the PubChem database [16]. Then, we develop four different latent feature encodings for property prediction tasks using the FG- and MFG-based vocabulary generated in the first step. Further, we investigate the effect of pretraining on unlabelled molecules from the PubChem database on the property prediction tasks. We perform experiments on several benchmark datasets from the available literature and compare the results of the proposed FGR framework with other state-of-the-art methods. We demonstrate that the FGR framework outperforms the state-of-the-art methods on several property prediction tasks and provides comparable results on several others, while providing interpretability to chemists and practitioners.

§ 2 OBJECTIVES

O1 Generate a functional group vocabulary characterised by chemists and extract frequent sub-structures from a large chemical corpus.

O2 Learn functions $f : \mathbf{x}_G \rightarrow \mathbf{z}_G$ using autoencoders [17], where $\mathbf{x}_G$ is a multi-hot vector of appropriate dimension (say $p$) depending on the input representation and $\mathbf{z}_G \in \mathbb{R}^l$ is the learnt latent vector.

O3 Decode the relationship between the predicted property and the molecular structure using gradient-based, model-agnostic interpretability methods.

§ 3 METHODOLOGY

In this work, we consider a set of SMILES strings for $n$ molecules, $\mathcal{S} = \{S_1, S_2, \ldots, S_n\}$, which may be associated with a property $y$. Furthermore, we also incorporate 2D global molecular descriptors to augment the learnt representation (FGR-Desc) and increase the performance of downstream property prediction tasks. The methods are summarised in Figure 1.

 * Generation of Functional Group vocabulary: In this study, we use the OCHEM database [15], which contains a collection of 2786 functional groups (FG) characterised by chemists. Frequent sub-structures are recognised by applying a sequential pattern mining algorithm to $\mathcal{S}$ from the PubChem database ($n > 114$ million). Based on a frequency threshold $\eta$, 3000 mined functional groups (MFG) are identified. Then, any molecule $S_i \in \mathcal{S}$ can be represented by a multi-hot encoded vector $[x_1, x_2, \ldots, x_b]^T$, where $x_j = 1$ if $FGR_j \in S_i$ and $x_j = 0$ if $FGR_j \notin S_i$.
 * Pretraining and Property Prediction: Pretraining is decoupled from the downstream property prediction in order to develop a global representation of the chemical space that can be applied to any task. In the pretraining step, the autoencoder is trained separately from the downstream property prediction task, and the reconstruction loss is minimized over all the molecules in the database. A key challenge of encoder-decoder pretraining is determining the dimension of the latent feature vector; hyper-parameter optimization is performed to obtain this dimension for all four types of encodings. A fully connected neural network is used to compute a probability score $p(\mathbf{x}_G) \in [0, 1]$ from the latent feature vector $\mathbf{z}_G$ for property prediction.

 * Interpretability: We evaluate each input feature's contribution to the model's output using primary attribution methods such as feature permutation, integrated gradients and gradient SHAP [18, 19]. The goodness of the explanations is quantified using infidelity and sensitivity metrics. A visualisation tool is also developed to highlight essential substructures that contribute to predicting desired properties, as shown in Figure 2.

Figure 1: Overview of the Proposed Methodology: A) FG Representation, B) MFG Representation, C) Descriptor Representation, D) Latent Representation for FGR and Property Prediction Module

Figure 2: Overview of Interpretability Analysis: For any given property, attribution scores for input features are calculated and the substructures can be visualised overlaid with the scores

§ 4 RESULTS

The model was evaluated extensively for robustness and generalizability on classification and regression tasks using five-fold random and scaffold splits. The results are summarized in Table 1 and Table 2.

§ 5 CONCLUSION

This work presents a functional group representation (FGR) framework that uses functional groups from chemistry for molecular representation learning. The framework allows four types of molecular representations: FG, MFG, combined FG-MFG, and FG-MFG with descriptors. The proposed FGR-based molecular embeddings have been evaluated on several benchmark datasets. The proposed framework performs on par with, and sometimes better than, the state-of-the-art algorithms on classification tasks. The FGR framework also provides a chemically interpretable encoding, as it is grounded in the rules of chemistry to maintain explainability. In the proposed framework, autoencoders are used to learn latent representations.
Also, we demonstrated that pretraining in the FGR framework is possible due to the decoupling between the latent representation learning task and the property prediction task. It is envisaged to extend the FGR framework to build pre-trained models with explainability using self-supervised learning on large-scale molecular data.

Scaffold Split Classification (ROC-AUC) ↑

Dataset   FGR            DMPNN          GEM
BACE      0.89 ± 0.01    0.86 ± 0.05    0.86 ± 0.01
BBBP      0.96 ± 0.008   0.92 ± 0.02    0.72 ± 0.00
Tox21     0.71 ± 0.01    0.69 ± 0.01    0.78 ± 0.001
ClinTox   0.99 ± 0.002   0.88 ± 0.03    0.90 ± 0.01
SIDER     0.72 ± 0.07    0.63 ± 0.03    0.67 ± 0.004

Table 1: Comparison of ROC-AUC scores for FGR, DMPNN [6], and GEM [8]

Scaffold Split Regression (RMSE) ↓

Dataset    FGR            DMPNN           GEM
ESOL       0.62 ± 0.06    1.05 ± 0.008    0.79 ± 0.02
FreeSolv   0.78 ± 0.19    2.08 ± 0.082    1.87 ± 0.094
Lipo       0.64 ± 0.035   0.68 ± 0.016    0.66 ± 0.008

Table 2: Comparison of RMSE scores for FGR, DMPNN [6], and GEM [8]

\ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_md/Initial_manuscript.md b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..f38c86f41b140acda6cd968bc074e55090f874db --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,150 @@

# Active Learning with Human Heuristics: An Algorithm Robust to Labelling Bias

Sriram Ravichandran ${}^{+}$, Nandan Sudarsanam ${}^{+}$, Konstantinos Katsikopoulos ${}^{\dagger}$, Balaraman Ravindran ${}^{+}$

${}^{+}$ Indian Institute of Technology, Madras

${}^{\dagger}$ University of Southampton, UK

## Abstract

Active learning (AL) enables prediction algorithms to achieve better performance with fewer data points by adaptively querying an oracle for output labels. In many instances, the oracle is a human. According to the behavioral sciences, humans provide labels by employing decision heuristics, which tend to produce biased labels. AL algorithms trained with such labels could in turn make incorrect predictions, which could render the decisions made by such models unfair. How would modelling the oracle with such heuristics affect the performance of AL algorithms? We investigate three human heuristics (fast-and-frugal tree, tallying, and Franklin's rule) combined with four active learning algorithms (entropy-based, multi-view learning, density-based, and a novel density-based algorithm) and apply them to five datasets from domains such as health, wealth and sustainability. A first novel finding is that if a heuristic leads to significant labelling bias, the performance of active learning algorithms drops significantly, sometimes below random sampling. Thus, it is key to design active learning algorithms robust to labelling bias. Our second contribution is a novel density-based algorithm that achieves an overall median improvement of 31% over current algorithms when the oracle has a significant labelling bias.
In sum, designing and benchmarking active learning algorithms should incorporate the modelling of human decision heuristics.

## 1 Introduction

AI is being used in various significant applications that affect human lives, including recruitment, consumer lending, healthcare and criminal justice. Building prediction models is crucial for automating such decision processes because it enables decisions based on data rather than relying solely on intuition or past experience. There is an increasing need to train such models in conditions where obtaining labels is significantly more expensive than obtaining their attributes. Moreover, due to the sensitivity of these applications, the trained models are also expected to be fair, i.e. devoid of the bias that exists when a human makes a decision. Active learning (AL) algorithms have the leverage of choosing the data points to be queried at each instance, thereby reaching the benchmark accuracy with fewer queries (labeled instances). A typical active learner starts with a small number of labeled instances, queries for one or more unlabeled instances, and then selects additional points to query based on the labels obtained from previous queries. Labeling the queried instances can be done in multiple ways, and the response is typically assumed to be unbiased and random. For example, building a model to predict the durability of a car involves crash-testing cars to obtain labels, which is highly expensive, making this a suitable application for AL algorithms. However, a substantial subset of AL-based querying involves a human annotator. For instance, a review of AL papers matching the keyword "Active Learning" published during 2021-2023 across prominent venues such as Nature Communications, the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, the Journal of Machine Learning Research and Advances in Neural Information Processing Systems shows that about 63% of the works used human-annotated labels. Traditional literature in behavioral economics [1] highlights the deviation of the human decision-making process from rationality, which it defines as bias. Providing labels for AL should be no exception.

However, annotator bias and its implications for trained models are acknowledged in only a small subset of the AL literature. For instance, Agarwal et al. [2] observed that behavioral biases in the oracle decrease the classification accuracy of the resulting prediction models by at least 20%. Moreover, Settles [3], in his extensive literature survey on AL, noted that the reliability of labels provided by humans might be compromised by difficulties in comprehending the instances, which can impact the quality of the labels obtained.

This understanding resulted in the development of a class of AL algorithms that considers the biases present in the human oracle.

Works belonging to this class [4, 5] treated human bias as a random or uniformly distributed error while proposing novel algorithms. Du and Ling [6], on the other hand, proposed an algorithm with an exploration-and-exploitation approach that relabels data points that could be wrongly labeled. The oracle here was modeled on the assumption that the probability of obtaining biased labels depends on the maximum posterior probability of an instance computed with the ground-truth labels.
In all the above works, the oracle was assumed to offer incorrect responses randomly, or the label bias was synthetically injected based on certain assumptions. However, Herbert Simon, who introduced the notion of bounded rationality, argues that people must rely on approximations for the majority of tasks, including simple decision heuristics [7]. Additionally, Gigerenzer et al. [8] pointed out several human heuristics that exist under bounded rationality and that the human mind tends to follow, as it is incapable of superhuman reasoning.

The above works support the view that a human oracle is likely to use decision strategies during annotation, and the label bias tends to result from the heuristic used. This makes it essential to study the effect of decision strategies on active learning models, since a model trained with an unfair human decision strategy could make unfair decisions.

This study contributes to the active learning literature by showing that the decision strategy used by the oracle significantly affects the relative performance of AL algorithms, thereby necessitating the benchmarking of AL algorithms with human decision strategies. We also propose a novel AL algorithm that initiates a new class of algorithms built around human decision strategies.

The rest of the paper is structured as follows. The methodology is laid out in Section 2, including descriptions of the datasets, AL algorithms and human heuristics used in the study. The results are discussed in Section 3, and Section 4 concludes the paper.

## 2 Methodology

Typically, the active learner sequentially chooses an instance $x_i$ from the pool of unlabeled instances $X$ based on its query strategy and queries the human for its label. The labels thus obtained, $y_i$, are used to train the active learner after every query. In our study, we mimic the functionality of the human oracle using fast and frugal heuristics, namely the fast-and-frugal tree (FFT) and tallying, and a conventional heuristic (Franklin's rule). The decision strategies ensure that the biased labels provided by the oracle are not random but depend on the instance being queried (see Section 2.1).

To perform the experiments, we chose five labeled datasets from various domains: health (Cleveland heart disease [9]), wealth (fraudulent firm prediction [10]), automobile (car condition prediction [11]), food science (wine prediction [12]) and sustainability (biodegradability [13]).

For our study, we considered the pool-based sampling scenario, where the pool of instances is ranked based on the query strategy and the active learner then selects the best query based on these ranks. The AL algorithms considered were entropy sampling, multi-view learning with co-testing, conventional density-based learning, and the novel density-based learning (see Section 2.2).

### 2.1 When is a Fast and Frugal Decision Strategy Likely to Provide an Unbiased Label?

To get a rational understanding of the situations where fast and frugal heuristics (FFT and tallying) provide incorrect labels, we postulate the following hypothesis:

Hypothesis 1 Data points whose attribute values are farther away from their corresponding mean attribute value are less prone to obtaining biased labels from the human oracle/heuristics.

The above hypothesis was formulated based on the intuition that the decisions made by fast and frugal heuristics always compare the attribute values to constant values.
In FFT and tallying, this constant value tends to be the mean attribute value.

This hypothesis can be illustrated with a case where the task is to classify a car's condition based on its usage period (let the average usage be five years). Intuitively, the human oracle would find it easier to classify cars that are 2 or 10 years old than a car that has been used for five years, i.e. a car whose attribute value is close to the mean.

To test the hypothesis, the fast and frugal heuristics were employed to produce predictions on the datasets under consideration. Table 1 and Table 2 show that the prediction accuracy of the heuristics was significantly higher for data points that were farther away from the mean (FM) than for data points that were closer to the mean (CM), thereby supporting our claim.
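The FM/CM comparison underlying Tables 1 and 2 can be expressed as a short script. The following is a minimal sketch in Python; the use of z-scores to measure distance from the mean and the median split are our illustrative assumptions, not the exact protocol of the study.

```python
import numpy as np

def fm_cm_accuracy(X, y_true, heuristic, split_quantile=0.5):
    """Compare a heuristic's label accuracy on points far from the attribute
    means (FM) versus close to them (CM).

    X: (n, d) feature matrix; heuristic: callable mapping one row to a label.
    """
    # Per-point distance from the attribute means, in z-score units.
    z = np.abs((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12))
    dist = z.mean(axis=1)
    fm = dist > np.quantile(dist, split_quantile)  # the split point is a choice

    y_pred = np.array([heuristic(x) for x in X])
    acc = lambda mask: float((y_pred[mask] == y_true[mask]).mean())
    return {"FM": acc(fm), "CM": acc(~fm), "Overall": acc(np.ones(len(X), bool))}
```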
| Sr. No. | Dataset | FM (%) | CM (%) | Overall (%) |
|---------|---------|--------|--------|-------------|
| 1 | Biodegradable dataset | 78.74 | 73.33 | 77.02 |
| 2 | Car prediction | 80.61 | 68.56 | 71.29 |
| 3 | Cleveland heart disease dataset | 95.45 | 83.83 | 84.72 |
| 4 | Audit dataset | 96.5 | 94.4 | 95.7 |
| 5 | Wine dataset | 100 | 86.7 | 87.07 |

Table 1: Accuracy of predictions made by the tallying heuristic
| Sr. No. | Dataset | FM (%) | CM (%) | Overall (%) |
|---------|---------|--------|--------|-------------|
| 1 | Biodegradable dataset | 76.44 | 57.33 | 70.97 |
| 2 | Car prediction | 94.1 | 88.5 | 92.59 |
| 3 | Cleveland heart disease dataset | 81.25 | 80.07 | 81.25 |
| 4 | Audit dataset | 96.5 | 91.1 | 94.42 |
| 5 | Wine dataset | 100 | 97.1 | 97.75 |
Table 2: Accuracy of predictions made by the FFT heuristic

### 2.2 Novel Density-based Learning

The experimentally supported hypothesis (Section 2.1) motivates the development of a query strategy that queries data points whose attribute values are farther away from their mean attribute value. It must also be noted that such instances tend to have lower cosine information density values. Existing algorithms, such as conventional density-based learning, are based on metrics directly proportional to entropy and cosine similarity, which makes them prefer querying data points that are more susceptible to obtaining biased labels. Hence, we consider a modified metric:

$$
H(x) = \frac{-\sum_{k} p_k \log(p_k)}{\frac{1}{U}\sum_{u=1}^{U} \operatorname{sim}(x, x^u)} \tag{1}
$$

As the above formula indicates, the data points are ranked based on their similarity to the other unlabeled data points in the pool set, $\frac{1}{U}\sum_{u=1}^{U} \operatorname{sim}(x, x^u)$, as well as the entropy measure, where $U$ is the number of unlabeled instances remaining in the pool after every query. The metric is expected to motivate the learner to query data points with high entropy and low information density, i.e. data points that are useful and tend to obtain accurate labels.

## 3 Results and Discussion

The AL models were trained on the labels produced by the human heuristics. This was repeated for every combination of dataset, AL algorithm and decision strategy, and the trained model's accuracy was measured after each query. Conventional studies evaluate AL algorithms using learning curves (accuracy vs. data points queried). However, one would reasonably expect the accuracy of both AL algorithms and random sampling to decrease across queries when labels are provided by biased decision strategies, so evaluating algorithms based on absolute accuracy is uninformative in this study.

However, the accuracy of AL algorithms relative to that of random sampling helps in understanding the comparative effectiveness of active learning algorithms in the presence of decision strategies. Hence, we introduce a metric, 'leverage' ($L_i$), to visualize this:

$$
L_i = \mathrm{AL}_i - \mathrm{RandomSampling}_i \tag{2}
$$

Here, $\mathrm{AL}_i$ and $\mathrm{RandomSampling}_i$ represent the accuracy obtained by the respective query strategies after $i$ queries.

Furthermore, in order to assess the relative robustness of the AL algorithms, we measure the decrease in their effectiveness due to the influx of decision strategies, i.e. the drop in leverage across the learning phase, $\nabla_i$:

$$
\nabla_i = [L_i]_{\text{Ground}} - [L_i]_{\text{Decision Strategy}} \tag{3}
$$

In Eqn. 3, $[L_i]_{\text{Decision Strategy}}$ represents the active learning algorithm's leverage after obtaining labels from the given decision strategy for $i$ queries, while $[L_i]_{\text{Ground}}$ is the leverage obtained with ground-truth labels.

The leverage and drop-in-leverage curves plotted from the above represent both the absolute effectiveness of the AL algorithms and the drop in their efficacy when the fast and frugal heuristics provide significantly incorrect labels (see Appendix).
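The quantities averaged in Figure 1 below follow directly from Eqns. 2 and 3. The following is a minimal sketch in Python; the accuracy curves shown are illustrative values, not measurements from the paper.

```python
import numpy as np

def leverage(al_acc, rs_acc):
    """Eqn. 2: L_i = accuracy of the AL strategy minus that of random
    sampling after i queries; inputs are accuracy curves over queries."""
    return np.asarray(al_acc) - np.asarray(rs_acc)

def drop_in_leverage(lev_ground, lev_strategy):
    """Eqn. 3: drop = leverage under ground-truth labels minus leverage
    under a biased decision strategy, per query index i."""
    return np.asarray(lev_ground) - np.asarray(lev_strategy)

# Toy accuracy curves (illustrative only):
al_curve = [0.62, 0.66, 0.70, 0.73]      # AL accuracy after each query
rs_curve = [0.60, 0.63, 0.65, 0.68]      # random-sampling accuracy
lev_fft = leverage(al_curve, rs_curve)   # leverage under FFT-provided labels
lev_gt = np.array([0.05, 0.06, 0.07, 0.07])  # leverage with true labels

print(lev_fft.mean())                            # average absolute leverage
print(drop_in_leverage(lev_gt, lev_fft).mean())  # average drop in leverage
```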
**Absolute Leverage**

| Scenario | Entropy (%) | MVL (%) | Proposed (%) | Conventional (%) | Improvement (%) |
|---|---|---|---|---|---|
| Biodegradable-FFT | 1.59 | 1.53 | 2.68 | -1.5 | 68.29 |
| Biodegradable-Tallying | 2.26 | 1.97 | 2.66 | 1.63 | 17.62 |
| Car Rate-FFT | 1.11 | 0.51 | 1.24 | 0.34 | 12.15 |
| Car Rate-Tallying | 0.52 | 0.36 | 1 | -1.76 | 92.03 |
| Cleveland Heart-FFT | 0.44 | 0.49 | 0.53 | 0.46 | 9.41 |
| Cleveland Heart-Tallying | 1.76 | 1.66 | 1.46 | 1.75 | -16.82 |
| Wine-Tallying | 3.74 | 3.62 | 3.58 | 3.76 | -4.91 |

**Drop in Leverage**

| Scenario | Entropy (%) | MVL (%) | Proposed (%) | Conventional (%) | Decrease in drop (%) |
|---|---|---|---|---|---|
| Biodegradable-FFT | 8.64 | 8.13 | 5.62 | 10.52 | 30.9 |
| Biodegradable-Tallying | 7.08 | 7.02 | 4.71 | 6.86 | 31.34 |
| Car Rate-FFT | 0.11 | 0.13 | -0.45 | 0.51 | 524.92 |
| Car Rate-Tallying | 0.69 | 0.29 | -0.17 | 2.46 | 159.01 |
| Cleveland Heart-FFT | 0.5 | 0.58 | 0.83 | 0.2 | -317.12 |
| Cleveland Heart-Tallying | -0.043 | 0.037 | 0.979 | -0.356 | -374.89 |
| Wine-Tallying | 2.59 | 2.69 | 2.09 | 2.56 | 18.5 |
Figure 1: Top: average leverage of the AL algorithms; bottom: average drop in leverage of the AL algorithms

Figure 1 shows the average absolute leverage and the average drop in leverage experienced by the AL algorithms through the learning phase (until convergence), specifically in scenarios where the fast and frugal heuristics (FFT and tallying) provided significantly incorrect labels.

The proposed density-based learning performs better than the other algorithms, with a median improvement in leverage of 11% and a median decrease in leverage drop of 31% compared to the best-performing alternative. The notable reduction in the drop in leverage demonstrates the robustness of the proposed algorithm. When heuristics such as Franklin's rule gave labels mostly close to the ground truth, the algorithm was not found to perform best. As a result, the proposed approach should be used only in situations where heuristics provide considerably biased labels.

## 4 Conclusion

The primary motive of this work was to model the oracle with human heuristics, which enabled the study of the impact of human heuristics on AL algorithms. This was achieved with three human heuristics (fast-and-frugal tree (FFT), tallying and Franklin's rule), four AL algorithms (entropy-based, multi-view learning, density-based and the novel density-based algorithm) and five datasets. The performance of AL algorithms decreased considerably when the human heuristics provided significantly incorrect labels, which necessitated a novel algorithm robust to the biased labels provided by decision strategies. Our empirically supported hypothesis, that heuristics tend to provide correct labels when queried on data points whose attribute values are farther from the mean, led to a novel density-based AL algorithm.

The proposed density-based learning algorithm improved absolute leverage by 11% in comparison to the best-performing alternative. Moreover, the median decrease in the drop in leverage was 31%, making the proposed algorithm the preferred one. Its good performance is attributable to its ability to query instances that are likely to receive accurate labels and to its reduced dependency on the labels obtained. On the other hand, when the biased labels provided by the human heuristics were minimal, the proposed algorithm was not found useful, which restricts its usage in such scenarios.

In sum, the variation in the relative performance of active learning algorithms with respect to decision strategies advocates benchmarking the algorithms in the existing AL literature using the decision strategy framework proposed in this study. Moreover, the findings strongly motivate a new class of algorithms in the AL domain that consider the uncertainty of the oracle while providing labels, one of which has been presented in this study.

## References

[1] Amos Tversky and Daniel Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124-1131, 1974.

[2] Deepesh Agarwal, Obdulia Covarrubias-Zambrano, Stefan Bossmann, and Balasubramaniam Natarajan. Impacts of behavioral biases on active learning strategies. In 2022 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), pages 256-261, 2022.

[3] Burr Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009.

[4] Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers.
In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 614-622. Association for Computing Machinery, 2008.

[5] Perry Groot, Adriana Birlutiu, and Tom Heskes. Learning from multiple annotators with Gaussian processes. In Timo Honkela, Włodzisław Duch, Mark Girolami, and Samuel Kaski, editors, Artificial Neural Networks and Machine Learning - ICANN 2011, pages 159-164, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg.

[6] J. Du and C. Ling. Active learning with human-like noisy oracle. In 2010 IEEE International Conference on Data Mining, pages 797-802, 2010.

[7] Herbert A. Simon. Invariants of human behavior. Annual Review of Psychology, 41(1):1-20, 1990.

[8] Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group. Simple Heuristics That Make Us Smart. Oxford University Press, 1999.

[9] Jeroen Eggermont, Joost Kok, and Walter Kosters. Genetic programming for data classification: Partitioning the search space. Volume 2, pages 1001-1005, 2004.

[10] N. Hooda, S. Bawa, and P. Rana. Fraudulent firm classification: A case study of an external audit. Applied Artificial Intelligence, 32:48-64, 2018.

[11] Blaž Zupan, Marko Bohanec, Ivan Bratko, and Janez Demšar. Machine learning by function decomposition. In International Conference on Machine Learning, 1997.

[12] S. Aeberhard, D. Coomans, and O. de Vel. Improvements to the classification performance of RDA. Journal of Chemometrics, 7, 1993.

[13] K. Mansouri, T. Ringsted, D. Ballabio, R. Todeschini, and V. Consonni. Quantitative structure-activity relationship models for ready biodegradability of chemicals. Journal of Chemical Information and Modeling, 53:867-878, 2013.
## A Appendix

![019640e3-70ee-7f34-9db2-bc41437023ae_5_225_269_1340_299_0.jpg](images/019640e3-70ee-7f34-9db2-bc41437023ae_5_225_269_1340_299_0.jpg)

Figure 2: Leverage curves of the active learning algorithms when the oracle provided labels with significant bias as a result of FFT

![019640e3-70ee-7f34-9db2-bc41437023ae_5_435_764_917_615_0.jpg](images/019640e3-70ee-7f34-9db2-bc41437023ae_5_435_764_917_615_0.jpg)

Figure 3: Leverage curves of the active learning algorithms when the oracle provided labels with significant bias as a result of the tallying heuristic

![019640e3-70ee-7f34-9db2-bc41437023ae_5_193_1585_1407_302_0.jpg](images/019640e3-70ee-7f34-9db2-bc41437023ae_5_193_1585_1407_302_0.jpg)

Figure 4: Drop in leverage across the learning phase of the active learning algorithms for FFT

![019640e3-70ee-7f34-9db2-bc41437023ae_6_441_166_909_618_0.jpg](images/019640e3-70ee-7f34-9db2-bc41437023ae_6_441_166_909_618_0.jpg)

Figure 5: Drop in leverage across the learning phase of the active learning algorithms for tallying

diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_tex/Initial_manuscript.tex b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..4fdc2801524eb2b5b8c52bbf138718290aa7fbfd --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/EGZ8XdoLm0/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,190 @@

§ ACTIVE LEARNING WITH HUMAN HEURISTICS: AN ALGORITHM ROBUST TO LABELLING BIAS

Sriram Ravichandran ${}^{+}$, Nandan Sudarsanam ${}^{+}$, Konstantinos Katsikopoulos ${}^{\dagger}$, Balaraman Ravindran ${}^{+}$

${}^{+}$ Indian Institute of Technology, Madras

${}^{\dagger}$ University of Southampton, UK

§ ABSTRACT

Active learning (AL) enables prediction algorithms to achieve better performance with fewer data points by adaptively querying an oracle for output labels. In many instances, the oracle is a human. According to the behavioral sciences, humans provide labels by employing decision heuristics, which tend to produce biased labels. AL algorithms trained with such labels could in turn make incorrect predictions, which could render the decisions made by such models unfair. How would modelling the oracle with such heuristics affect the performance of AL algorithms? We investigate three human heuristics (fast-and-frugal tree, tallying, and Franklin's rule) combined with four active learning algorithms (entropy-based, multi-view learning, density-based, and a novel density-based algorithm) and apply them to five datasets from domains such as health, wealth and sustainability. A first novel finding is that if a heuristic leads to significant labelling bias, the performance of active learning algorithms drops significantly, sometimes below random sampling. Thus, it is key to design active learning algorithms robust to labelling bias. Our second contribution is a novel density-based algorithm that achieves an overall median improvement of 31% over current algorithms when the oracle has a significant labelling bias. In sum, designing and benchmarking active learning algorithms should incorporate the modelling of human decision heuristics.

§ 1 INTRODUCTION

AI is being used in various significant applications that affect human lives, including recruitment, consumer lending, healthcare and criminal justice.
Building prediction models is crucial for automating such decision processes because it enables decisions based on data rather than relying solely on intuition or past experience. There is an increasing need to train such models in conditions where obtaining labels is significantly more expensive than obtaining their attributes. Moreover, due to the sensitivity of these applications, the trained models are also expected to be fair, i.e. devoid of the bias that exists when a human makes a decision. Active learning (AL) algorithms have the leverage of choosing the data points to be queried at each instance, thereby reaching the benchmark accuracy with fewer queries (labeled instances). A typical active learner starts with a small number of labeled instances, queries for one or more unlabeled instances, and then selects additional points to query based on the labels obtained from previous queries. Labeling the queried instances can be done in multiple ways, and the response is typically assumed to be unbiased and random. For example, building a model to predict the durability of a car involves crash-testing cars to obtain labels, which is highly expensive, making this a suitable application for AL algorithms. However, a substantial subset of AL-based querying involves a human annotator. For instance, a review of AL papers matching the keyword "Active Learning" published during 2021-2023 across prominent venues such as Nature Communications, the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, the Journal of Machine Learning Research and Advances in Neural Information Processing Systems shows that about 63% of the works used human-annotated labels. Traditional literature in behavioral economics [1] highlights the deviation of the human decision-making process from rationality, which it defines as bias. Providing labels for AL should be no exception.

However, annotator bias and its implications for trained models are acknowledged in only a small subset of the AL literature. For instance, Agarwal et al. [2] observed that behavioral biases in the oracle decrease the classification accuracy of the resulting prediction models by at least 20%. Moreover, Settles [3], in his extensive literature survey on AL, noted that the reliability of labels provided by humans might be compromised by difficulties in comprehending the instances, which can impact the quality of the labels obtained.

This understanding resulted in the development of a class of AL algorithms that considers the biases present in the human oracle.

Works belonging to this class [4, 5] treated human bias as a random or uniformly distributed error while proposing novel algorithms. Du and Ling [6], on the other hand, proposed an algorithm with an exploration-and-exploitation approach that relabels data points that could be wrongly labeled. The oracle here was modeled on the assumption that the probability of obtaining biased labels depends on the maximum posterior probability of an instance computed with the ground-truth labels.

In all the above works, the oracle was assumed to offer incorrect responses randomly, or the label bias was synthetically injected based on certain assumptions. However, Herbert Simon, who introduced the notion of bounded rationality, argues that people must rely on approximations for the majority of tasks, including simple decision heuristics [7].
Additionally, Gigerenzer et al. [8] pointed out several human heuristics that exist under bounded rationality and that the human mind tends to follow, as it is incapable of superhuman reasoning.

The above works support the view that a human oracle is likely to use decision strategies during annotation, and the label bias tends to result from the heuristic used. This makes it essential to study the effect of decision strategies on active learning models, since a model trained with an unfair human decision strategy could make unfair decisions.

This study contributes to the active learning literature by showing that the decision strategy used by the oracle significantly affects the relative performance of AL algorithms, thereby necessitating the benchmarking of AL algorithms with human decision strategies. We also propose a novel AL algorithm that initiates a new class of algorithms built around human decision strategies.

The rest of the paper is structured as follows. The methodology is laid out in Section 2, including descriptions of the datasets, AL algorithms and human heuristics used in the study. The results are discussed in Section 3, and Section 4 concludes the paper.

§ 2 METHODOLOGY

Typically, the active learner sequentially chooses an instance $x_i$ from the pool of unlabeled instances $X$ based on its query strategy and queries the human for its label. The labels thus obtained, $y_i$, are used to train the active learner after every query. In our study, we mimic the functionality of the human oracle using fast and frugal heuristics, namely the fast-and-frugal tree (FFT) and tallying, and a conventional heuristic (Franklin's rule). The decision strategies ensure that the biased labels provided by the oracle are not random but depend on the instance being queried (see Section 2.1).

To perform the experiments, we chose five labeled datasets from various domains: health (Cleveland heart disease [9]), wealth (fraudulent firm prediction [10]), automobile (car condition prediction [11]), food science (wine prediction [12]) and sustainability (biodegradability [13]).

For our study, we considered the pool-based sampling scenario, where the pool of instances is ranked based on the query strategy and the active learner then selects the best query based on these ranks. The AL algorithms considered were entropy sampling, multi-view learning with co-testing, conventional density-based learning, and the novel density-based learning (see Section 2.2).

§ 2.1 WHEN IS A FAST AND FRUGAL DECISION STRATEGY LIKELY TO PROVIDE AN UNBIASED LABEL?

To get a rational understanding of the situations where fast and frugal heuristics (FFT and tallying) provide incorrect labels, we postulate the following hypothesis:

Hypothesis 1 Data points whose attribute values are farther away from their corresponding mean attribute value are less prone to obtaining biased labels from the human oracle/heuristics.

The above hypothesis was formulated based on the intuition that the decisions made by fast and frugal heuristics always compare the attribute values to constant values. In FFT and tallying, this constant value tends to be the mean attribute value.

This hypothesis can be illustrated with a case where the task is to classify a car's condition based on its usage period (let the average usage be five years).
Intuitively, the human oracle would find it easier to classify cars that are 2 or 10 years old than a car that has been used for five years, i.e. a car whose attribute value is close to the mean.

To test the hypothesis, the fast and frugal heuristics were employed to produce predictions on the datasets under consideration. Table 1 and Table 2 show that the prediction accuracy of the heuristics was significantly higher for data points that were farther away from the mean (FM) than for data points that were closer to the mean (CM), thereby supporting our claim.

Sr. No.  Dataset  FM (%)  CM (%)  Overall (%)
1  Biodegradable dataset  78.74  73.33  77.02
2  Car prediction  80.61  68.56  71.29
3  Cleveland heart disease dataset  95.45  83.83  84.72
4  Audit dataset  96.5  94.4  95.7
5  Wine dataset  100  86.7  87.07

Table 1: Accuracy of predictions made by the tallying heuristic

Sr. No.  Dataset  FM (%)  CM (%)  Overall (%)
1  Biodegradable dataset  76.44  57.33  70.97
2  Car prediction  94.1  88.5  92.59
3  Cleveland heart disease dataset  81.25  80.07  81.25
4  Audit dataset  96.5  91.1  94.42
5  Wine dataset  100  97.1  97.75

Table 2: Accuracy of predictions made by the FFT heuristic

§ 2.2 NOVEL DENSITY-BASED LEARNING

The experimentally supported hypothesis (Section 2.1) motivates the development of a query strategy that queries data points whose attribute values are farther away from their mean attribute value. It must also be noted that such instances tend to have lower cosine information density values. Existing algorithms, such as conventional density-based learning, are based on metrics directly proportional to entropy and cosine similarity, which makes them prefer querying data points that are more susceptible to obtaining biased labels. Hence, we consider a modified metric:

$$
H(x) = \frac{-\sum_{k} p_k \log(p_k)}{\frac{1}{U}\sum_{u=1}^{U} \operatorname{sim}(x, x^u)} \tag{1}
$$

As the above formula indicates, the data points are ranked based on their similarity to the other unlabeled data points in the pool set, $\frac{1}{U}\sum_{u=1}^{U} \operatorname{sim}(x, x^u)$, as well as the entropy measure, where $U$ is the number of unlabeled instances remaining in the pool after every query. The metric is expected to motivate the learner to query data points with high entropy and low information density, i.e. data points that are useful and tend to obtain accurate labels.

§ 3 RESULTS AND DISCUSSION

The AL models were trained on the labels produced by the human heuristics. This was repeated for every combination of dataset, AL algorithm and decision strategy, and the trained model's accuracy was measured after each query. Conventional studies evaluate AL algorithms using learning curves (accuracy vs. data points queried). However, one would reasonably expect the accuracy of both AL algorithms and random sampling to decrease across queries when labels are provided by biased decision strategies, so evaluating algorithms based on absolute accuracy is uninformative in this study.

However, the accuracy of AL algorithms relative to that of random sampling helps in understanding the comparative effectiveness of active learning algorithms in the presence of decision strategies.
Hence, we introduce a metric, 'leverage' ($L_i$), to visualize this:

$$
L_i = \mathrm{AL}_i - \mathrm{RandomSampling}_i \tag{2}
$$

Here, $\mathrm{AL}_i$ and $\mathrm{RandomSampling}_i$ represent the accuracy obtained by the respective query strategies after $i$ queries.

Furthermore, in order to assess the relative robustness of the AL algorithms, we measure the decrease in their effectiveness due to the influx of decision strategies, i.e. the drop in leverage across the learning phase, $\nabla_i$:

$$
\nabla_i = [L_i]_{\text{Ground}} - [L_i]_{\text{Decision Strategy}} \tag{3}
$$

In Eqn. 3, $[L_i]_{\text{Decision Strategy}}$ represents the active learning algorithm's leverage after obtaining labels from the given decision strategy for $i$ queries, while $[L_i]_{\text{Ground}}$ is the leverage obtained with ground-truth labels.

The leverage and drop-in-leverage curves plotted from the above represent both the absolute effectiveness of the AL algorithms and the drop in their efficacy when the fast and frugal heuristics provide significantly incorrect labels (see Appendix).

Absolute Leverage

Scenario  Entropy (%)  MVL (%)  Proposed (%)  Conventional (%)  Improvement (%)
Biodegradable-FFT  1.59  1.53  2.68  -1.5  68.29
Biodegradable-Tallying  2.26  1.97  2.66  1.63  17.62
Car Rate-FFT  1.11  0.51  1.24  0.34  12.15
Car Rate-Tallying  0.52  0.36  1  -1.76  92.03
Cleveland Heart-FFT  0.44  0.49  0.53  0.46  9.41
Cleveland Heart-Tallying  1.76  1.66  1.46  1.75  -16.82
Wine-Tallying  3.74  3.62  3.58  3.76  -4.91

Drop in Leverage

Scenario  Entropy (%)  MVL (%)  Proposed (%)  Conventional (%)  Decrease in drop (%)
Biodegradable-FFT  8.64  8.13  5.62  10.52  30.9
Biodegradable-Tallying  7.08  7.02  4.71  6.86  31.34
Car Rate-FFT  0.11  0.13  -0.45  0.51  524.92
Car Rate-Tallying  0.69  0.29  -0.17  2.46  159.01
Cleveland Heart-FFT  0.5  0.58  0.83  0.2  -317.12
Cleveland Heart-Tallying  -0.043  0.037  0.979  -0.356  -374.89
Wine-Tallying  2.59  2.69  2.09  2.56  18.5

Figure 1: Top: average leverage of the AL algorithms; bottom: average drop in leverage of the AL algorithms

Figure 1 shows the average absolute leverage and the average drop in leverage experienced by the AL algorithms through the learning phase (until convergence), specifically in scenarios where the fast and frugal heuristics (FFT and tallying) provided significantly incorrect labels.

The proposed density-based learning performs better than the other algorithms, with a median improvement in leverage of 11% and a median decrease in leverage drop of 31% compared to the best-performing alternative. The notable reduction in the drop in leverage demonstrates the robustness of the proposed algorithm. When heuristics such as Franklin's rule gave labels mostly close to the ground truth, the algorithm was not found to perform best. As a result, the proposed approach should be used only in situations where heuristics provide considerably biased labels.

§ 4 CONCLUSION

The primary motive of this work was to model the oracle with human heuristics, which enabled the study of the impact of human heuristics on AL algorithms. This was achieved with three human heuristics (fast-and-frugal tree (FFT), tallying and Franklin's rule), four AL algorithms (entropy-based, multi-view learning, density-based and the novel density-based algorithm) and five datasets.
The performance of AL algorithms decreased considerably when the human heuristics provided significantly incorrect labels, which necessitated a novel algorithm robust to the biased labels provided by decision strategies. Our empirically supported hypothesis, that heuristics tend to provide correct labels when queried on data points whose attribute values are farther from the mean, led to a novel density-based AL algorithm.

The proposed density-based learning algorithm improved absolute leverage by 11% in comparison to the best-performing alternative. Moreover, the median decrease in the drop in leverage was 31%, making the proposed algorithm the preferred one. Its good performance is attributable to its ability to query instances that are likely to receive accurate labels and to its reduced dependency on the labels obtained. On the other hand, when the biased labels provided by the human heuristics were minimal, the proposed algorithm was not found useful, which restricts its usage in such scenarios.

In sum, the variation in the relative performance of active learning algorithms with respect to decision strategies advocates benchmarking the algorithms in the existing AL literature using the decision strategy framework proposed in this study. Moreover, the findings strongly motivate a new class of algorithms in the AL domain that consider the uncertainty of the oracle while providing labels, one of which has been presented in this study.

\ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_md/Initial_manuscript.md b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..7a4754e5eaafd0b17a6155f52701b1dd9906a613 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,131 @@

# Guiding Offline Reinforcement Learning Using a Safety Expert

Richa Verma ${}^{+}$, Kartik Bharadwaj ${}^{+}$, Harshad Khadilkar ${}^{\dagger}$, and Balaraman Ravindran*

${}^{+}$ TCS Research

${}^{\dagger}$ Robert Bosch Centre for Data Science and Artificial Intelligence

*Department of Computer Science and Engineering, Indian Institute of Technology Madras

## Abstract

Offline reinforcement learning is used to train policies in situations where it is expensive or infeasible to access the environment during training. An agent trained in such a scenario does not get corrective feedback once the learned policy starts diverging and may fall prey to the overestimation bias commonly seen in this setting. This increases the chances of the agent choosing unsafe or risky actions, especially in states with sparse to no representation in the training dataset. In this paper, we propose to leverage a safety expert to discourage the offline RL agent from choosing unsafe actions in states that are under-represented in the dataset. The proposed framework transfers the safety expert's knowledge in an offline setting for states with high uncertainty, in order to prevent catastrophic failures in safety-critical domains. We use a simple but effective approach to quantify state uncertainty based on how frequently states appear in the training dataset. In states with high uncertainty, the offline RL agent mimics the safety expert while maximizing the long-term reward.
We modify TD3+BC, an existing offline RL algorithm, as part of the proposed approach. We demonstrate empirically that our approach performs better than TD3+BC on some control tasks and comparably on others across two sets of benchmark datasets, while reducing the chance of taking unsafe actions in sparse regions of the state space.

## 1 Introduction

Reinforcement Learning (RL) has advanced and achieved great success in solving complex tasks with high-dimensional state and action spaces, including games [1, 2, 3, 4] and some tasks from robotics [5]. An RL agent trained in an online setting takes an action $a$ in state $s$ and interacts with the environment to observe a reward $r$. It then updates its policy based on the observed reward. However, it may be risky or costly to interact with the environment repeatedly in real-world situations. It may even be infeasible when a high-quality simulator is not available or cannot be built.

In offline RL (also known as batch RL), the agent is not allowed to interact with the environment. It has access to a fixed-size dataset collected by an arbitrary policy that may or may not be known [6]. Real-world applications can benefit from this setting because access to the environment may be limited, challenging, or impossible. Applications that are already deployed can also generate datasets to learn from. Offline RL enables the use of such logged datasets for learning and even allows us to leverage an expert in the form of a human operator, a rule-based system, or a policy trained with a similar objective. Some approaches, such as [7], show that a dataset collected by an expert while it learned in an online setting can also be used; however, using the expert itself to facilitate learning in offline RL eliminates the need for data collection and is helpful in settings where data privacy needs to be enforced.

Overestimation of the values of out-of-distribution actions is a fundamental challenge in offline RL. This also applies to actions that can be deemed "unsafe" in safety-critical applications such as autonomous driving, robotic learning, healthcare, etc. For robotic learning, the conditions for a safety breach during an episode are easy to define (e.g., recording how many times the robot has fallen or a grasped object has been dropped). The challenge in this domain is to learn an optimal policy for a task while minimizing the frequency of the above-mentioned catastrophic failures during training.

In this paper, we study how to utilize a safety expert in an offline RL setting, for states with high uncertainty, to minimize failures during training. This safety expert is not necessarily optimal and can be learned or defined by a rule-based system for each task without reference to the underlying task reward. We use a simple but effective approach to quantify the uncertainty of states based on how frequently the visited states occur in the given training dataset. This information is used to conservatively modify the critic target, thereby propagating it to the value-function estimate. We believe that incorporating a safety expert in the form of a pre-trained teacher policy, along with quantifying state uncertainty, can be effective in this setting. It reduces the chances of the offline RL agent engaging in potentially risky exploratory behavior, thus enabling robotic learning from massive datasets.
We show that it allows the agent to learn safe behavior without explicitly defining constraints on actions, which can be hard to do in an offline setting.

Our goal is to selectively utilize a safe teacher policy to reduce the chances of risky/unsafe behavior during the deployment of a learned offline RL policy while still maintaining high performance. Our main contributions are summarized below:

- We propose a framework called Guided Offline RL (GORL) that trains an agent to learn efficiently from an offline dataset while leveraging a safety expert in regions of high uncertainty.

- We evaluate our approach on a set of datasets from the D4RL benchmark of continuous control tasks [8] and show that the proposed framework performs better than or comparably to TD3+BC [9], a popular SOTA offline RL algorithm, on most of the tasks.

## 2 Related Work

Offline RL. Existing offline RL methods mainly rely on some mechanism that keeps the learned policy close to the data collection policy. There are various ways of implementing this. One way is to estimate the behavior policy and then learn a parameterized policy [10, 11]. Another line of work uses divergence regularization [12, 13, 14] to keep the two policies close to each other. Other works suggest using a weighted version of behavior cloning to encourage choosing actions with high advantage [15, 16], or using uncertainty as a weight for a state-action pair before making updates [17]. Some methods incorporate the notion of safety and modify the set of actions that can be chosen based on their counts [18]. A promising direction in the literature looks at using pessimism and implementing divergence regularization as part of value estimation [19, 20]. The goal of this work differs from these works, which focus on developing RL algorithms specifically for the offline setting; we study knowledge transfer from a safety expert to an agent learning in the offline setting.

Reinforcement Learning from Demonstration. The RL literature has many examples of learning from teacher policies or demonstrations in an online setting, especially in hard-exploration environments. There are policy distillation techniques [21, 22] for training student networks such that their outputs (e.g., Q-values) are similar to those of teacher networks. Learning from demonstrations is another promising area. A replay buffer in an off-policy RL setting can be used to hold teacher demonstrations, which can be combined with samples generated by a student agent during training. DQfD [23] and Ape-X DQfD [24] are examples of such methods for a discrete setting, while the methods suggested by [25, 26] work for continuous control tasks.

## 3 Proposed Approach

In offline RL, the problem of extrapolation error [10] is prevalent, meaning that the agent is unable to properly evaluate out-of-distribution actions. Our focus is on designing a framework that discourages the agent from selecting unsafe OOD actions while trying to learn an optimal policy from the dataset. We present such a framework, which requires minimal modifications to a pre-existing offline RL algorithm. Our framework builds on top of TD3+BC [9]. We modify the critic target to include state uncertainty. We also include a regularization term that pushes the offline policy towards the safety expert in states with poor confidence. The safety expert can be defined by any rule-based system or a pre-trained policy.
We denote the agent's confidence w.r.t. a state as $\operatorname{conf}\left( s\right) \in \left\lbrack {0,1}\right\rbrack$, where the confidence is computed using the SimHash algorithm [27]. SimHash uses Locality-Sensitive Hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH preserves the distances among data points, such that those with similar hashes are close to each other. We use SimHash because it is a computationally efficient LSH technique, and it measures the similarity of the states contained in the training dataset $\mathcal{D}$ by angular distance. In principle, any technique that transforms the high-dimensional continuous state space into discrete bins based on closeness could be used here. The hash codes are computed as follows:

$$
\mu \left( s\right) = \operatorname{sgn}\left( {A g\left( s\right) }\right) \in \{ -1,1{\} }^{k}, \tag{1}
$$

where $A \in {\mathbb{R}}^{k \times d}$ is a matrix with each entry drawn i.i.d. from a standard Gaussian and $g : S \rightarrow {\mathbb{R}}^{d}$ is a preprocessing function. The dimension of the binary codes is $k$, and it controls the granularity of the state-space discretization. This algorithm was originally used as an exploration method, but we use it to bin the states contained in the dataset $\mathcal{D}$ into hash codes of size $k$. We use $k = {50}$ for all tasks, chosen after careful experimentation on multiple tasks. Before training an agent, we populate the hash table by recording the counts of the states mapped to each hash code. We normalize the state counts using min-max normalization. During training, we query the hash table to retrieve these counts and use the values as $\operatorname{conf}\left( s\right)$ in the following critic target update:

$$
Q\left( {s, a}\right) = r + \gamma \mathop{\max }\limits_{{a}^{\prime }}Q\left( {{s}^{\prime },{a}^{\prime }}\right) \underbrace{- \left( {1 - \operatorname{conf}\left( s\right) }\right) {\left( a - {\pi }_{T}\left( s\right) \right) }^{2}}_{\text{uncertainty-weighted learning from the safety expert}}, \tag{2}
$$

where ${\pi }_{T}\left( s\right)$ is a teacher policy used as the safety expert. It is trained in an online setting using the continuous control algorithm TD3 [28]. More details on training the policy ${\pi }_{T}\left( s\right)$ to be safe are provided in the next section.

Note that the value of $\operatorname{conf}\left( s\right)$ is lower for under-represented states in the given dataset $\mathcal{D}$, and the lower the confidence, the stronger the push towards the safety expert ${\pi }_{T}\left( s\right)$. Also, the modified update equation reduces the values of all the $(s, a)$ pairs in the dataset except the ones whose action matches the one suggested by the safety expert. This discourages the agent from picking unsafe actions in regions of high uncertainty. This completes the description of our framework, Guided Offline RL (GORL), which involves a few small, but effective, modifications to TD3+BC.
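A minimal sketch of the count-based confidence estimate and the modified critic target is shown below. It assumes NumPy, states stacked row-wise in a matrix, and an identity preprocessing function $g$; it illustrates Eqns. (1) and (2) rather than reproducing our exact implementation.

```python
import numpy as np
from collections import Counter

k, d = 50, 11                      # hash-code length and state dimension
A = np.random.randn(k, d)          # entries drawn i.i.d. from a standard Gaussian

def hash_code(s):
    # Eq. (1): SimHash code sgn(A g(s)), with g taken as the identity here.
    return np.sign(A @ s).tobytes()

# Populate the hash table with counts of the states in the offline dataset D
# (random placeholder states stand in for D below).
states = np.random.randn(10_000, d)
counts = Counter(hash_code(s) for s in states)
c_min, c_max = min(counts.values()), max(counts.values())

def conf(s):
    # Min-max normalized count in [0, 1]; states in unseen bins get confidence 0.
    c = counts.get(hash_code(s), 0)
    if c_max == c_min:
        return 1.0
    return float(np.clip((c - c_min) / (c_max - c_min), 0.0, 1.0))

def critic_target(r, s, a, q_max_next, pi_T, gamma=0.99):
    # Eq. (2): TD target plus the uncertainty-weighted pull towards the
    # safety expert pi_T, strongest in low-confidence states.
    return r + gamma * q_max_next - (1.0 - conf(s)) * np.sum((a - pi_T(s)) ** 2)
```

With $\operatorname{conf}(s)$ close to 1, the expression reduces to the standard TD target, so well-represented states are essentially unaffected.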
## 4 Experiments

We evaluate our proposed approach on the D4RL benchmark of OpenAI Gym MuJoCo tasks [8]. We use the TD3+BC algorithm trained on the MuJoCo tasks (Hopper-v2 and Walker2d-v2) as the baseline. We train a teacher policy ${\pi }_{T}$, to be used as the safety expert, with TD3 for 1M online steps. For the policy to be safe, we add a step penalty of the form `ctrl_cost_weight * sum(action**2)`, which is simply a cost penalizing the agent for taking actions that are too large. We observe that, by doing so, we can discourage the agent from applying large torques to the joints of a MuJoCo robot and hence prevent it from making jittery moves. We choose `ctrl_cost_weight` as 0.1 and 0.01 for Hopper-v2 and Walker2d-v2, respectively, after tuning. These environments have built-in rewards that penalize the agent when it falls or when the height of the top (along the z-axis) becomes too high or too low. Further, we train the offline RL agent on various environment-dataset pairs using the safety expert policy ${\pi }_{T}$ as part of the framework described in the previous section. Table 1 below reports the resulting scores.
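As an illustration, such a step penalty can be added with a thin reward wrapper around the environment; the wrapper below is a sketch under the old Gym step API, not our exact training code.

```python
import gym
import numpy as np

class CtrlCostWrapper(gym.Wrapper):
    """Subtracts ctrl_cost_weight * sum(action^2) from the reward at every
    step, discouraging large torques and hence jittery motion."""

    def __init__(self, env, ctrl_cost_weight):
        super().__init__(env)
        self.ctrl_cost_weight = ctrl_cost_weight

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        reward -= self.ctrl_cost_weight * float(np.sum(np.square(action)))
        return obs, reward, done, info

# The weights tuned in this paper: 0.1 for Hopper-v2 and 0.01 for Walker2d-v2.
env = CtrlCostWrapper(gym.make("Hopper-v2"), ctrl_cost_weight=0.1)
```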
| Dataset | Environment | TD3+BC | Guided Offline RL |
|---|---|---|---|
| Random | Hopper-v2 | **$8.53 \pm 0.23$** | $6.03 \pm 2.03$ |
| Random | Walker2d-v2 | $0.95 \pm 0.33$ | **$2.83 \pm 3.57$** |
| Medium | Hopper-v2 | **$60.12 \pm 1.35$** | $57.77 \pm 3.07$ |
| Medium | Walker2d-v2 | **$86.17 \pm 0.3$** | $83.78 \pm 2.91$ |
| Medium-Replay | Hopper-v2 | $56.71 \pm 19.16$ | **$85.61 \pm 5.14$** |
| Medium-Replay | Walker2d-v2 | $73.56 \pm 11.19$ | **$84.67 \pm 0.77$** |
| Medium-Expert | Hopper-v2 | $95.16 \pm 9.85$ | **$106.11 \pm 5.92$** |
| Medium-Expert | Walker2d-v2 | $110.26 \pm 0.65$ | **$110.6 \pm 0.21$** |
| Expert | Hopper-v2 | $110.97 \pm 1.45$ | **$111.62 \pm 0.37$** |
| Expert | Walker2d-v2 | **$110.12 \pm 0.47$** | $109.91 \pm 0.13$ |
| Total | | $712.55 \pm 44.98$ | **$758.93 \pm 24.12$** |
Table 1: Average normalized scores on the D4RL -v2 datasets. The higher score in each row is highlighted in bold. $\pm$ captures the standard deviation over seeds. The TD3+BC algorithm is re-run using the author-provided implementation. Results are averaged over the final 10 evaluations and 3 seeds. No additional hyperparameter tuning was performed. TD3+BC and Guided TD3+BC achieve comparable performance.

![019640de-0d8d-70ea-ad71-1f2906142b0d_3_168_163_1467_613_0.jpg](images/019640de-0d8d-70ea-ad71-1f2906142b0d_3_168_163_1467_613_0.jpg)

Figure 1: Percentage difference in performance of Guided Offline RL w.r.t. the baseline TD3+BC algorithm. Here, h = Hopper-v2, w = Walker2d-v2, r = random, m = medium, mr = medium-replay, me = medium-expert, e = expert. The proposed approach works better at reducing the number of falls in the Walker2d environment than in Hopper (left). The reduction in the cumulative sum of actions is more pronounced for Hopper (right).

We use the author-provided implementations for both TD3 and TD3+BC. We use the same base hyperparameters as the respective authors and train the baseline and the offline RL agent with three random seeds. In all experiments, the offline agent and the baseline agent run 10 evaluation episodes after every 5,000 offline training steps until they reach 1M training steps. We use the normalized score from D4RL for evaluation, and we average the scores over all seeds for each environment. We report the final performance results in Table 1. In Figure 1, we report the percentage difference between Guided Offline RL and TD3+BC w.r.t. the total number of times the agent falls, or its height leaves the safe range (Walker2d-v2), during all the evaluation episodes occurring within 1M training steps. We also report the percentage difference between the cumulative sums of the actions across all evaluation steps for each dataset-environment pair.

Our results show that including a safe teacher policy can help reduce the number of falls the agent has. We also show that the approach keeps the sum of actions low in most cases compared to the baseline. The proposed approach works better at reducing the number of falls in the Walker2d environment than in Hopper (Figure 1, left). Our approach works better for the dataset-environment pairs where the dataset collection policy is less similar to the safe teacher policy. The reduction in the cumulative sum of actions is more pronounced for Hopper. We believe that if ${\pi }_{T}$ were trained using a constrained method to keep the sum of actions low, the results could be better. We find that our approach only marginally increases the training time compared to the baseline. All runtime experiments were run with a single GeForce GTX 1080 Ti GPU and an Intel(R) Xeon(R) CPU E5-2640 v4.

## 5 Conclusion

In this paper, we present the Guided Offline RL (GORL) framework, which relies on state uncertainty estimation and safety-expert knowledge to discourage an offline RL agent from choosing risky/unsafe actions. We have shown that an existing offline RL algorithm, TD3+BC, can be easily modified to realize the proposed framework. Our experiments show that our approach performs comparably or better on multiple MuJoCo tasks from the D4RL benchmark while trying to minimize unsafe incidents during evaluation.
We believe that our framework can be used as an add-on to achieve better results while adhering to safety. As future work, we plan to consider other forms of the safety expert, such as human interventions and heuristics, and to evaluate them on a diverse set of safety tasks. We also plan to study the effectiveness of the framework when coupled with other SOTA offline RL algorithms.

## References

[1] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

[2] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

[3] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

[4] Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 3389-3396. IEEE, 2017.

[5] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334-1373, 2016.

[6] Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement Learning, pages 45-73. Springer, 2012.

[7] Seunghyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, and Jinwoo Shin. Addressing distribution shift in online reinforcement learning with offline datasets. 2020.

[8] Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.

[9] Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. Advances in Neural Information Processing Systems, 34:20132-20145, 2021.

[10] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pages 2052-2062. PMLR, 2019.

[11] Seyed Kamyar Seyed Ghasemipour, Richard Zemel, and Shixiang Gu. A divergence minimization perspective on imitation learning methods. In Conference on Robot Learning, pages 1259-1277. PMLR, 2020.

[12] Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456, 2019.

[13] Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy Q-learning via bootstrapping error reduction. Advances in Neural Information Processing Systems, 32, 2019.

[14] Scott Fujimoto, Edoardo Conti, Mohammad Ghavamzadeh, and Joelle Pineau. Benchmarking batch deep reinforcement learning algorithms. arXiv preprint arXiv:1910.01708, 2019.

[15] Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019.

[16] Ashvin Nair, Abhishek Gupta, Murtaza Dalal, and Sergey Levine.
AWAC: Accelerating online reinforcement learning with offline datasets. arXiv preprint arXiv:2006.09359, 2020.

[17] Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua Susskind, Jian Zhang, Ruslan Salakhutdinov, and Hanlin Goh. Uncertainty weighted actor-critic for offline reinforcement learning. arXiv preprint arXiv:2105.08140, 2021.

[18] Romain Laroche, Paul Trichelair, and Remi Tachet Des Combes. Safe policy improvement with baseline bootstrapping. In International Conference on Machine Learning, pages 3652-3661. PMLR, 2019.

[19] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179-1191, 2020.

[20] Jacob Buckman, Carles Gelada, and Marc G Bellemare. The importance of pessimism in fixed-dataset policy optimization. arXiv preprint arXiv:2009.06799, 2020.

[21] Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015.

[22] Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.

[23] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. Deep Q-learning from demonstrations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

[24] Tobias Pohlen, Bilal Piot, Todd Hester, Mohammad Gheshlaghi Azar, Dan Horgan, David Budden, Gabriel Barth-Maron, Hado Van Hasselt, John Quan, Mel Večerík, et al. Observe and look further: Achieving consistent performance on Atari. arXiv preprint arXiv:1805.11593, 2018.

[25] Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Overcoming exploration in reinforcement learning with demonstrations. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 6292-6299. IEEE, 2018.

[26] Mel Vecerik, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, and Martin Riedmiller. Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. arXiv preprint arXiv:1707.08817, 2017.

[27] Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schulman, Filip DeTurck, and Pieter Abbeel. #Exploration: A study of count-based exploration for deep reinforcement learning. Advances in Neural Information Processing Systems, 30, 2017.

[28] Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pages 1587-1596. PMLR, 2018.
\ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_tex/Initial_manuscript.tex b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..1e3506805f064f355cb08bc2e0947a9a2b48657d --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/L-NgOKyH7jZ/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,108 @@

§ GUIDING OFFLINE REINFORCEMENT LEARNING USING A SAFETY EXPERT

Richa Verma ${}^{ + }$ , Kartik Bharadwaj ${}^{ + }$ , Harshad Khadilkar ${}^{ \dagger }$ , and Balaraman Ravindran*
TCS Research

${}^{ \dagger }$ Robert Bosch Centre for Data Science and Artificial Intelligence

*Department of Computer Science and Engineering, Indian Institute of Technology Madras

§ ABSTRACT

Offline reinforcement learning is used to train policies in situations where it is expensive or infeasible to access the environment during training. An agent trained in such a scenario does not get corrective feedback once the learned policy starts diverging and may fall prey to the overestimation bias commonly seen in this setting. This increases the chances of the agent choosing unsafe/risky actions, especially in states that are sparsely represented, or absent, in the training dataset. In this paper, we propose to leverage a safety expert to discourage the offline RL agent from choosing unsafe actions in states that are under-represented in the dataset. The proposed framework transfers the safety expert's knowledge, in an offline setting, for states with high uncertainty, to prevent catastrophic failures in safety-critical domains. We use a simple but effective approach to quantify state uncertainty based on how frequently states appear in the training dataset. In states with high uncertainty, the offline RL agent mimics the safety expert while maximizing the long-term reward. We modify TD3+BC, an existing offline RL algorithm, as part of the proposed approach. We demonstrate empirically that our approach performs better than TD3+BC on some control tasks and comparably on others across two sets of benchmark datasets, while reducing the chance of taking unsafe actions in sparse regions of the state space.

§ 1 INTRODUCTION

Reinforcement Learning (RL) has advanced and achieved great success in solving complex tasks with high-dimensional state and action spaces, including games [1, 2, 3, 4] and some tasks from robotics [5]. An RL agent trained in an online setting takes an action $a$ in state $s$ and interacts with the environment to observe a reward $r$. It then updates its policy based on the observed reward. However, it may be risky or costly to interact with the environment repeatedly in real-world situations. It may even be infeasible when a high-quality simulator is not available or cannot be built.

In offline RL (also known as batch RL), the agent is not allowed to interact with the environment. It has access to a fixed-size dataset collected by an arbitrary policy that may or may not be known [6]. Real-world applications can benefit from this setting because access to the environment may be limited, challenging, or impossible. Applications that are already deployed can also generate datasets to learn from.
Offline RL enables the use of such logged datasets for learning and even allows us to leverage an expert in the form of a human operator, a rule-based system, or a policy trained with a similar objective. Some approaches, such as [7], show that a dataset collected by an expert while it learned in an online setting can also be used; however, using the expert itself to facilitate learning in offline RL eliminates the need for data collection and is helpful in settings where data privacy needs to be enforced.

Overestimation of the values of out-of-distribution actions is a fundamental challenge in offline RL. This also applies to actions that can be deemed "unsafe" in safety-critical applications such as autonomous driving, robotic learning, healthcare, etc. For robotic learning, the conditions for a safety breach during an episode are easy to define (e.g., recording how many times the robot has fallen or a grasped object has been dropped). The challenge in this domain is to learn an optimal policy for a task while minimizing the frequency of the above-mentioned catastrophic failures during training.

In this paper, we study how to utilize a safety expert in an offline RL setting, for states with high uncertainty, to minimize failures during training. This safety expert is not necessarily optimal and can be learned or defined by a rule-based system for each task without reference to the underlying task reward. We use a simple but effective approach to quantify the uncertainty of states based on how frequently the visited states occur in the given training dataset. This information is used to conservatively modify the critic target, thereby propagating it to the value-function estimate. We believe that incorporating a safety expert in the form of a pre-trained teacher policy, along with quantifying state uncertainty, can be effective in this setting. It reduces the chances of the offline RL agent engaging in potentially risky exploratory behavior, thus enabling robotic learning from massive datasets. We show that it allows the agent to learn safe behavior without explicitly defining constraints on actions, which can be hard to do in an offline setting.

Our goal is to selectively utilize a safe teacher policy to reduce the chances of risky/unsafe behavior during the deployment of a learned offline RL policy while still maintaining high performance. Our main contributions are summarized below:

 * We propose a framework called Guided Offline RL (GORL) that trains an agent to learn efficiently from an offline dataset while leveraging a safety expert in regions of high uncertainty.

 * We evaluate our approach on a set of datasets from the D4RL benchmark of continuous control tasks [8] and show that the proposed framework performs better than or comparably to TD3+BC [9], a popular SOTA offline RL algorithm, on most of the tasks.

§ 2 RELATED WORK

Offline RL. Existing offline RL methods mainly rely on some mechanism that keeps the learned policy close to the data collection policy. There are various ways of implementing this. One way is to estimate the behavior policy and then learn a parameterized policy [10, 11]. Another line of work uses divergence regularization [12, 13, 14] to keep the two policies close to each other. Other works suggest using a weighted version of behavior cloning to encourage choosing actions with high advantage [15, 16], or using uncertainty as a weight for a state-action pair before making updates [17].
Some methods incorporate the notion of safety and modify the set of actions that can be chosen based on their counts [18]. A promising direction in the literature looks at using pessimism and implementing divergence regularization as part of value estimation [19, 20]. The goal of this work differs from these works, which focus on developing RL algorithms specifically for the offline setting; we study knowledge transfer from a safety expert to an agent learning in the offline setting.

Reinforcement Learning from Demonstration. The RL literature has many examples of learning from teacher policies or demonstrations in an online setting, especially in hard-exploration environments. There are policy distillation techniques [21, 22] for training student networks such that their outputs (e.g., Q-values) are similar to those of teacher networks. Learning from demonstrations is another promising area. A replay buffer in an off-policy RL setting can be used to hold teacher demonstrations, which can be combined with samples generated by a student agent during training. DQfD [23] and Ape-X DQfD [24] are examples of such methods for a discrete setting, while the methods suggested by [25, 26] work for continuous control tasks.

§ 3 PROPOSED APPROACH

In offline RL, the problem of extrapolation error [10] is prevalent, meaning that the agent is unable to properly evaluate out-of-distribution actions. Our focus is on designing a framework that discourages the agent from selecting unsafe OOD actions while trying to learn an optimal policy from the dataset. We present such a framework, which requires minimal modifications to a pre-existing offline RL algorithm. Our framework builds on top of TD3+BC [9]. We modify the critic target to include state uncertainty. We also include a regularization term that pushes the offline policy towards the safety expert in states with poor confidence. The safety expert can be defined by any rule-based system or a pre-trained policy. We denote the agent's confidence w.r.t. a state as $\operatorname{conf}\left( s\right) \in \left\lbrack {0,1}\right\rbrack$, where the confidence is computed using the SimHash algorithm [27]. SimHash uses Locality-Sensitive Hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH preserves the distances among data points, such that those with similar hashes are close to each other. We use SimHash because it is a computationally efficient LSH technique, and it measures the similarity of the states contained in the training dataset $\mathcal{D}$ by angular distance. In principle, any technique that transforms the high-dimensional continuous state space into discrete bins based on closeness could be used here. The hash codes are computed as follows:

$$
\mu \left( s\right) = \operatorname{sgn}\left( {A g\left( s\right) }\right) \in \{ -1,1{\} }^{k}, \tag{1}
$$

where $A \in {\mathbb{R}}^{k \times d}$ is a matrix with each entry drawn i.i.d. from a standard Gaussian and $g : S \rightarrow {\mathbb{R}}^{d}$ is a preprocessing function. The dimension of the binary codes is $k$, and it controls the granularity of the state-space discretization. This algorithm was originally used as an exploration method, but we use it to bin the states contained in the dataset $\mathcal{D}$ into hash codes of size $k$. We use $k = {50}$ for all tasks, chosen after careful experimentation on multiple tasks.
Before training an agent, we populate the hash table by recording the counts of the states mapped to each hash code. We normalize the state counts using min-max normalization. During training, we query the hash table to retrieve these counts and use the values as $\operatorname{conf}\left( s\right)$ in the following critic target update:

$$
Q\left( {s,a}\right) = r + \gamma \mathop{\max }\limits_{{a}^{\prime }}Q\left( {{s}^{\prime },{a}^{\prime }}\right) \underbrace{- \left( {1 - \operatorname{conf}\left( s\right) }\right) {\left( a - {\pi }_{T}\left( s\right) \right) }^{2}}_{\text{uncertainty-weighted learning from the safety expert}}, \tag{2}
$$

where ${\pi }_{T}\left( s\right)$ is a teacher policy used as the safety expert. It is trained in an online setting using the continuous control algorithm TD3 [28]. More details on training the policy ${\pi }_{T}\left( s\right)$ to be safe are provided in the next section.

Note that the value of $\operatorname{conf}\left( s\right)$ is lower for under-represented states in the given dataset $\mathcal{D}$, and the lower the confidence, the stronger the push towards the safety expert ${\pi }_{T}\left( s\right)$. Also, the modified update equation reduces the values of all the $(s, a)$ pairs in the dataset except the ones whose action matches the one suggested by the safety expert. This discourages the agent from picking unsafe actions in regions of high uncertainty. This completes the description of our framework, Guided Offline RL (GORL), which involves a few small, but effective, modifications to TD3+BC.

§ 4 EXPERIMENTS

We evaluate our proposed approach on the D4RL benchmark of OpenAI Gym MuJoCo tasks [8]. We use the TD3+BC algorithm trained on the MuJoCo tasks (Hopper-v2 and Walker2d-v2) as the baseline. We train a teacher policy ${\pi }_{T}$, to be used as the safety expert, with TD3 for 1M online steps. For the policy to be safe, we add a step penalty of the form ctrl_cost_weight $\times \operatorname{sum}({\text{action}}^{2})$, which is simply a cost penalizing the agent for taking actions that are too large. We observe that, by doing so, we can discourage the agent from applying large torques to the joints of a MuJoCo robot and hence prevent it from making jittery moves. We choose ctrl_cost_weight as 0.1 and 0.01 for Hopper-v2 and Walker2d-v2, respectively, after tuning. These environments have built-in rewards that penalize the agent when it falls or when the height of the top (along the z-axis) becomes too high or too low. Further, we train the offline RL agent on various environment-dataset pairs using the safety expert policy ${\pi }_{T}$ as part of the framework described in the previous section.
\begin{tabular}{llcc}
\hline
Dataset & Environment & TD3+BC & Guided Offline RL \\
\hline
\multirow{2}{*}{Random} & Hopper-v2 & $\mathbf{8.53 \pm 0.23}$ & $6.03 \pm 2.03$ \\
 & Walker2d-v2 & $0.95 \pm 0.33$ & $\mathbf{2.83 \pm 3.57}$ \\
\hline
\multirow{2}{*}{Medium} & Hopper-v2 & $\mathbf{60.12 \pm 1.35}$ & $57.77 \pm 3.07$ \\
 & Walker2d-v2 & $\mathbf{86.17 \pm 0.3}$ & $83.78 \pm 2.91$ \\
\hline
\multirow{2}{*}{Medium-Replay} & Hopper-v2 & $56.71 \pm 19.16$ & $\mathbf{85.61 \pm 5.14}$ \\
 & Walker2d-v2 & $73.56 \pm 11.19$ & $\mathbf{84.67 \pm 0.77}$ \\
\hline
\multirow{2}{*}{Medium-Expert} & Hopper-v2 & $95.16 \pm 9.85$ & $\mathbf{106.11 \pm 5.92}$ \\
 & Walker2d-v2 & $110.26 \pm 0.65$ & $\mathbf{110.6 \pm 0.21}$ \\
\hline
\multirow{2}{*}{Expert} & Hopper-v2 & $110.97 \pm 1.45$ & $\mathbf{111.62 \pm 0.37}$ \\
 & Walker2d-v2 & $\mathbf{110.12 \pm 0.47}$ & $109.91 \pm 0.13$ \\
\hline
Total & & $712.55 \pm 44.98$ & $\mathbf{758.93 \pm 24.12}$ \\
\hline
\end{tabular}

Table 1: Average normalized scores on the D4RL -v2 datasets. The higher score in each row is highlighted in bold. $\pm$ captures the standard deviation over seeds. The TD3+BC algorithm is re-run using the author-provided implementation. Results are averaged over the final 10 evaluations and 3 seeds. No additional hyperparameter tuning was performed. TD3+BC and Guided TD3+BC achieve comparable performance.

Figure 1: Percentage difference in performance of Guided Offline RL w.r.t. the baseline TD3+BC algorithm. Here, h = Hopper-v2, w = Walker2d-v2, r = random, m = medium, mr = medium-replay, me = medium-expert, e = expert. The proposed approach works better at reducing the number of falls in the Walker2d environment than in Hopper (left). The reduction in the cumulative sum of actions is more pronounced for Hopper (right).

We use the author-provided implementations for both TD3 and TD3+BC. We use the same base hyperparameters as the respective authors and train the baseline and the offline RL agent with three random seeds. In all experiments, the offline agent and the baseline agent run 10 evaluation episodes after every 5,000 offline training steps until they reach 1M training steps. We use the normalized score from D4RL for evaluation, and we average the scores over all seeds for each environment. We report the final performance results in Table 1. In Figure 1, we report the percentage difference between Guided Offline RL and TD3+BC w.r.t. the total number of times the agent falls, or its height leaves the safe range (Walker2d-v2), during all the evaluation episodes occurring within 1M training steps. We also report the percentage difference between the cumulative sums of the actions across all evaluation steps for each dataset-environment pair.

Our results show that including a safe teacher policy can help reduce the number of falls the agent has. We also show that the approach keeps the sum of actions low in most cases compared to the baseline. The proposed approach works better at reducing the number of falls in the Walker2d environment than in Hopper (Figure 1, left). Our approach works better for the dataset-environment pairs where the dataset collection policy is less similar to the safe teacher policy. The reduction in the cumulative sum of actions is more pronounced for Hopper. We believe that if ${\pi }_{T}$ were trained using a constrained method to keep the sum of actions low, the results could be better. We find that our approach only marginally increases the training time compared to the baseline.
All runtime experiments were run with a single GeForce GTX 1080 Ti GPU and an Intel(R) Xeon(R) CPU E5-2640 v4.

§ 5 CONCLUSION

In this paper, we present the Guided Offline RL (GORL) framework, which relies on state uncertainty estimation and safety-expert knowledge to discourage an offline RL agent from choosing risky/unsafe actions. We have shown that an existing offline RL algorithm, TD3+BC, can be easily modified to realize the proposed framework. Our experiments show that our approach performs comparably or better on multiple MuJoCo tasks from the D4RL benchmark while trying to minimize unsafe incidents during evaluation. We believe that our framework can be used as an add-on to achieve better results while adhering to safety. As future work, we plan to consider other forms of the safety expert, such as human interventions and heuristics, and to evaluate them on a diverse set of safety tasks. We also plan to study the effectiveness of the framework when coupled with other SOTA offline RL algorithms. \ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_md/Initial_manuscript.md b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..7eb1ab5b4805b0d8f052e15ab981e67b1ee234e2 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,141 @@

# Transcribing Educational Videos Using Whisper: A preliminary study on using AI for transcribing educational videos

Ashwin Rao

University of Helsinki

## Abstract

Videos are increasingly being used for e-learning, and transcripts are vital to enhance the learning experience. The costs and delays of generating transcripts can be alleviated by automatic speech recognition (ASR) systems. In this article, we quantify the quality of the transcripts generated by whisper for 25 educational videos and identify some open avenues of research in leveraging ASR for transcribing educational videos.

## 1 Introduction

During the last decade, we have witnessed an increase in the volume of video content disseminated over the Internet. The pandemic further accelerated this trend as people started to consume a wide range of videos from their homes [1]. Along with lectures, we have also witnessed a rise in the number of conferences and talks that are recorded and uploaded to streaming sites. These videos augment the material taught in classrooms and are increasingly being leveraged for educational purposes [2].

Educational videos, like entertainment videos, are consumed on a range of personal devices such as laptops, tablets, and smartphones. The capabilities of the audio systems on these devices vary significantly, and a given audio file may sound different on each of them [3]. Words in an audio segment recorded by amateurs may sound clear and comprehensible on one device, while the same audio segment may be unintelligible on another. Furthermore, educational videos might include the voices of people from a wide range of ethnicities, and the speakers might not be native speakers of the language in which they are speaking. Clearly, the audio quality of educational videos is vital, and addressing acoustic issues can result in a drastic improvement in the quality of the material [4].
However, the video and audio quality of educational videos might not be optimal for all devices because these videos may not be professionally created, edited, and processed.

Audio transcripts and captions help alleviate issues with audio quality and enable viewers to receive a correct interpretation of the content. For instance, Gernsbacher has shown that captions are particularly beneficial for persons watching videos in their non-native language [5]. Although generating transcripts has been non-trivial, recent advances in speech-to-text generation have shown promising results in transcribing audio content. In the context of videos, transcripts differ from subtitles: transcripts typically refer to a textual copy of the words someone has said in the video, while subtitles refer to textual versions of the dialogues in the video [6]. Subtitles can either be open or closed: open subtitles are embedded in the video frames, while closed subtitles are stored separately and can be overlaid on the video frames or displayed on a second screen. A variant of closed subtitles is closed captions, which contain additional descriptions of the audio-video content being shown, such as sounds made by animals. At times, a transcript can also include such additional descriptions; examples include laughter by students, audience clapping, etc. A key difference between a transcript and subtitles is that a transcript does not contain the time stamps at which the words were said.

---

WEBVTT

Kind: captions

Language: en

00:00:00.040 --> 00:00:02.460

The following content is

provided under a Creative

00:00:02.460 --> 00:00:03.870

Commons license.

---

Figure 1: Example closed caption. The metadata (the file format and language) is followed by the time stamps during which the text is shown.

In this article, we perform a preliminary evaluation of the quality of the transcripts generated by whisper [7]. We focus on the speech-to-text conversion, and not on the time stamp at which each word was spoken. Although there is a wide range of tools and models for generating transcripts, we focus our attention on whisper. Our goal is to get an understanding of using whisper for academic videos and to identify open avenues of research in leveraging ASR for transcribing academic videos.

## 2 Methodology

Tools used and data processing pipeline. For our analysis, we first collect a set of 25 YouTube videos that have closed captions that are not automatically generated; YouTube shows whether the captions are auto-generated or provided by the content creator. For each video, we use yt-dlp to download the best audio file corresponding to the video and the available captions (as transcripts). The downloaded captions are the baseline for our evaluation. We download the best audio file because YouTube keeps multiple versions of the same video and dynamically adapts to the optimal audio/video quality depending on the network connectivity. We then use whisper [7] to generate the transcripts, running it on our cluster powered by NVidia V100 GPUs [8]. The generated transcripts are then compared with the baseline transcripts downloaded from YouTube using jiwer. We summarize the tools used in Table 1.

Automatic Transcript Generation (Speech to Text). In this article, we restrict ourselves to whisper [7].
Whisper offers multiple models that can be used to transcribe the audio files. In our evaluation, we restrict ourselves to the following five models (number of parameters in parentheses), of which large-v2 is a multilingual model: tiny.en (39 M), base.en (74 M), small.en (244 M), medium.en (769 M), and large-v2 (1550 M). We acknowledge that there is a wide range of open-source tools and models, including Kaldi [9], Flashlight [10], and Paddlespeech [11]. We plan to analyze the efficiency of these tools in subsequent works.
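For reference, a minimal sketch of this pipeline is shown below; the file names are placeholders, and the snippet assumes the whisper and jiwer versions listed in Table 1.

```python
import whisper
import jiwer

# Transcribe the downloaded audio with one of the five evaluated models.
model = whisper.load_model("base.en")      # or tiny.en, small.en, medium.en, large-v2
hypothesis = model.transcribe("audio.opus")["text"]   # placeholder file name

# The creator-provided YouTube captions (time stamps stripped) are the reference.
with open("baseline_transcript.txt") as f:            # placeholder file name
    reference = f.read()

# No custom normalization is applied in this preliminary study (see above).
print("WER:", jiwer.wer(reference, hypothesis))
print("MER:", jiwer.mer(reference, hypothesis))
print("WIL:", jiwer.wil(reference, hypothesis))
```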
| Tool | Version | Usage |
|---|---|---|
| whisper | 20230314 | Speech-to-text conversion. |
| jiwer | 3.0.1 | Compare the text in two files. |
| yt-dlp | 2023.03.04 | Download audio files and transcripts. |
| opusinfo | 0.1.10 | Extract metadata from audio files. |
Table 1: Software Tools

Metrics for evaluating transcript quality. The Word Error Rate (WER) is a commonly used metric for comparing texts [12], and it is computed as ${WER} = \frac{S + D + I}{N}$, where $S$ is the number of substitutions, $D$ is the number of deletions, $I$ is the number of insertions, and $N = H + S + D$ denotes the number of words in the reference (baseline) against which the hypothesis (the output of the transcribing tool) is evaluated, with $H$ being the number of hits (correct words). For example, a hypothesis with $H = 8$ hits, $S = 1$, $D = 1$, and $I = 1$ against a 10-word reference has a WER of $3/10 = 0.3$. In contrast, the Match Error Rate (MER) is the probability of an incorrect match [12], and is given by ${MER} = \frac{S + D + I}{H + S + D + I}$. The Word Information Lost (WIL) is an approximation of the Relative Information Lost (RIL) and is computed using the hits, substitutions, insertions, and deletions [12]; the RIL measures the statistical dependence between the reference and the hypothesis and is calculated using the Shannon entropy. Our goal is not to compare the metrics; instead, we rely on the WER, MER, and WIL to evaluate the transcription performance. We use jiwer to compute the WER, MER, and WIL. It is known that jiwer can compute a higher WER when the text is not normalized [7], and the WER depends on the normalization technique used. For this preliminary analysis we avoid custom normalizations, and we plan to explore the impact of normalization in a subsequent study.

Dataset Description. Of the 25 YouTube videos, 15 were lectures on MIT OCW. The remaining 10 included five talks at Google, one talk at MIT OCW, and four Turing Award lectures.${}^{1}$ In Figure 2, we present the playback duration (in seconds) of each video and the average bitrate of its audio file. The quality of the audio file is important because it can affect the quality of the generated transcripts, and we observe that the downloaded audio files have an average bitrate of at least ${92}\mathrm{{kbps}}$. Note that the audio files were encoded in the opus audio format, which supports variable bitrates and is optimized for speech [13]. We also observe that the audio files were sampled at ${48}\mathrm{{kHz}}$. Whisper internally converts the audio to ${16}\mathrm{{kHz}}$, and we believe that the audio files in our dataset have a sufficiently high sampling rate for audio segments to be resampled at ${16}\mathrm{{kHz}}$.

![019640dc-d171-7680-adf4-9fe2684b8091_1_887_1566_747_398_0.jpg](images/019640dc-d171-7680-adf4-9fe2684b8091_1_887_1566_747_398_0.jpg)

Figure 2: Average Bitrate of the Audio Files.

---

${}^{1}$ Availability: The details of these videos are available with our code and datasets at: https://version.helsinki.fi/transcribe-educational-videos/preliminary-study-dai2023/

---

## 3 Evaluation

In Figure 3, we present the time required to transcribe a video for a given playback time (see Figure 3(a)) and for a given word count in our baseline transcripts (see Figure 3(b)). We observe that the time to transcribe increases linearly with the playback duration and word count, and the larger models require more time. We present these results to give a ballpark of what to expect, and we are aware that these times depend heavily on the audio content and on the computational capabilities of our cluster.

![019640dc-d171-7680-adf4-9fe2684b8091_2_152_428_1479_365_0.jpg](images/019640dc-d171-7680-adf4-9fe2684b8091_2_152_428_1479_365_0.jpg)

Figure 3: Transcription Time. The time to generate transcripts increases linearly with the playback duration and word count; the larger models require more time than their smaller counterparts.
In Figure 4, we plot the fraction of the playback time that a given model took to transcribe the video. We observe that even the large-v2 model completed the transcription in less than 25% of the playback time of the video. For the videos in our dataset, and while running whisper on our servers, the base, tiny, and small models took less than 10% of the playback time to transcribe a video, and the larger models took less than 25%. A typical human transcriber would require at least the playback time just to listen to the whole audio. In Table 2, we present a snippet of the transcripts generated using whisper. In this snippet, the speaker asks an audience member to repeat what they said because of audio issues. The original transcript marks the conversation as inaudible, while whisper tries to guess what was said, and the results vary with the model size. Clearly, the speed-up from smaller models is meaningless if the quality of the transcription is poor.

![019640dc-d171-7680-adf4-9fe2684b8091_2_962_956_680_352_0.jpg](images/019640dc-d171-7680-adf4-9fe2684b8091_2_962_956_680_352_0.jpg)

Figure 4: Relative transcription time. If the playback time is ${50}\mathrm{\;s}$ and it takes ${10}\mathrm{\;s}$ to generate the transcript, then the fraction of playback time is ${10}/{50} = {0.2}$, i.e., generating the transcript required ${20}\%$ of the playback time. (Range $= \min ,\max$)

![019640dc-d171-7680-adf4-9fe2684b8091_2_161_1583_1513_477_0.jpg](images/019640dc-d171-7680-adf4-9fe2684b8091_2_161_1583_1513_477_0.jpg)

Table 2: Example transcript with high WER. The above transcripts are for a segment at time offset 1h:02m:58s of the following video: https://www.youtube.com/watch?v=3LVeEjsn8Ts#t=62m58s.

![019640dc-d171-7680-adf4-9fe2684b8091_3_154_167_751_295_0.jpg](images/019640dc-d171-7680-adf4-9fe2684b8091_3_154_167_751_295_0.jpg)

Figure 5: Transcript quality. The error bars represent the min and max across the files in the dataset.

In Figure 5, we present the WER, MER, and WIL when using the various models. Across all metrics, we observe that the WER, MER, and WIL decrease as the number of parameters in the model increases; an exception is the large-v2 model. We believe that this is primarily due to the lack of a normalizer [7] and to the audio segments that were marked inaudible in the original transcripts. As shown in Table 2, whisper transcribes the conversation marked inaudible by the human transcriber, and the volume of text generated (sans punctuation) by the large-v2 model is larger than that of the other models, resulting in a higher error rate.

Along with the example provided in Table 2, we also observe a high WER, WIL, and MER for other videos, as highlighted by the error bars in Figure 5. To better understand this behavior, we present the fraction of hits, substitutions, deletions, and insertions in Figure 6; the sketch below shows how these fractions can be obtained. Across all models, we observe that the hits are above 80% for the majority of videos, and that the fraction of hits increases with the number of parameters. However, for some videos, such as the one in Table 2, we observe a large number of substitutions, insertions, and deletions.
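The error decomposition behind Figure 6 can be obtained directly from jiwer; a minimal sketch with placeholder strings follows.

```python
import jiwer

reference = "the following content is provided under a creative commons license"
hypothesis = "the following content is provided under the creative comments license"

# process_words exposes the counts underlying WER, MER, and WIL.
out = jiwer.process_words(reference, hypothesis)
n_ref = out.hits + out.substitutions + out.deletions   # words in the reference
for name, count in [("hits", out.hits),
                    ("substitutions", out.substitutions),
                    ("deletions", out.deletions),
                    ("insertions", out.insertions)]:
    print(f"{name}: {count / n_ref:.2%}")
```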
![019640dc-d171-7680-adf4-9fe2684b8091_3_744_627_892_293_0.jpg](images/019640dc-d171-7680-adf4-9fe2684b8091_3_744_627_892_293_0.jpg)

Figure 6: Fraction of Hits, Substitutions, Deletions, and Insertions. Error bars represent the min and max across files in our dataset. The cutout zooms into the Deletions and Insertions.

One reason for the high error rates is that whisper does not output inaudible and tries to extract text even from audio that a human transcriber might mark as inaudible. This is further exacerbated by not leveraging the context. For instance, in the example shown in Table 2, the conversation was about domain-specific architecture, and the question being asked was on the same topic, yet some of the models wrongly predicted the outcome to be Thomas version architecture or Thomas's certificate architecture. These predictions are bullshit${}^{2}$ because they (and the underlying models) are indifferent to truth. Furthermore, although only two substitutions are needed to turn thomas certificate architecture into domain specific architecture, incorrect predictions like these diminish the usefulness of the generated transcripts. We believe that marking the audio segments as inaudible, or with an equivalent label indicating low confidence in the transcription result, would be more beneficial in such scenarios. This is achievable by tweaking some thresholds in whisper's configuration, and we plan to explore their impact in subsequent works.

## 4 Concluding Remarks and Avenues for Future Work

We performed a preliminary analysis of the transcription capabilities of whisper; however, we cannot draw any strong conclusions: our dataset is heavily biased towards the videos picked by the author, and the results cover only the models of one tool, whisper. Nevertheless, we gained some insights, such as the importance of marking audio segments as inaudible and how inaudible audio segments affect the quality of transcripts generated by ASR systems.

Some avenues for future work in this area include: a) metrics that account for semantic information, such as the importance of each word, and evaluations of transcript quality in end-user studies; b) comparing the transcription results from different models; c) evaluating transcription capabilities for languages other than English, and also for non-native speakers of these languages; d) quantifying the impact of multiple speakers from different ethnic backgrounds in the same video/audio; e) approaches to identify the context of the lecture/talk and leverage it for better transcriptions; f) quantifying the costs of generating transcripts on different accelerators, and identifying the effectiveness of accelerators for transcript generation on end-user devices; and g) quantifying the quality of subtitles, including the timestamps of the words and the descriptions of the sounds generated by the ASR system.

Acknowledgement. The authors wish to thank the Finnish Computing Competence Infrastructure (FCCI) for supporting this project with computational and data storage resources.

---

${}^{2}$ We apologize for the use of profanity, and we rely on the following quote by Harry Frankfurt [14] for describing the term bullshit: "it is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction."
---

## References

[1] Anja Feldmann, Oliver Gasser, Franziska Lichtblau, Enric Pujol, Ingmar Poese, Christoph Dietzel, Daniel Wagner, Matthias Wichtlhuber, Juan Tapiador, Narseo Vallina-Rodriguez, Oliver Hohlfeld, and Georgios Smaragdakis. The lockdown effect: Implications of the COVID-19 pandemic on internet traffic. In Proceedings of the ACM Internet Measurement Conference, IMC '20, pages 1-18, New York, NY, USA, 2020. Association for Computing Machinery.

[2] Daniel T Seaton, Sergiy Nesterko, Tommy Mullaney, Justin Reich, Andrew Ho, and Isaac Chuang. Characterizing video use in the catalogue of MITx MOOCs. Proceedings of the European MOOC Stakeholder Summit, pages 140-146, 2014.

[3] Why we all need subtitles now. https://www.youtube.com/watch?v=VYJtb2YXae8. Accessed 2023-May-01.

[4] Craig H Richardson. Improving audio quality in distance learning applications. 1998.

[5] Morton Ann Gernsbacher. Video captions benefit everyone. Policy Insights from the Behavioral and Brain Sciences, 2(1):195-202, 2015.

[6] Subtitles - Wikipedia. https://en.wikipedia.org/wiki/Subtitles. Accessed 2023-May-01.

[7] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision, 2022.

[8] https://wiki.helsinki.fi/display/it4sci/HPC+Environment+User+Guide. Accessed 2023-May-01.

[9] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011.

[10] Jacob Kahn, Vineel Pratap, Tatiana Likhomanenko, Qiantong Xu, Awni Hannun, Jeff Cai, Paden Tomasello, Ann Lee, Edouard Grave, Gilad Avidov, Benoit Steiner, Vitaliy Liptchinsky, Gabriel Synnaeve, and Ronan Collobert. Flashlight: Enabling innovation in tools for machine learning, 2022.

[11] Hui Zhang, Tian Yuan, Junkun Chen, Xintong Li, Renjie Zheng, Yuxin Huang, Xiaojie Chen, Enlei Gong, Zeyu Chen, Xiaoguang Hu, Dianhai Yu, Yanjun Ma, and Liang Huang. PaddleSpeech: An easy-to-use all-in-one speech toolkit. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations. Association for Computational Linguistics, 2022.

[12] Andrew Cameron Morris, Viktoria Maier, and Phil Green. From WER and RIL to MER and WIL: Improved evaluation measures for connected speech recognition. In Eighth International Conference on Spoken Language Processing, 2004.

[13] Jean-Marc Valin, Koen Vos, and T Terriberry. RFC 6716: Definition of the Opus audio codec, 2012.

[14] Harry G Frankfurt. On Bullshit. Princeton University Press, 2005.
\ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_tex/Initial_manuscript.tex b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..8d502427d9d5a3676425e0dbaf7055d0591c91fd --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/VJvluDhBfOS/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,115 @@ +§ TRANSCRIBING EDUCATIONAL VIDEOS USING WHISPER: A PRELIMINARY STUDY ON USING AI FOR TRANSCRIBING EDUCATIONAL VIDEOS

Ashwin Rao

University of Helsinki

§ ABSTRACT

Videos are increasingly being used for e-learning, and transcripts are vital to enhance the learning experience. The costs and delays of generating transcripts can be alleviated by automatic speech recognition (ASR) systems. In this article, we quantify the quality of the transcripts generated by whisper for 25 educational videos and identify some open avenues of research when leveraging ASR for transcribing educational videos.

§ 1 INTRODUCTION

During the last decade, we have witnessed an increase in the volume of video content that is disseminated over the Internet. The pandemic further exacerbated this trend as people started to consume a wide range of videos from their homes [1]. Along with lectures, we have also witnessed a rise in the number of conferences and talks that are recorded and uploaded to streaming sites. These videos augment the material taught in classrooms and are increasingly being leveraged for educational purposes [2].

Educational videos, like entertainment videos, are consumed on a variety of personal devices such as laptops, tablets, and smartphones. The capabilities of the audio systems on these devices vary significantly, and a given audio file may sound different on each of these devices [3]. Words in an audio segment recorded by amateurs may sound clear and comprehensible on one device, while the same audio segment may be unintelligible on another. Furthermore, educational videos might include the voices of people from a wide range of ethnicities, and the speakers might not be native speakers of the language in which they are speaking. Clearly, the audio quality of educational videos is vital, and addressing acoustic issues can drastically improve the quality of the material [4]. However, the video and audio quality of educational videos might not be optimal for all devices because they may not be professionally created, edited, and processed.

Audio transcripts and subtitles help alleviate audio-quality issues and enable viewers to correctly interpret the content. For instance, Gernsbacher has shown that captions are particularly beneficial for persons watching videos in their non-native language [5]. Although generating transcripts has been non-trivial, recent advances in speech-to-text generation have shown promising results in transcribing audio content. In the context of videos, transcripts are different from subtitles: transcripts typically refer to a textual copy of the words someone has said in the video, while subtitles refer to the textual versions of the dialogues in the video [6].
Subtitles can either be open or closed: open subtitles are embedded in the video frames, while closed subtitles are stored separately and can be overlaid over the video frames or displayed on a second screen. A variant of closed subtitles is closed captions, which contain an additional description of the audio-video content being shown, such as sounds made by animals. At times, a transcript can also include additional description; examples include laughter by students, audience clapping, etc. A key difference between a transcript and subtitles is that a transcript does not contain the timestamps at which the words were said.

WEBVTT

Kind: captions

Language: en

00:00:00.040 --> 00:00:02.460

The following content is

provided under a Creative

00:00:02.460 --> 00:00:03.870

Commons license.

Figure 1: Example Closed Caption. The metadata (the file format and language) is followed by the timestamps during which the text can be shown.

In this article, we do a preliminary evaluation of the quality of transcripts generated by whisper [7]. We focus on the speech-to-text conversion, not on the timestamps at which the words were spoken. Although there is a wide range of tools and models for generating transcripts, we focus our attention on whisper. Our goal is to get an understanding of using whisper for academic videos and to identify open avenues of research in the area of leveraging ASR for transcribing academic videos.

§ 2 METHODOLOGY

Tools used and data processing pipeline. For our analysis, we first collect a set of 25 YouTube videos that have closed captions that are not automatically generated; YouTube shows whether the captions are auto-generated or provided by the content creator. For each video, we use yt-dlp to download the best audio file corresponding to the video and the available captions (as transcripts); we download the best audio because YouTube keeps multiple versions of the same video and dynamically adapts to the optimal audio/video quality depending on the network connectivity. The downloaded captions are the baseline for our evaluation. We then use whisper [7] to generate the transcripts, running it on our cluster powered by NVidia V100 GPUs [8]. The generated transcripts are then compared with our baseline transcripts downloaded from YouTube using jiwer. We summarize the tools used in Table 1.

Automatic Transcript Generation (Speech to Text). In this article, we restrict ourselves to whisper [7]. Whisper offers multiple models which can be used to transcribe the audio files, and in our evaluation we restrict ourselves to the following five models (number of parameters in parentheses), of which large-v2 is a multilingual model: tiny.en (39 M), base.en (74 M), small.en (244 M), medium.en (769 M), and large-v2 (1550 M). We acknowledge that there is a wide range of open-source tools and models, including Kaldi [9], Flashlight [10], and PaddleSpeech [11]. We plan to analyze the efficiency of these tools in our subsequent works.

| Tool | Version | Usage |
| --- | --- | --- |
| whisper | 20230314 | Speech-to-text conversion. |
| jiwer | 3.0.1 | Compare the text in two files. |
| yt-dlp | 2023.03.04 | Download audio files and transcripts. |
| opusinfo | 0.1.10 | Extract metadata from audio files. |

Table 1: Software Tools

Metrics for evaluating transcript quality. The Word Error Rate (WER) is a commonly used metric for comparing texts [12] and is computed as $WER = \frac{S + D + I}{N}$, where $N = H + S + D$ is the number of words in the reference (baseline) against which the hypothesis (the output of the transcription tool) is evaluated, $H$ is the number of hits (correct words), $S$ the number of substitutions, $D$ the number of deletions, and $I$ the number of insertions. In contrast, the Match Error Rate (MER) is the probability of an incorrect match [12], and is given by $MER = \frac{S + D + I}{H + S + D + I}$. The Word Information Lost (WIL) is an approximation of the Relative Information Lost (RIL), computed from the hits, substitutions, insertions, and deletions [12]; the RIL measures the statistical dependence between the reference and the hypothesis and is calculated using the Shannon entropy. Our goal is not to compare the metrics; instead, we rely on the WER, MER, and WIL to evaluate the transcription performance. We use jiwer to compute the WER, MER, and WIL. It is known that jiwer can report a higher WER when the text is not normalized [7], and the WER depends on the normalization technique used. For this preliminary analysis we avoid custom normalization, and we plan to explore the impact of normalization in a subsequent study.
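To make the computation concrete, the short sketch below shows how these three metrics can be obtained with jiwer's 3.x API; the reference and hypothesis strings are illustrative placeholders rather than data from our corpus.

```python
# Minimal sketch of the metric computation with jiwer (3.x API);
# the reference/hypothesis strings are illustrative placeholders.
import jiwer

reference = "the following content is provided under a creative commons license"
hypothesis = "the following content is provided under creative comments license"

out = jiwer.process_words(reference, hypothesis)
# N = H + S + D words in the reference; WER = (S + D + I) / N
print(f"WER={out.wer:.3f} MER={out.mer:.3f} WIL={out.wil:.3f}")
print(f"H={out.hits} S={out.substitutions} D={out.deletions} I={out.insertions}")
```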
Dataset Description. Of the 25 YouTube videos, 15 were from lectures on MIT OCW. The remaining 10 included 5 talks at Google, one talk at MIT OCW, and four Turing Award lectures.${}^{1}$ In Figure 2, we present the playback duration (in seconds) of each of the videos and the average bitrate of the audio file. The quality of the audio file is important because it can affect the quality of the generated transcripts, and we observe that the downloaded audio files have an average bitrate of at least 92 kbps. Note that the audio files were encoded in the Opus audio format, which supports variable bitrates and is optimized for speech [13]. We also observe that the audio files were sampled at 48 kHz. Whisper internally converts the audio to 16 kHz, and we believe that the audio files in our dataset have a sufficiently high sampling rate for this conversion.

Figure 2: Average Bitrate of the Audio Files.

${}^{1}$ Availability: The details of these videos are available with our code and datasets at: https://version.helsinki.fi/transcribe-educational-videos/preliminary-study-dai2023/

§ 3 EVALUATION

In Figure 3, we present the time required to transcribe a video for a given playback time (see Figure 3(a)), and also for a given word count in our baseline transcripts (see Figure 3(b)). We observe that the time to transcribe increases linearly with the playback duration and word count, and that the larger models require more time. We present these results to give a ballpark estimate of what to expect; we are aware that these times depend heavily on the audio content and on the computational capabilities of our cluster.

Figure 3: Transcription Time. The transcription time, i.e., the time to generate transcripts, increases linearly with the playback duration and word count. The larger models require more time than their smaller counterparts.
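For orientation, the rough sketch below shows how such timings can be collected; the model names are the official whisper identifiers listed in Section 2, while the audio filename and the wall-clock timing loop are our own illustrative choices, not the exact benchmarking script.

```python
# Rough timing sketch (not the exact benchmarking script): load each whisper
# model and measure the wall-clock time to transcribe one audio file.
import time
import whisper  # pip install openai-whisper

MODELS = ["tiny.en", "base.en", "small.en", "medium.en", "large-v2"]

for name in MODELS:
    model = whisper.load_model(name)          # downloads weights on first use
    t0 = time.perf_counter()
    result = model.transcribe("video.opus")   # placeholder audio file
    elapsed = time.perf_counter() - t0
    print(f"{name}: {elapsed:.1f}s, {len(result['text'].split())} words")
```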
In Figure 4, we plot the fraction of the playback time that a given model took to transcribe the video. We observe that even the large-v2 model was able to complete the transcription in less than 25% of the time required to play back the video. For the videos in our dataset, and while running whisper on our servers, we observe that the tiny, base, and small models took less than 10% of the playback time to transcribe the video, and the larger models took less than 25% of the playback time. A typical human transcriber would require at least the playback time just to listen to the whole audio. In Table 2, we present a snippet of the transcripts generated using Whisper. In this snippet, the speaker asks the audience member to repeat what they said because of audio issues. We see that the original transcript marks the conversation as inaudible while whisper tries to guess what is said, and the results vary with the model size. Clearly, this speed-up when using smaller models is meaningless if the quality of the transcription is poor.

Figure 4: Relative transcription time. If the playback time is 50 s and it takes 10 s to generate the transcript, then the fraction of playback time is 10/50 = 0.2, i.e., generating a transcript required 20% of the playback time. (Range = min, max)

Table 2: Example transcript with high WER. The above transcripts are for a segment at time offset 1h:02m:58s of the following video: https://www.youtube.com/watch?v=3LVeEjsn8Ts#t=62m58s.

Figure 5: Transcript quality. The error bars represent the min and max across the files in the dataset.

In Figure 5, we present the WER, MER, and WIL when using the various models. Across all the metrics, we observe that the WER, MER, and WIL decrease as the number of parameters in the models increases. An exception is the large-v2 model. We believe that this is primarily due to the absence of a text normalizer [7] and to the audio segments that were marked inaudible in the original transcripts. As shown in Table 2, whisper transcribes the conversation marked inaudible by the human transcriber, and the volume of text generated (excluding punctuation) by the large-v2 model is larger than for the other models, resulting in a higher error rate.

Along with the example provided in Table 2, we also observe a high WER, a high WIL, and a high MER for other videos, as highlighted by the error bars in Figure 5. To better understand this behavior, we present the fraction of hits, substitutions, deletions, and insertions in Figure 6. Across all models, we observe that the hits are above 80% for the majority of videos, and that the fraction of hits increases with the number of parameters. However, for some videos, such as the one in Table 2, we observe a large number of substitutions, insertions, and deletions.

Figure 6: Fraction of Hits, Substitutions, Deletions, and Insertions. Error bars represent the min and max across files in our dataset. The cutout zooms into the Deletions and Insertions.

One reason for the high error rates is that whisper does not produce "inaudible" as output and tries to extract text even from audio that a human transcriber would mark as inaudible. This is further exacerbated by not leveraging the context. For instance, in the example shown in Table 2 the conversation was about domain-specific architecture, and the question being asked was on the same topic, and yet some of the models wrongly predicted the outcome to be "Thomas version architecture" or "Thomas's certificate architecture". These predictions are bullshit ${}^{2}$ because they (and the underlying models) are indifferent to truth. Furthermore, although only two substitutions are needed to turn "thomas certificate architecture" into "domain specific architecture", incorrect predictions like these diminish the usefulness of the generated transcripts. We believe that marking such audio segments as "inaudible", or with an equivalent marker that indicates low confidence in the transcription result, would be more beneficial in these scenarios. This is achievable by tweaking some thresholds in whisper's configuration, and we plan to explore their impact in subsequent works.
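The thresholds we have in mind are whisper's decoding heuristics, sketched below with their documented default values; how far they must be moved to suppress hallucinated text on inaudible segments is precisely what remains to be explored.

```python
# Sketch of the decoding thresholds we plan to explore (values shown are
# whisper's documented defaults, not tuned settings).
import whisper

model = whisper.load_model("base.en")
result = model.transcribe(
    "video.opus",                     # placeholder audio file
    no_speech_threshold=0.6,          # lower it to mark more segments as non-speech
    logprob_threshold=-1.0,           # raise it to treat more low-confidence decodings as failed
    compression_ratio_threshold=2.4,  # lower it to reject repetitive, degenerate text
    condition_on_previous_text=True,  # supplies the decoder with local context
)
```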
§ 4 CONCLUDING REMARKS AND AVENUES FOR FUTURE WORK

We performed a preliminary analysis of the transcription capabilities of Whisper; however, we cannot draw any strong conclusions: our dataset is heavily biased toward the videos picked by the author, and the results cover the models of only one tool, whisper. Nevertheless, we gained some insights, such as the importance of marking audio segments as inaudible, and how inaudible audio segments affect the quality of transcripts generated by ASR systems.

Some avenues for future work in this area include: a) metrics that account for semantic information, such as the importance of each word, and evaluating the quality of transcripts in end-user studies; b) comparing the transcription results from different models; c) evaluating transcription capabilities for languages other than English, and also for non-native speakers of these languages; d) quantifying the impact of multiple speakers from different ethnic backgrounds in the same video/audio; e) approaches to identify the context of the lecture/talk and leverage it for better transcriptions; f) quantifying the costs of generating transcripts on different accelerators, and identifying the effectiveness of accelerators for transcript generation on end-user devices; and g) quantifying the quality of subtitles, including the timestamps of the words and the descriptions of sounds, generated by the ASR system.

Acknowledgement. The authors wish to thank the Finnish Computing Competence Infrastructure (FCCI) for supporting this project with computational and data storage resources.

${}^{2}$ We apologize for the use of profanity, and we rely on the following quote by Harry Frankfurt [14] for describing the term bullshit: "it is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction."
\ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/c4A2txzl82P/Initial_manuscript_md/Initial_manuscript.md b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/c4A2txzl82P/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..705e80a4c59b0013ceb8bc614f731edc3b683934 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/c4A2txzl82P/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,89 @@ +# Coincidence Detection Is All You Need

Celestine Preetham Lawrence ${}^{ + }$

${}^{ + }$ Bernoulli Institute and Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9700 AB, Groningen, Netherlands.

## Abstract

This paper demonstrates that the performance of coincidence detection, a classic neuromorphic signal processing method found in Rosenblatt's perceptrons with distributed transmission times, can be competitive with a state-of-the-art deep learning method for pattern recognition. Hence, we cannot remain comfortably numb to the prevailing dogma that efficient matrix-vector operations are all we need, but should enquire with greater vigour whether more advanced continual learning methods (running on spiking neural network hardware with neuromodulatory mechanisms at multiple timescales) can beat the accuracy of task-specific deep learning methods. With regard to deployability, coincidence detection is an interpretable shallow learning method and its applications provide a commercial use-case for neuromorphic hardware such as Intel Loihi.

## 1 Introduction

Frank Rosenblatt and his team (1957-1971) built and analyzed several kinds of perceptrons [1, 2, 3, 4]: networks of sensory, association and receptor neurons, which in contemporary deep learning terminology correspond to the input, hidden and output layers. The propagating signals were binary (compatible with a spike-based view), the synaptic delays (transmission times) and weights (memory states) could be analog, the network could be recurrent and was often randomly interconnected, and learning often meant tuning the weights of the association-receptor subnetwork by some error-corrective reinforcement. The synaptic delays were not learnt but instead randomly distributed in Rosenblatt's Tobermory perceptrons [5], and this was rich enough to realize concentration-invariant and uniform time-warp-invariant spatiotemporal classification by logarithmic encoding and coincidence detection. However, the processing speed of commercial von Neumann computers advanced exponentially and outperformed neuromorphic hardware on yesterdecade's benchmarks [6]. The Tobermory perceptron was forgotten; nevertheless, the utility of logarithmic encoding and coincidence detection was formalized by John Hopfield [7] as an efficient solution to the analog match problem in pattern recognition.

Now, half a century after the accidental demise of Rosenblatt, neuromorphic signal processors are making a comeback. For example: (1) Intel's Loihi, with spike-time-dependent plasticity mechanisms for learning olfactory pattern recognizers [8]; (2) physical reservoir computing networks [9], where the interconnectivity of the hidden layer is left unchanged, closer to the spirit of Rosenblatt's randomly interconnected sensory-association subnetwork.
Here, to strengthen the case for revisiting classic methods on novel and modern hardware, we evaluate the performance of coincidence detection in comparison to a deep learning method. Nothing more, nothing less, although this work was triggered by a rabid interest in employing artificial intelligence to sniff out infections and prevent future pandemics.

## 2 Methods

Here, we consider the work [10] of an interdisciplinary team, where a 26-layer convolutional neural network with residual connections (ResNet-26) was successfully trained for classifying pathogenic bacteria by Raman spectroscopy. In their work, there are $N = {30}$ classes of bacterial isolates; they begin with a ResNet-26 pre-trained on $N \times {2000}$ spectra, then for each class $n = 1 : N$ there are $M = {100}$ training spectra, and similarly $N \times M = {3000}$ test spectra. Each spectrum $\mathbf{x}$ contains 1000 floating-point numbers ranging between 0 and 1. Although compute-intensive, their deep learning method proved to be a tool of great convenience for pattern recognition in a challenging dataset, where intra-isolate spectra were often more dissimilar than inter-isolate spectra.

Our method to tackle the above dataset is inspired by the theory of how coincidence detection [7] in animal brains is fundamental for odour classification in complex and turbulent mixtures. Each class $n$ has a vector representation ${\mathbf{w}}_{n}$ that is learnt, and an input vector $\mathbf{x}$ results in an output class $y\left( \mathbf{x}\right) = {\arg }_{n}\max \left( {\mathbf{x} \land {\mathbf{w}}_{n}}\right)$, where we introduce the operator $\land$ to represent the coincidence between two signals. The analytical nature of coincidence detection depends on the specificities of the ion channels and the membranes involved [11], and may even incorporate nonlinear leaky-integrate [12] multiple-timescale mechanisms. We do not yet have a complete theory of neuromorphic signal processing, so here we introduce an approximation for the translation- and scale-invariant property of coincidence detection as

$$
{\arg }_{n}\max \left( {\mathbf{x}\bigwedge {\mathbf{w}}_{n}}\right) \approx {\arg }_{n}\max \left( {{\mathbf{w}}_{n} \cdot \widehat{\mathbf{x}}}\right) , \tag{1}
$$

Table 1: Test accuracy (%)
| ResNet-26 | Coincidence detection |
| --- | --- |
| ${82.2} \pm {0.3}$ (from [10]) | 82.7 (this work) |
where $\widehat{\mathbf{x}}$ is the zero-mean unit-variance normalization of $\mathbf{x}$.

Thus, the approximation in Eq. (1) allows $y\left( \mathbf{x}\right)$ to be learnt by a logistic regression on the normalized dataset. We discard the pre-training data, pre-process the training and test spectra by a range-1 mean filter, and use the default method for logistic regression in Wolfram Mathematica (L2-regularization $= {0.0001}$, optimization method $=$ limited-memory BFGS). Code is provided in the supplemental material for reproducibility.

## 3 Result and outlook

The coincidence detection (via normalized logistic regression) method introduced here achieves a test accuracy greater than ResNet-26 (see Table 1), and it took less than 3 seconds to train the classifier on a modern desktop (without any special-purpose GPUs). See https://openreview.net/attachment?id=xT5rDp5VqK0&name=supplementary_material for the Wolfram Mathematica and Python code, plots of the training and test data, and confusion matrices. Note that the training data was fit all at once, to 100% accuracy. With a more neuromorphic coincidence detection method and a learning method that adapts the synaptic delays $\mathbf{w}$ continually, to keep track of changing environmental conditions, we may achieve even greater accuracies.

## Reviewer contributions

This paper was previously reviewed at NeurIPS 2022 (https://openreview.net/forum?id=xT5rDp5VqK0) but not recommended for immediate publication, for reasons including that it has only been tested on a single dataset. I believe it is good to present this work in a reasonable venue and thereby motivate stakeholders to test coincidence detection on more datasets. Below, I summarize relevant contributions as author responses to a selection of reviews. Note that the review process also revealed a typo in the supplementary material, where it was wrongly commented that "standardization is performed across samples..." - it should instead read "standardization is performed samplewise - each sample has a zero-mean and unit-variance across its features...".

Reviewer V6Wx: The simple "coincidence" detector gives very good results compared with a deep net. Although this could be demonstrating an advantage of coincidence detection, it may also be that the classification problem is actually not that difficult. Paper [10] seems to only apply a deep net to the problem. The authors only apply a linear function. What do other functions do? k-nearest neighbors, SVMs, ...?

1. Is there no more suitable implementation of coincidence detection, e.g., within a spiking net?

2. Is your model in eq 1 not simply a perceptron? (With normalized inputs and a max on the outputs)

Response: Ref. [10] already explored traditional methods (k-NN, SVM) and justified their choice of a deep learning method.

1. Yes, references [11] and [12] point to this, but they are expensive to implement on conventional hardware. Future work should compare how the approximate implementation of coincidence detection compares to more advanced methods on neuromorphic hardware.

2. Yes, is it not beautiful? Did you notice that the normalization is performed across a different axis in comparison to the standard suggestion of Python sklearn for logistic regression? (Conventional wisdom is that it is a bad idea to do a normalization in this way, which is why perceptrons were not employed with this kind of pre-processing until now.
This paper instead argues from the theory of coincidence detection that it is actually a good idea for preprocessing datasets that are compatible with the analog match problem, which turns out to be true upon evaluation on this empirical dataset.)

Reviewer ctyh: There is an interesting empirical observation here, yet the narrative is too shallow...

Response: The result in Table 1 speaks for itself (i.e., here is a novel method with better performance in comparison to the impactful deep learning method by a large team of researchers at Stanford University, cited over 300 times). Of course, this novel method will need to be applied to other datasets (which is why it needs to be presented at a conference to gain the attention of fellow researchers). Moreover, references [7], [11], [12] have been thoughtfully chosen as related work.

Reviewer QphW: Authors should consider generating more stats on their accuracy % and provide a more thorough comparison with the baseline (ResNet-26). Further, authors should share additional experiments breaking down the contribution of standardization and smoothing steps. Lastly, explaining why their model fares better than the deep learning model...

Response: The reviewer asks for more stats, but is that not futile, given that this is anyhow based on performance on a single dataset? The focus of this paper is to demonstrate that the approximation for coincidence detection introduced here is able to solve an analog match problem (discussed insightfully by Hopfield [7], but not as well known as it should be). That the model fares slightly better is a bonus; actually, deep learning methods can surely learn a coincidence detector (albeit in a computationally expensive way). Moreover, in order to ensure reproducibility, the method was tested in two programming languages: Mathematica (yielding an accuracy of 82.7%, as reported in the main text) and Python (yielding an accuracy of 82.9%, as reported in the supplementary material).

## References

[1] Frank Rosenblatt. The perceptron, a perceiving and recognizing automaton (Project Para). Cornell Aeronautical Laboratory, Inc. Report no. 85-460-1, 1957.

[2] Frank Rosenblatt. The perceptron: A theory of statistical separability in cognitive systems. Cornell Aeronautical Laboratory, Inc. Report no. VG-1196-G-1, 1958.

[3] Frank Rosenblatt. Principles of neurodynamics: Perceptrons and the theory of brain mechanisms. Cornell Aeronautical Laboratory, Inc. Report no. 1196-G-8, 1961.

[4] Frank Rosenblatt. Cognitive systems research program. Technical report, Cornell University, Ithaca, New York, 1971.

[5] Frank Rosenblatt. A description of the Tobermory perceptron. In Collected Technical Papers, volume 2. Cornell University, Ithaca, New York, 1963.

[6] George Nagy. Neural networks - then and now. IEEE Transactions on Neural Networks, 2(2):316-318, 1991.

[7] John J Hopfield. Pattern recognition computation using action potential timing for stimulus representation. Nature, 376(6535):33-36, 1995.

[8] Nabil Imam and Thomas A Cleland. Rapid online learning and robust recall in a neuromorphic olfactory circuit. Nature Machine Intelligence, 2(3):181-191, 2020.

[9] G. Tanaka, T. Yamane, J.B. Héroux, R. Nakane, N. Kanazawa, S. Takeda, H. Numata, D. Nakano, and A. Hirose. Recent advances in physical reservoir computing: A review. Neural Networks, 115:100-123, 2019.
[10] Chi-Sing Ho, Neal Jean, Catherine A Hogan, Lena Blackmon, Stefanie S Jeffrey, Mark Holodniy, Niaz Banaei, Amr AE Saleh, Stefano Ermon, and Jennifer Dionne. Rapid identification of pathogenic bacteria using Raman spectroscopy and deep learning. Nature Communications, 10(1):1-8, 2019.

[11] Nelson Spruston. Pyramidal neurons: dendritic structure and synaptic integration. Nature Reviews Neuroscience, 9(3):206-221, 2008.

[12] Wondimu Teka, Toma M Marinov, and Fidel Santamaria. Neuronal spike timing adaptation described with a fractional leaky integrate-and-fire model. PLoS Computational Biology, 10(3):e1003526, 2014. \ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/c4A2txzl82P/Initial_manuscript_tex/Initial_manuscript.tex b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/c4A2txzl82P/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..a398a284fb9bfbf7b51cd8df0b375740ae7ba59c --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/c4A2txzl82P/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,70 @@ +§ COINCIDENCE DETECTION IS ALL YOU NEED

Celestine Preetham Lawrence ${}^{ + }$

${}^{ + }$ Bernoulli Institute and Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9700 AB, Groningen, Netherlands.

§ ABSTRACT

This paper demonstrates that the performance of coincidence detection, a classic neuromorphic signal processing method found in Rosenblatt's perceptrons with distributed transmission times, can be competitive with a state-of-the-art deep learning method for pattern recognition. Hence, we cannot remain comfortably numb to the prevailing dogma that efficient matrix-vector operations are all we need, but should enquire with greater vigour whether more advanced continual learning methods (running on spiking neural network hardware with neuromodulatory mechanisms at multiple timescales) can beat the accuracy of task-specific deep learning methods. With regard to deployability, coincidence detection is an interpretable shallow learning method and its applications provide a commercial use-case for neuromorphic hardware such as Intel Loihi.

§ 1 INTRODUCTION

Frank Rosenblatt and his team (1957-1971) built and analyzed several kinds of perceptrons [1, 2, 3, 4]: networks of sensory, association and receptor neurons, which in contemporary deep learning terminology correspond to the input, hidden and output layers. The propagating signals were binary (compatible with a spike-based view), the synaptic delays (transmission times) and weights (memory states) could be analog, the network could be recurrent and was often randomly interconnected, and learning often meant tuning the weights of the association-receptor subnetwork by some error-corrective reinforcement. The synaptic delays were not learnt but instead randomly distributed in Rosenblatt's Tobermory perceptrons [5], and this was rich enough to realize concentration-invariant and uniform time-warp-invariant spatiotemporal classification by logarithmic encoding and coincidence detection. However, the processing speed of commercial von Neumann computers advanced exponentially and outperformed neuromorphic hardware on yesterdecade's benchmarks [6].
The Tobermory perceptron was forgotten; nevertheless, the utility of logarithmic encoding and coincidence detection was formalized by John Hopfield [7] as an efficient solution to the analog match problem in pattern recognition.

Now, half a century after the accidental demise of Rosenblatt, neuromorphic signal processors are making a comeback. For example: (1) Intel's Loihi, with spike-time-dependent plasticity mechanisms for learning olfactory pattern recognizers [8]; (2) physical reservoir computing networks [9], where the interconnectivity of the hidden layer is left unchanged, closer to the spirit of Rosenblatt's randomly interconnected sensory-association subnetwork.

Here, to strengthen the case for revisiting classic methods on novel and modern hardware, we evaluate the performance of coincidence detection in comparison to a deep learning method. Nothing more, nothing less, although this work was triggered by a rabid interest in employing artificial intelligence to sniff out infections and prevent future pandemics.

§ 2 METHODS

Here, we consider the work [10] of an interdisciplinary team, where a 26-layer convolutional neural network with residual connections (ResNet-26) was successfully trained for classifying pathogenic bacteria by Raman spectroscopy. In their work, there are $N = {30}$ classes of bacterial isolates; they begin with a ResNet-26 pre-trained on $N \times {2000}$ spectra, then for each class $n = 1 : N$ there are $M = {100}$ training spectra, and similarly $N \times M = {3000}$ test spectra. Each spectrum $\mathbf{x}$ contains 1000 floating-point numbers ranging between 0 and 1. Although compute-intensive, their deep learning method proved to be a tool of great convenience for pattern recognition in a challenging dataset, where intra-isolate spectra were often more dissimilar than inter-isolate spectra.

Our method to tackle the above dataset is inspired by the theory of how coincidence detection [7] in animal brains is fundamental for odour classification in complex and turbulent mixtures. Each class $n$ has a vector representation ${\mathbf{w}}_{n}$ that is learnt, and an input vector $\mathbf{x}$ results in an output class $y\left( \mathbf{x}\right) = {\arg }_{n}\max \left( {\mathbf{x} \land {\mathbf{w}}_{n}}\right)$, where we introduce the operator $\land$ to represent the coincidence between two signals. The analytical nature of coincidence detection depends on the specificities of the ion channels and the membranes involved [11], and may even incorporate nonlinear leaky-integrate [12] multiple-timescale mechanisms. We do not yet have a complete theory of neuromorphic signal processing, so here we introduce an approximation for the translation- and scale-invariant property of coincidence detection as

$$
{\arg }_{n}\max \left( {\mathbf{x}\bigwedge {\mathbf{w}}_{n}}\right) \approx {\arg }_{n}\max \left( {{\mathbf{w}}_{n} \cdot \widehat{\mathbf{x}}}\right) , \tag{1}
$$

Table 1: Test accuracy (%)

| ResNet-26 | Coincidence detection |
| --- | --- |
| ${82.2} \pm {0.3}$ (from [10]) | 82.7 (this work) |

where $\widehat{\mathbf{x}}$ is the zero-mean unit-variance normalization of $\mathbf{x}$.

Thus, the approximation in Eq. (1) allows $y\left( \mathbf{x}\right)$ to be learnt by a logistic regression on the normalized dataset. We discard the pre-training data, pre-process the training and test spectra by a range-1 mean filter, and use the default method for logistic regression in Wolfram Mathematica (L2-regularization $= {0.0001}$, optimization method $=$ limited-memory BFGS). Code is provided in the supplemental material for reproducibility.
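For orientation, a minimal Python analogue of this pipeline is sketched below; the 3-point moving average stands in for the range-1 mean filter, the sklearn defaults only roughly mirror the Mathematica settings quoted above, and the random arrays are stand-ins for the Raman spectra and isolate labels.

```python
# Minimal sketch of the method: range-1 mean filter, samplewise
# standardization, then logistic regression. Random data are stand-ins
# for the Raman spectra; hyperparameters only roughly mirror Mathematica's.
import numpy as np
from sklearn.linear_model import LogisticRegression

def mean_filter(X):
    # range-1 mean filter: average each point with its immediate neighbours
    Xp = np.pad(X, ((0, 0), (1, 1)), mode="edge")
    return (Xp[:, :-2] + Xp[:, 1:-1] + Xp[:, 2:]) / 3.0

def standardize_samplewise(X):
    # each spectrum gets zero mean and unit variance across its own features
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X_train, y_train = rng.random((3000, 1000)), rng.integers(0, 30, 3000)
X_test, y_test = rng.random((3000, 1000)), rng.integers(0, 30, 3000)

clf = LogisticRegression(solver="lbfgs", max_iter=1000)
clf.fit(standardize_samplewise(mean_filter(X_train)), y_train)
print("test accuracy:", clf.score(standardize_samplewise(mean_filter(X_test)), y_test))
```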
§ 3 RESULT AND OUTLOOK

The coincidence detection (via normalized logistic regression) method introduced here achieves a test accuracy greater than ResNet-26 (see Table 1), and it took less than 3 seconds to train the classifier on a modern desktop (without any special-purpose GPUs). See https://openreview.net/attachment?id=xT5rDp5VqK0&name=supplementary_material for the Wolfram Mathematica and Python code, plots of the training and test data, and confusion matrices. Note that the training data was fit all at once, to 100% accuracy. With a more neuromorphic coincidence detection method and a learning method that adapts the synaptic delays $\mathbf{w}$ continually, to keep track of changing environmental conditions, we may achieve even greater accuracies.

§ REVIEWER CONTRIBUTIONS

This paper was previously reviewed at NeurIPS 2022 (https://openreview.net/forum?id=xT5rDp5VqK0) but not recommended for immediate publication, for reasons including that it has only been tested on a single dataset. I believe it is good to present this work in a reasonable venue and thereby motivate stakeholders to test coincidence detection on more datasets. Below, I summarize relevant contributions as author responses to a selection of reviews. Note that the review process also revealed a typo in the supplementary material, where it was wrongly commented that "standardization is performed across samples..." - it should instead read "standardization is performed samplewise - each sample has a zero-mean and unit-variance across its features...".

Reviewer V6Wx: The simple "coincidence" detector gives very good results compared with a deep net. Although this could be demonstrating an advantage of coincidence detection, it may also be that the classification problem is actually not that difficult. Paper [10] seems to only apply a deep net to the problem. The authors only apply a linear function. What do other functions do? k-nearest neighbors, SVMs, ...?

1. Is there no more suitable implementation of coincidence detection, e.g., within a spiking net?

2. Is your model in eq 1 not simply a perceptron? (With normalized inputs and a max on the outputs)

Response: Ref. [10] already explored traditional methods (k-NN, SVM) and justified their choice of a deep learning method.

1. Yes, references [11] and [12] point to this, but they are expensive to implement on conventional hardware. Future work should compare how the approximate implementation of coincidence detection compares to more advanced methods on neuromorphic hardware.

2. Yes, is it not beautiful? Did you notice that the normalization is performed across a different axis in comparison to the standard suggestion of Python sklearn for logistic regression? (Conventional wisdom is that it is a bad idea to do a normalization in this way, which is why perceptrons were not employed with this kind of pre-processing until now. This paper instead argues from the theory of coincidence detection that it is actually a good idea for preprocessing datasets that are compatible with the analog match problem, which turns out to be true upon evaluation on this empirical dataset.)
Reviewer ctyh: There is an interesting empirical observation here, yet the narrative is too shallow...

Response: The result in Table 1 speaks for itself (i.e., here is a novel method with better performance in comparison to the impactful deep learning method by a large team of researchers at Stanford University, cited over 300 times). Of course, this novel method will need to be applied to other datasets (which is why it needs to be presented at a conference to gain the attention of fellow researchers). Moreover, references [7], [11], [12] have been thoughtfully chosen as related work.

Reviewer QphW: Authors should consider generating more stats on their accuracy % and provide a more thorough comparison with the baseline (ResNet-26). Further, authors should share additional experiments breaking down the contribution of standardization and smoothing steps. Lastly, explaining why their model fares better than the deep learning model...

Response: The reviewer asks for more stats, but is that not futile, given that this is anyhow based on performance on a single dataset? The focus of this paper is to demonstrate that the approximation for coincidence detection introduced here is able to solve an analog match problem (discussed insightfully by Hopfield [7], but not as well known as it should be). That the model fares slightly better is a bonus; actually, deep learning methods can surely learn a coincidence detector (albeit in a computationally expensive way). Moreover, in order to ensure reproducibility, the method was tested in two programming languages: Mathematica (yielding an accuracy of 82.7%, as reported in the main text) and Python (yielding an accuracy of 82.9%, as reported in the supplementary material). \ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_md/Initial_manuscript.md b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..d9dbade1e6a7dfbe09bb36c9e30353dfec984222 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,201 @@ +# A Bandits Approach to Intelligent Tutoring Systems using Concept Evolution

Sudha ${\mathrm{S}}^{ + }$ , Arun Rajkumar ${}^{ + }$

${}^{ + }$ Indian Institute of Technology Madras

## Abstract

With the huge number of learning resources available online today, Intelligent Tutoring Systems (ITS) are needed more than ever. An ITS is a system that personalizes the course contents for each learner. In this paper, we address the problem of suggesting effective and efficient learning sequences to learners based on their knowledge levels. We take a multi-armed bandit approach to action selection, suggesting at each step the action with the highest estimated learning outcome. We model the actions as Beta distributions and the learners' knowledge levels as concept vectors. We also automatically learn the prerequisite relationships that can exist among the concepts. We propose a novel algorithm that achieves this goal efficiently. Our experimental results show that our algorithm's performance is comparable to that of the optimal algorithm.

## 1 Introduction

Traditional teaching methods utilize a uniform approach for all learners, disregarding individual abilities and needs.
Intelligent Tutoring Systems (ITS) adapt teaching strategies to each learner's unique parameters. This paper presents an ITS framework for devising tailored learning-action sequences for each learner, optimizing concept learning. We model this problem as a multi-armed bandit setting, viewing learning actions as arms and the learning level gained as rewards. The model also considers prerequisite relationships between concepts.

Our approach allows a learner's knowledge level to range between 0 and 1, a shift from the conventional binary $\{0,1\}$ states. This accounts for varying mastery levels of a concept. Our framework permits each learning action to contribute variably to multiple concepts. We also incorporate prerequisite relationships between concepts with varying intensity levels. The algorithm autonomously learns these prerequisite relationships, negating the need for expert input.

## 2 Related Work

[1] suggests a Zone of Proximal Development (ZPD)-based action sequence selection, incorporating multi-armed bandits to maximize rewards. Their method relies heavily on time-consuming ZPD graph creation by an expert, a dependency absent in our approach.

[2] applies a POMDP approach to ITS in a question-and-answer context, limiting learner concept understanding to binary $\{0,1\}$ values. Our method allows continuous knowledge levels in $\left\lbrack {0,1}\right\rbrack$, uses practical learning actions like videos, and doesn't require prerequisite information. [3] also applies a POMDP approach to ITS, but solving a POMDP is generally challenging due to the polynomial number of states.

[4] embeds Personalised Learning Actions (PLA) between fixed assessment sequences to boost immediate assessment performance using the CLUB & ACLUB algorithms. Unlike them, our goal is efficient concept learning, not immediate assessment performance.

[5] proposes a Thompson Sampling & Knowledge Gradient variation for PLAs to improve immediate assessment performance, but doesn't address prerequisite dependencies. Our focus is on concept learning. [6] merges automatic curriculum generation with the ZPDES bandits approach, framing curriculum generation as a graph coloring problem. This approach requires intensive ZPD graph initialization.

## 3 Problem Setting & Modelling Assumptions

$N$ denotes the number of learners in an ITS system aiming to teach $K$ concepts. Each learner $i$'s knowledge state is represented by a vector ${C}_{i} \in {\left\lbrack 0,1\right\rbrack }^{K}$, with ${C}_{ij}$ signifying learner $i$'s mastery of concept $j$ (e.g., ${C}_{23} = {0.7}$ means learner 2 has a 70% grasp of concept 3). The ITS's objective is to teach all $N$ learners all $K$ concepts to a threshold level of mastery $\theta$.

The ITS possesses a set of actions $A$ (e.g., videos, lectures) affecting the learner's knowledge level. The system learns the impact of these actions over time. Concept relationships are considered in two cases: one assumes independence, and the other considers prerequisite relationships affecting the impact of an action on a concept.

Learner-specific parameters determine individual learning rates, accommodating variations between fast and slow learners. The ITS must deduce these rates. We assume that learner knowledge evolves in a Markovian fashion and that knowledge-level estimates are noisy.
## Independent Concepts:

For independent concepts, the effect of action $a$ on concept $i$ at round $t$ is given as follows:

$$
{c}_{i}^{t + 1} = {c}_{i}^{t} + \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \cdot \left( {1 - {c}_{i}^{t}}\right) \tag{1}
$$

where $a$ is the action chosen at time step $t$ and $\operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right)$ is the CDF value of action $a$'s Beta distribution at ${c}_{i}^{t}$.

## Dependent Concepts:

The value update for dependent concepts is as follows:

$$
{c}_{i}^{t + 1} = {c}_{i}^{t} + \mathop{\sum }\limits_{{j = 1}}^{D}{c}_{j}^{t}{\lambda }_{j \rightarrow i} \cdot \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \cdot \left( {1 - {c}_{i}^{t}}\right) \tag{2}
$$

where $D$ is the number of prerequisite concepts of ${c}_{i}$ and $\mathop{\sum }\limits_{{j = 1}}^{D}{\lambda }_{j \rightarrow i} = 1$.

Here again, $\operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right)$ is the value of the Beta CDF at ${c}_{i}^{t}$.

Learner Specific Parameter: To model each learner's unique abilities, we use a learner-specific parameter ${\gamma }_{i} \in \left\lbrack {0,1}\right\rbrack$. The effect of an action on a learner then depends on the action, the specific learner, and the learner's current knowledge state. This is made formal below:

$$
{c}_{i}^{t + 1} = {c}_{i}^{t} + {\gamma }_{i} \cdot \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \cdot \left( {1 - {c}_{i}^{t}}\right) \tag{3}
$$

Parameters to Estimate: The ITS system is completely specified by the $2 * K$ action parameters that govern the Beta CDFs, the $K * N$ parameters that describe the learners' knowledge states, and the $N$ learner-specific parameters.

## 4 ITS-BPECE - Bandits Based Parameter Estimation for Concept Evolution

This section gives an overview of the parameter estimation for independent and dependent concepts. The parameters that need to be estimated for independent and dependent concepts are different; hence, the estimation approaches vary as well. The subsequent subsections give an overview of the proposed algorithm, which we call Bandits based Parameter Estimation for Concept Evolution (BPECE); the section ends with pseudocode for BPECE in Algorithm 1.

## Algorithm Overview:

We start off by choosing an action uniformly at random until each action has been chosen at least ${A}_{min}$ (a small value) times. We observe the data thus generated, which looks like:

$$
\left\{ {\ldots ,\left( {{C}_{i1}^{t},{C}_{i1}^{t + 1}}\right) ,\left( {{C}_{i2}^{{t}^{\prime }},{C}_{i2}^{{t}^{\prime } + 1}}\right) ,\ldots }\right\} \tag{4}
$$

For an independent concept, we use zeroth-order (ZO) optimization to estimate the action parameters. The objective function for the ZO in the independent case is the per-transition residual

$$
f\left( {{\alpha }_{a},{\beta }_{a}}\right) = \left( \frac{{c}_{i}^{t + 1} - {c}_{i}^{t}}{1 - {c}_{i}^{t}}\right) - \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \tag{5}
$$

which the estimation drives toward zero over the observed transitions. We run the ZO estimation after every ${D}_{min}$ collected data samples, and we increase the value of ${D}_{min}$ over time.
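To make the update rule and the estimation target concrete, here is a small Python sketch; scipy's beta.cdf plays the role of $\operatorname{Beta}({\alpha }_{a},{\beta }_{a},\cdot)$, and squaring and summing the Eq. (5) residuals over the observed transitions is our assumption about how the ZO objective is aggregated.

```python
# Sketch of the concept update (Eqs. 1/3) and the ZO residual (Eq. 5).
# Assumption: residuals are squared and summed over observed transitions.
import numpy as np
from scipy.stats import beta

def step_independent(c, alpha_a, beta_a, gamma=1.0):
    """One round of Eq. (3): move knowledge level c toward 1."""
    return c + gamma * beta.cdf(c, alpha_a, beta_a) * (1.0 - c)

def zo_objective(params, transitions):
    """Aggregate Eq. (5) residuals over observed (c_t, c_{t+1}) pairs."""
    a, b = params
    res = [(c1 - c0) / (1.0 - c0) - beta.cdf(c0, a, b) for c0, c1 in transitions]
    return float(np.sum(np.square(res)))

# Example: simulate transitions under true parameters, then score candidates
rng = np.random.default_rng(0)
true_a, true_b = 2.0, 5.0
cs = rng.uniform(0.05, 0.8, size=20)
transitions = [(c, step_independent(c, true_a, true_b)) for c in cs]
print(zo_objective((true_a, true_b), transitions))  # ~0 at the true parameters
print(zo_objective((1.0, 1.0), transitions))        # larger elsewhere
```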
In the dependent-concepts case, we have to estimate not only the action parameters but also the ${\lambda }_{j \rightarrow i}$ parameters for all dependency pairs (i, j). We start off by fixing the values ${\lambda }_{j \rightarrow i} = \frac{1}{K - 1}$ for all (i, j). We estimate the Beta parameters using the ZO optimization. To estimate the ${\lambda }_{j \rightarrow i}$ parameters, we fix the Beta parameters thus obtained and train a Neural Network (NN) for each dependent concept, with the concept vector as the input and the objective value as the output.

We alternately fix ${\lambda }_{j \rightarrow i}$ and estimate the Beta parameters, then fix the Beta parameters and estimate ${\lambda }_{j \rightarrow i}$, until the parameter values converge. Algorithm 1 presents the pseudocode of the algorithm.

We incorporate the MAB idea of choosing the arms with the highest reward by picking the actions that push the learner concept vectors the farthest. We use a version of $\epsilon$-greedy where we pick the best action with probability $\left( {1 - \epsilon }\right)$ and an action uniformly at random with probability $\epsilon$; a compact sketch of this selection rule is given after Algorithm 1. While we use an $\epsilon$-greedy strategy, more sophisticated bandit strategies can also be used in the framework.

Algorithm 1: BPECE

---

Input: A set of learner concept vector estimates ${C}_{j}, j = 1,2,\ldots,N$; parameters ${A}_{min}, {D}_{min}, \epsilon$

Output: Next action ${a}_{j}$ for each learner $j$

for $j \leftarrow 1$ to $N$ do

  if $\exists a \in A$ where $\operatorname{count}\left( a\right) < {A}_{min}$ then

    ${a}_{j} \leftarrow a$

  end

  else

    for ${c}_{ji} \in {C}_{j}$ do

      if ${c}_{ji}$ is independent then

        Estimate $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$ using zeroth-order optimization on Eq. (5)

      end

      if ${c}_{ji}$ is dependent then

        Initialize ${\lambda }_{k \rightarrow ji}$ values uniformly $\forall \left( {k, ji}\right)$

        while ${\lambda }_{k \rightarrow ji}$ and $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$ have not converged do

          Fix ${\lambda }_{k \rightarrow ji}$; estimate $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$ using zeroth-order optimization on Eq. (5)

          Fix $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$; estimate ${\lambda }_{k \rightarrow ji}$ using the neural nets

        end

      end

    end

    Update ${C}_{{j}_{a}}^{\prime }$ using Equations (2) and (3) $\forall a \in A$

    With probability $1 - \epsilon$: ${a}_{j} \leftarrow \arg \mathop{\max }\limits_{{a \in A}}{\begin{Vmatrix}{C}_{{j}_{a}}^{\prime } - {C}_{j}\end{Vmatrix}}_{2}$

    With probability $\epsilon$: ${a}_{j} \leftarrow$ an action $a \in A$ chosen uniformly at random

  end

end

---
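As referenced above, the $\epsilon$-greedy selection step of Algorithm 1 can be sketched as follows; predict_next is a hypothetical helper that applies the Eq. (2)/(3) update with the current parameter estimates.

```python
# Sketch of the ε-greedy action choice in Algorithm 1. `predict_next` is a
# hypothetical helper returning the estimated next concept vector C'_{j,a}.
import numpy as np

def choose_action(C_j, actions, predict_next, eps, rng=np.random.default_rng()):
    if rng.random() < eps:
        return actions[rng.integers(len(actions))]  # explore uniformly
    # exploit: the action with the largest estimated one-step progress
    return max(actions, key=lambda a: np.linalg.norm(predict_next(C_j, a) - C_j))
```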
## 5 Experiments

Setup: As our performance metric, we use the number of steps/rounds it takes for all concept values to exceed 0.9. We compare our algorithm against an optimal algorithm: one that has all the true parameter values of the actions and the dependencies and uses them to greedily pick the best action for the learners.

## Results for Independent Concepts

Figure 1 depicts the results for the independent case where we vary different parameters.

![019640d8-202e-7239-aead-065d5f5616df_3_213_180_1367_285_0.jpg](images/019640d8-202e-7239-aead-065d5f5616df_3_213_180_1367_285_0.jpg)

Figure 1: Number of Steps for the Independent Concepts with varying parameters

![019640d8-202e-7239-aead-065d5f5616df_3_560_573_678_260_0.jpg](images/019640d8-202e-7239-aead-065d5f5616df_3_560_573_678_260_0.jpg)

Figure 2: Total & Average Number of Steps for Independent Concepts for a varying number of learners with the learner-specific parameter

## Results for Independent Concepts with Student-Specific Parameter

Figure 2 shows the results for the case where we include a learner-specific parameter $\gamma$ that accounts for each learner's learning rate. We vary the number of learners from 2 through 50 while fixing the number of actions and concepts.

## Results for Dependent Concepts

Figure 3 shows the number of steps taken for dependent concepts, while Figure ?? shows the average number of steps taken per learner. We vary the number of dependent concepts from 1 to 4 to show how the algorithm performs in each case.

![019640d8-202e-7239-aead-065d5f5616df_3_388_1403_1003_264_0.jpg](images/019640d8-202e-7239-aead-065d5f5616df_3_388_1403_1003_264_0.jpg)

Figure 3: Number of Steps for Dependent Concepts with Varying Number of Dependent Concepts

## 6 Conclusion & Future Work

We proposed a novel bandits-based parameter estimation approach to suggest learning actions to learners based on each learner's knowledge level. We considered the cases where the concepts are independent and dependent. In the dependent case, we took into consideration the prerequisite relationships between concepts. We modeled each learning action's effect on a concept as a function of a Beta distribution. For the prerequisite relationships, we trained NNs to estimate the degree of dependence. Finally, we used an $\epsilon$-greedy approach to choose the best action for the learners. We back our proposed method with extensive experimental results.

As future work, we can extend the learner-specific parameters to account for each learner's different learning rate in the dependent-concepts case as well.

## References

[1] Benjamin Clement, Didier Roy, Pierre-Yves Oudeyer, and Manuel Lopes. Multi-armed bandits for intelligent tutoring systems. arXiv preprint arXiv:1310.3174, 2013.

[2] Fangju Wang. POMDP framework for building an intelligent tutoring system. In CSEDU (1), pages 233-240, 2014.

[3] Jeremiah T Folsom-Kovarik, Gita Sukthankar, and Sae Schatz. Tractable POMDP representations for intelligent tutoring systems. ACM Transactions on Intelligent Systems and Technology (TIST), 4(2):1-22, 2013.

[4] Andrew S Lan and Richard G Baraniuk. A contextual bandits framework for personalized learning action selection. In EDM, pages 424-429, 2016.

[5] Indu Manickam, Andrew S Lan, and Richard G Baraniuk. Contextual multi-armed bandit algorithms for personalized learning action selection. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6344-6348. IEEE, 2017.

[6] Tong Mu, Karan Goel, and Emma Brunskill. Program2Tutor: Combining automatic curriculum generation with multi-armed bandits for intelligent tutoring systems. In Conference on Neural Information Processing Systems, 2017.
\ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_tex/Initial_manuscript.tex b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..bfe0a355ea285c6bd09b2399a993384d878e96b4 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/kjTVwUVVWP/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,183 @@ +§ A BANDITS APPROACH TO INTELLIGENT TUTORING SYSTEMS USING CONCEPT EVOLUTION

Sudha ${\mathrm{S}}^{ + }$ , Arun Rajkumar ${}^{ + }$

${}^{ + }$ Indian Institute of Technology Madras

§ ABSTRACT

With the huge number of learning resources available online today, Intelligent Tutoring Systems (ITS) are needed more than ever. An ITS is a system that personalizes the course contents for each learner. In this paper, we address the problem of suggesting effective and efficient learning sequences to learners based on their knowledge levels. We take a multi-armed bandit approach to action selection, suggesting at each step the action with the highest estimated learning outcome. We model the actions as Beta distributions and the learners' knowledge levels as concept vectors. We also automatically learn the prerequisite relationships that can exist among the concepts. We propose a novel algorithm that achieves this goal efficiently. Our experimental results show that our algorithm's performance is comparable to that of the optimal algorithm.

§ 1 INTRODUCTION

Traditional teaching methods utilize a uniform approach for all learners, disregarding individual abilities and needs. Intelligent Tutoring Systems (ITS) adapt teaching strategies to each learner's unique parameters. This paper presents an ITS framework for devising tailored learning-action sequences for each learner, optimizing concept learning. We model this problem as a multi-armed bandit setting, viewing learning actions as arms and the learning level gained as rewards. The model also considers prerequisite relationships between concepts.

Our approach allows a learner's knowledge level to range between 0 and 1, a shift from the conventional binary $\{0,1\}$ states. This accounts for varying mastery levels of a concept. Our framework permits each learning action to contribute variably to multiple concepts. We also incorporate prerequisite relationships between concepts with varying intensity levels. The algorithm autonomously learns these prerequisite relationships, negating the need for expert input.

§ 2 RELATED WORK

[1] suggests a Zone of Proximal Development (ZPD)-based action sequence selection, incorporating multi-armed bandits to maximize rewards. Their method relies heavily on time-consuming ZPD graph creation by an expert, a dependency absent in our approach.

[2] applies a POMDP approach to ITS in a question-and-answer context, limiting learner concept understanding to binary $\{0,1\}$ values. Our method allows continuous knowledge levels in $\left\lbrack {0,1}\right\rbrack$, uses practical learning actions like videos, and doesn't require prerequisite information. [3] also applies a POMDP approach to ITS, but solving a POMDP is generally challenging due to the polynomial number of states.
[4] embeds Personalised Learning Actions (PLAs) between fixed assessment sequences to boost immediate assessment performance using the CLUB & ACLUB algorithms. Unlike them, our goal is efficient concept learning, not immediate assessment performance.

[5] proposes a Thompson Sampling & Knowledge Gradient variation for PLAs to improve immediate assessment performance, but doesn't address prerequisite dependencies. Our focus is on concept learning. [6] merges automatic curriculum generation with the ZPDES bandits approach, framing curriculum generation as a graph coloring problem. This approach requires intensive ZPD graph initialization.

§ 3 PROBLEM SETTING & MODELLING ASSUMPTIONS

$N$ denotes the count of learners in an ITS system aiming to learn $K$ concepts. Each learner $i$ 's knowledge state is indicated by a vector ${C}_{i} \in {\left\lbrack 0,1\right\rbrack }^{K}$ , with ${C}_{ij}$ signifying learner $i$ 's mastery of concept $j$ (e.g., ${C}_{23} = {0.7}$ means learner 2 has a ${70}\%$ grasp of concept 3). The ITS system's objective is to teach all $N$ learners all $K$ concepts to a threshold level of mastery $\theta$ .

The ITS possesses a set of actions $A$ (e.g., videos, lectures) affecting the learner's knowledge level. The system learns the impact of these actions over time. Concept relationships are considered in two cases: one assumes independence, and the other considers prerequisite relationships affecting the impact of an action on a concept.

Learner-specific parameters determine individual learning rates, accommodating variations between fast and slow learners. The ITS must deduce these rates. We assume learner knowledge evolves in a Markovian manner, and knowledge level estimates are assumed to be noisy.

§ INDEPENDENT CONCEPTS:

For independent concepts, the effect of action $a$ on concept $i$ at round $t$ is given as follows:

$$
{c}_{i}^{t + 1} = {c}_{i}^{t} + \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \cdot \left( {1 - {c}_{i}^{t}}\right) \tag{1}
$$

where $a$ is the action chosen at time step $t$ and $\operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right)$ is the CDF value of action $a$ 's Beta distribution at ${c}_{i}^{t}$ .

§ DEPENDENT CONCEPTS:

The value update for dependent concepts is as follows:

$$
{c}_{i}^{t + 1} = {c}_{i}^{t} + \mathop{\sum }\limits_{{j = 1}}^{D}{c}_{j}^{t}{\lambda }_{j \rightarrow i} \cdot \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \cdot \left( {1 - {c}_{i}^{t}}\right) \tag{2}
$$

where $D$ is the number of prerequisite concepts of ${c}_{i}$ and $\mathop{\sum }\limits_{{j = 1}}^{D}{\lambda }_{j \rightarrow i} = 1$ .

Here again, $\operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right)$ is the value of the Beta CDF at ${c}_{i}^{t}$ .

Learner-Specific Parameter: To model a learner's unique abilities, we use a learner-specific parameter ${\gamma }_{i} \in \left\lbrack {0,1}\right\rbrack$ . The effect of an action on a learner will then depend on the action, the specific learner $\&$ the current knowledge state of the learner.
This is made formal below:

$$
{c}_{i}^{t + 1} = {c}_{i}^{t} + {\gamma }_{i} \cdot \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \cdot \left( {1 - {c}_{i}^{t}}\right) \tag{3}
$$

Parameters to Estimate: The ITS system is completely specified using $2K$ action parameters that govern the Beta CDFs, $K \cdot N$ parameters that describe the learners' knowledge states, and $N$ learner-specific parameters.

§ 4 ITS-BPECE - BANDITS BASED PARAMETER ESTIMATION FOR CONCEPT EVOLUTION

This section gives an overview of the parameter estimation for the independent $\&$ dependent concepts. The parameters that need to be estimated for the independent and the dependent concepts are different. Hence, the estimation approaches vary as well. The subsequent subsections give an overview of the algorithm we propose, which we call Bandits-based Parameter Estimation for Concept Evolution (BPECE), and the section ends with pseudocode for BPECE in Algorithm 1.

§ ALGORITHM OVERVIEW:

We start off by choosing an action uniformly at random until each action has been chosen a minimum of ${A}_{min}$ (a small value) times. We observe the data thus generated, which looks as follows:

$$
\left\{ {\ldots ,\left( {{C}_{i1}^{t},{C}_{i1}^{t + 1}}\right) ,\left( {{C}_{i2}^{{t}^{\prime }},{C}_{i2}^{{t}^{\prime } + 1}}\right) ,\ldots }\right\} \tag{4}
$$

If the concept in question is independent, we use Zeroth-Order (ZO) optimization to estimate the action parameters. The ZO objective function in the case of independent concepts is:

$$
f\left( {{\alpha }_{a},{\beta }_{a}}\right) = \left( \frac{{c}_{i}^{t + 1} - {c}_{i}^{t}}{1 - {c}_{i}^{t}}\right) - \operatorname{Beta}\left( {{\alpha }_{a},{\beta }_{a},{c}_{i}^{t}}\right) \tag{5}
$$

We run the ZO estimation after every ${D}_{min}$ data samples we collect, and we increase the value of ${D}_{min}$ over time.

In the dependent concepts case, not only do we have to estimate the action parameters, but also the ${\lambda }_{j \rightarrow i}$ parameters for all dependency pairs $(i, j)$ . We start off by fixing the values ${\lambda }_{j \rightarrow i} = \frac{1}{K - 1}$ for all $(i, j)$ . We estimate the Beta parameters using ZO optimization. To estimate the ${\lambda }_{j \rightarrow i}$ parameters, we fix the Beta parameters thus obtained. We train a Neural Network (NN) for each dependent concept with the concept vector as the input and the objective value as the output.

We alternately fix ${\lambda }_{j \rightarrow i}$ and estimate the Beta parameters, then fix the Beta parameters and estimate ${\lambda }_{j \rightarrow i}$ , until the parameter values converge. Algorithm 1 presents the pseudocode of the algorithm.

We incorporate the MAB idea of choosing the arms that have the highest reward by picking those actions that push the learner concept vectors the farthest. We use a version of $\epsilon$ -greedy where we pick the best action with probability $\left( {1 - \epsilon }\right)$ and an action uniformly at random with probability $\epsilon$ . While we use an $\epsilon$ -greedy strategy, more sophisticated bandit strategies can also be used in the framework.
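For illustration, the following minimal Python sketch simulates the concept evolution of Equations 1 and 3 together with the $\epsilon$ -greedy action choice just described. The Beta parameters, the two-action setup, and the 0.9 mastery threshold (taken from the experiments section) are illustrative placeholders rather than values estimated by BPECE.

```python
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(0)

def apply_action(c, alpha_a, beta_a, gamma=1.0):
    """Equations 1 and 3: move each concept value toward 1 by a fraction
    given by action a's Beta CDF evaluated at the current value."""
    gain = beta_dist.cdf(c, alpha_a, beta_a)      # Beta(alpha_a, beta_a, c)
    return c + gamma * gain * (1.0 - c)

def choose_action(c, actions, eps=0.1):
    """Epsilon-greedy: with probability 1 - eps pick the action whose
    simulated update pushes the concept vector farthest (L2 norm)."""
    if rng.random() < eps:
        return int(rng.integers(len(actions)))
    shifts = [np.linalg.norm(apply_action(c, a, b) - c) for (a, b) in actions]
    return int(np.argmax(shifts))

# Toy run: 3 concepts, 2 actions with (placeholder) estimated Beta parameters.
c = np.array([0.2, 0.5, 0.1])
actions = [(2.0, 5.0), (5.0, 2.0)]
steps = 0
while (c < 0.9).any():                            # performance metric: # steps
    a = choose_action(c, actions)
    c = apply_action(c, *actions[a])
    steps += 1
print(steps, c)
```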
Algorithm 1: BPECE

Input: A set of learner concept vector estimates, ${C}_{j}, j = 1,2,\ldots N$ ; parameters ${A}_{min},{D}_{min},\epsilon$

Output: Next action ${a}_{j}$ for each learner $j$

for $j \leftarrow 1$ to $N$ do
    if $\exists a \in A$ where $\operatorname{count}\left( a\right) < {A}_{min}$ then
        ${a}_{j} \leftarrow a$
    end
    else
        for ${c}_{ji} \in {C}_{j}$ do
            if ${c}_{ji}$ is independent then
                Estimate $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$ using Zeroth-Order Optimization on Equation 5
            end
            if ${c}_{ji}$ is dependent then
                Initialize ${\lambda }_{k \rightarrow ji}$ values uniformly $\forall \left( {k, ji}\right)$
                while ${\lambda }_{k \rightarrow ji}$ and $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$ have not converged do
                    Fix ${\lambda }_{k \rightarrow ji}$
                    Estimate $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$ using Zeroth-Order Optimization on Equation 5
                    Fix $\left( {{\alpha }_{a},{\beta }_{a}}\right) \forall a \in A$
                    Estimate ${\lambda }_{k \rightarrow ji}$ using the Neural Nets
                end
            end
        end
        Update ${C}_{{j}_{a}}^{\prime }$ using Equations 3 & 2 $\forall a \in A$
        With probability $1 - \epsilon$ : ${a}_{j} \leftarrow \arg \mathop{\max }\limits_{{a \in A}}{\begin{Vmatrix}{C}_{{j}_{a}}^{\prime } - {C}_{j}\end{Vmatrix}}_{2}$
        With probability $\epsilon$ : ${a}_{j} \leftarrow$ an action $a \in A$ chosen uniformly at random
    end
end

§ 5 EXPERIMENTS

Setup: As a performance metric, we use the number of steps/rounds it takes for all concept values to go beyond 0.9. We compare our algorithm's results against an optimal algorithm. The optimal algorithm we consider has access to the true parameter values of the actions and the dependencies and uses them to greedily pick the best action for the learners.

§ RESULTS FOR INDEPENDENT CONCEPTS

Figure 1 depicts the results for the independent case, where we vary different parameters.

(Plot panels: (a) # learners vs # Steps; (b) # Concepts vs # Steps; (c) # Actions vs # Steps, # Stds = 10; (d) # Actions vs # Steps, # Stds = 20.)

Figure 1: Number of Steps for the Independent Concepts with varying parameters

(Plot panels: (a) # learners vs # Steps; (b) # learners vs # Avg Steps.)

Figure 2: Total & Average Number of Steps for Independent Concepts for varying number of learners with the learner-specific parameter

§ RESULTS FOR INDEPENDENT CONCEPTS WITH STUDENT-SPECIFIC PARAMETER

Figure 2 shows the results for the case where we include a learner-specific parameter $\gamma$ that accounts for each learner's learning rate. We vary the number of learners from 2 through 50 while fixing the number of actions and concepts.

§ RESULTS FOR DEPENDENT CONCEPTS

Figure 3 shows the results for the number of steps taken for dependent concepts, while Figure ?? shows the average number of steps taken per learner. We vary the number of dependent concepts from 1 to 4 to show how the algorithm performs in each case.

(Plot panels: (a) # learners vs # Steps; (b) # Concepts vs # Steps; (c) # Actions vs # Steps.)
Figure 3: Number of Steps for Dependent Concepts with Varying Number of Dependent Concepts

§ 6 CONCLUSION & FUTURE WORK

We proposed a novel bandits-based parameter estimation approach to suggest learning actions to learners based on each learner's knowledge level. We considered the cases where the concepts are independent and dependent. In the dependent case, we took into consideration the prerequisite relationships between various concepts. We modeled each learning action's effect on a concept as a function of a Beta distribution. For the prerequisite relationships, we trained NNs to estimate the degree of dependence. Finally, we used an $\epsilon$ -greedy approach to choose the best action for the learners. We back our proposed method with extensive experimental results.

As future work, we can extend the learner-specific parameters to account for each learner's different learning rate for the dependent concepts as well. \ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/vj3XDDuF3s/Initial_manuscript_md/Initial_manuscript.md b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/vj3XDDuF3s/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..08324125ec77642ea86976347c99de02f4983bbc --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/vj3XDDuF3s/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,121 @@ +# GAN-MPC: Training Model Predictive Controllers with Parameterized Cost Functions using Demonstrations from Non-identical Experts

Returaj Burnwal ${}^{+ \dagger }$ , Anirban Santara ${}^{ * }$ , Nirav P. Bhatt ${}^{+ \dagger }$ ,

Balaraman Ravindran ${}^{+ \dagger }$ and Gaurav Aggarwal*

${}^{ + }$ Indian Institute of Technology, Madras

${}^{ \dagger }$ Robert Bosch Centre for Data Science and AI

*Google Research

## Abstract

Model predictive control (MPC) is a popular approach for trajectory optimization in practical robotics applications due to its guarantees on safety, optimality, generalizability, interpretability, and explainability. Traditional MPC needs a hand-crafted cost function for trajectory optimization. However, some behaviors are complex and hand-crafting is difficult and error-prone. A special class of MPC policies called Learnable-MPC addresses this difficulty using imitation learning from expert demonstrations. However, they require the demonstrator and the imitator agents to have identical state-action spaces and transition dynamics, which is hard to satisfy in many practical applications of robotics. In this paper, we address this practical problem through a novel approach that uses a generative adversarial network (GAN) to match the state-trajectory distributions of the demonstrator and the imitator. We evaluate our approach on a variety of simulated robotics tasks from the DeepMind Control suite and demonstrate the efficacy of our approach at learning the demonstrator's behavior without having to copy their actions.

## 1 Introduction

Large-scale deployment of robots in real-world human-centric environments is faced with the challenges of safety, social compatibility and robustness to unforeseen changes in the environment [1]. Model predictive control (MPC) [2, 3, 4, 5] is a popular approach for trajectory optimization in robotics.
MPC policies can optimize trajectory parameters under kinodynamic and safety constraints, and they provide guarantees on safety, optimality, and generalizability. However, it is difficult to hand-craft an MPC objective function for complex behaviors. Learnable MPC [6, 7, 8, 9, 10, 11, 1] addresses this difficulty using imitation learning. Learnable MPC policies use a parameterized objective function that can be trained from expert demonstrations. The learnable parameters also allow the policy to easily adapt to a wide variety of robot-environment situations. However, even state-of-the-art learnable MPC formulations require the demonstrator and the imitator to be identical. This is a limitation because robots in real-world applications may have different dynamics. Changes to a robot's dynamics can be caused by internal changes, such as mechanical faults [12] and dropping battery charge-level [13], and by external changes, such as changes in the operating environment, e.g., surface friction [14], or the robot's task, e.g., increased load [13]. In this paper, we address the practical problem of training Learnable-MPC policies when the demonstrator and the imitator do not share the same dynamics and their state spaces only have a partial overlap. Our proposed method uses a generative adversarial network (GAN) to match the state-trajectory distributions of the demonstrator and the imitator by minimizing the Jensen-Shannon (JS) divergence [15]. The GAN consists of two networks: a generator and a discriminator. The generator is a neural network modeling the learnable cost function. This, along with the engineered cost, is minimized by the imitator to produce trajectories. The discriminator is responsible for distinguishing between state trajectories from the demonstrator and the imitator. At Nash equilibrium [16], the state-trajectory distributions of the demonstrator and the imitator would be identical. Empirical evaluation on three continuous control tasks of the DeepMind Control Suite [17] shows that our method is effective in mimicking complex behaviors even when the dynamics of the demonstrator and the imitator are widely different.

## 2 Problem Statement

Imitation learning [18] involves two agents: the demonstrator (also referred to as the "expert") $\mathbf{D}$ and the imitator $\mathbf{I}$ . Let ${\mathcal{M}}^{\mathbf{D}} = \left( {{S}^{\mathbf{D}},{A}^{\mathbf{D}},{T}^{\mathbf{D}},{\rho }^{\mathbf{D}}}\right)$ and ${\mathcal{M}}^{\mathbf{I}} = \left( {{S}^{\mathbf{I}},{A}^{\mathbf{I}},{T}^{\mathbf{I}},{\rho }^{\mathbf{I}}}\right)$ be the Markov Decision Processes (MDPs) [19] associated with $\mathbf{D}$ and $\mathbf{I}$ , respectively. Equation 1 describes the optimization problem solved by MPC.

$$
{\mathbf{a}}_{1 : H - 1}^{ * } = \;\arg \mathop{\min }\limits_{{\mathbf{a}}_{1 : H - 1}}J\left( {{s}_{t},{a}_{1 : H - 1}}\right) \tag{1}
$$

$$
= \;\arg \mathop{\min }\limits_{{\mathbf{a}}_{1 : H - 1}}\mathop{\sum }\limits_{{t = 1}}^{{H - 1}}{C}_{stg}\left( {{s}_{t},{a}_{t}, t}\right) + \gamma {C}_{term}\left( {s}_{H}\right)
$$

$$
\text{s.t.}\forall t,{s}_{t + 1} = \widetilde{T}\left( {{s}_{t},{a}_{t}}\right) , g\left( {{s}_{t},{a}_{t}}\right) = 0, h\left( {{s}_{t},{a}_{t}}\right) \leq 0
$$

$H$ is the planning horizon of the MPC. ${C}_{stg} : S \times A \rightarrow \mathbb{R}$ is the staging cost that applies to each step of the plan and ${C}_{term} : S \rightarrow \mathbb{R}$ is the terminal cost that applies only to the final state.
$g : S \times A \rightarrow \mathbb{R}$ and $h : S \times A \rightarrow \mathbb{R}$ are equality and inequality constraints on the solution. $\gamma$ is a hyperparameter that controls the relative weightage of the staging and the terminal costs. $\widetilde{T}$ is a local model of the transition dynamics $T$ around the initial control guess. At every step of planning, the MPC plans a trajectory ${\mathbf{a}}_{1 : H - 1}^{ * }$ of length $H$ that minimizes the objective in Equation 1. To address the inevitability of modeling error in the estimation of $\widetilde{T}$ , MPC only executes the first action ${a}_{1}^{ * }$ and updates $\widetilde{T}$ with the observed outcome. We denote an MPC policy by ${\pi }^{MPC} : S \rightarrow A$ where ${\pi }^{MPC}\left( {s}_{t}\right) = {a}_{1}^{ * }$ . This planning algorithm is repeated for every step of the agent's trajectory. Motivated by real-world applications in robotics and accessibility, we study the problem of imitation learning of Learnable MPC policies when the demonstrator and the imitator do not share the same dynamics, i.e., ${T}^{\mathbf{D}} \neq {T}^{\mathbf{I}}$ . Our method can also be applied to settings where the state and action spaces do not overlap completely, by considering only the overlapping state and action variables.

### 2.1 Challenges

MPC requires a model of the transition dynamics for planning. This is challenging in real-world complex continuous control tasks with large state-action spaces. Some parts of the state-action space are difficult to reach and hence difficult to collect data from. Also, parts of the state-action space are often inaccessible due to hard kinodynamic constraints. Neural networks provide an efficient way of modeling highly non-linear functions over large state-action spaces. However, they find it hard to model the constraints and end up hallucinating in the inaccessible areas, often leading to infeasible solutions. MPC solvers like iLQR can be highly sensitive to the "initial control guess" in complex non-linear dynamical systems. The challenge is to predict an ${a}_{0 : H - 1}^{g}$ close to the optimal solution ${a}_{0}^{ * }$ . The terminal cost ${C}_{term}$ is used to measure how close the agent would get to a "target" state at the end of the planning horizon $H$ . For dynamic tasks like Cheetah Run, the target state is different for each time step, making it difficult to calculate ${C}_{term}$ .

## 3 Proposed Method: GAN-MPC

The proposed approach uses the GAN framework [20], which consists of a generator and a discriminator. Given a set of expert demonstrations, the task of the discriminator is to learn an accurate binary classifier to tell apart expert demonstrations from other trajectories. The task of the generator is to produce samples that are indistinguishable from the demonstrator's trajectories. Our generator is the Learnable MPC policy ${\pi }^{MPC}\left( {\cdot \mid {\Phi }^{gen}}\right)$ of $\mathbf{I}$ along with a model of the transition dynamics ${\widetilde{T}}^{\mathbf{I}}$ . ${\Phi }^{gen}$ is the set of learnable parameters of the terminal cost function.
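As a concrete illustration of the receding-horizon loop of Section 2 (a toy sketch, not the solver used in this work), the following Python snippet plans with random shooting in place of iLQR on an assumed one-dimensional system, executes only the first planned action, and refits a crude local dynamics model $\widetilde{T}$ from the observed transition. The dynamics, horizon, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, target = 10, 1.0                    # planning horizon and a fixed target state

def true_step(s, a):                   # the real (unknown) environment dynamics
    return s + 0.1 * a

theta = 0.05                           # crude local model T_tilde: s' = s + theta * a

def plan_first_action(s, theta, n_samples=256):
    """Random-shooting stand-in for the MPC solve: sample action sequences,
    roll them out under the local model, keep the best terminal cost."""
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, H))
    terminal = s + theta * seqs.sum(axis=1)       # rollout under the model
    best = np.argmin((terminal - target) ** 2)    # terminal cost only, for brevity
    return seqs[best, 0]

s = 0.0
for _ in range(50):
    a = plan_first_action(s, theta)               # plan a full sequence, but ...
    s_next = true_step(s, a)                      # ... execute only the first action
    if abs(a) > 1e-6:                             # refit the local model from
        theta = 0.5 * theta + 0.5 * (s_next - s) / a   # the observed outcome
    s = s_next
```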
Given a demonstrated trajectory ${\tau }_{s}^{\mathbf{D}} = \left( {{s}_{0}^{\mathbf{D}},{s}_{1}^{\mathbf{D}},{s}_{2}^{\mathbf{D}},\ldots }\right) \in {\mathcal{X}}_{s}^{\mathbf{D}}$ , a generator rollout ${\tau }^{\mathbf{I}, g} = \left( {{s}_{0}^{\mathbf{I}, g},{a}_{0}^{\mathbf{I}, g},{s}_{1}^{\mathbf{I}, g},{a}_{1}^{\mathbf{I}, g},{s}_{2}^{\mathbf{I}, g},{a}_{2}^{\mathbf{I}, g},\ldots ,{s}_{P - 1}^{\mathbf{I}, g}}\right)$ of maximum length $P$ (a hyperparameter) is created by starting from the same initial state ${s}_{0}^{\mathbf{I}, g} = {s}_{0}^{\mathbf{D}}$ , solving for actions using the MPC policy ${a}_{t}^{\mathbf{I}, g} = {\pi }^{MPC}\left( {s}_{t}^{\mathbf{I}, g}\right)$ and the next state from the transition dynamics model ${s}_{t + 1}^{\mathrm{I}, g} = {\widetilde{T}}^{\mathrm{I}, g}\left( {{s}_{t}^{\mathrm{I}, g},{a}_{t}^{\mathrm{I}, g}}\right)$ . We denote the state trajectory distribution of the generator rollouts by ${\mathcal{G}}_{s}\left( {\cdot \mid {\Phi }^{gen},{\Theta }^{\mathrm{I}}}\right)$ . The discriminator $Q\left( {\cdot \mid {\Phi }^{disc}}\right)$ is modelled using an LSTM network with parameters ${\Phi }^{disc}$ .

The performance of an MPC policy is strongly dependent on the accuracy of the transition dynamics model $T$ . As noted in Section 2.1, learning a model of ${T}^{\mathbf{I}}$ can be challenging in large state-action spaces. The dynamics function must be trained on $\left( {{s}_{t},{a}_{t},{s}_{t + 1}}\right)$ transitions collected by the agent while interacting with the environment. In order to model the function accurately in the regions of the state-action space traversed during the execution of the target task, enough data must be collected from those regions. This is not a big issue when $\mathbf{D}$ and $\mathbf{I}$ are identical, as the demonstrated trajectories ${\mathcal{X}}^{\mathbf{D}}$ can be used for training ${T}^{\mathbf{I}}$ . However, in our case, getting $\mathbf{I}$ to the desired regions of the state-action space can be as hard as learning the policy. We address this challenge by pre-training ${T}^{\mathbf{I}}$ on ${\mathcal{X}}^{\mathbf{D}}$ for a small number of epochs under the assumption that the demonstrator and the imitator dynamics have some degree of similarity. We continue to update the dynamics model in each training iteration with transitions recorded from physical interaction of $\mathbf{I}$ with the environment with ${\pi }^{MPC}$ . We use the popular iLQR solver in our experiments. As noted in Section 2.1, the performance is a strong function of the initial control guess ${a}_{0 : H - 1}^{\mathbf{I}, g}$ . We again make the assumption that the demonstrator and imitator dynamics have some degree of similarity. We train a behavior cloning policy ${\pi }_{\chi }^{BC} : {S}^{\mathbf{I}} \rightarrow {A}^{\mathbf{I}}$ with parameters $\chi$ on ${\mathcal{X}}^{\mathbf{D}}$ . At each iteration of iLQR, we set ${a}_{t}^{\mathbf{I}, g} = {\pi }^{BC}\left( {\widetilde{s}}_{t}^{\mathbf{I}}\right)$ . The terminal component of the MPC cost function ${C}_{term}$ is intended to estimate how far the agent would be from the target state at the end of the planning horizon. In dynamic tasks like Cheetah Run, the target state is not singular, making it difficult to specify ${C}_{term}$ .
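To make the LSTM discriminator $Q\left( {\cdot \mid {\Phi }^{disc}}\right)$ described above concrete, here is a minimal sketch of a trajectory classifier of that kind; the hidden size, state dimension, and random placeholder batches are assumptions for exposition, not the settings used in the experiments.

```python
import torch
import torch.nn as nn

class TrajectoryDiscriminator(nn.Module):
    """Reads a state trajectory and scores how likely it is to come from
    the demonstrator (the role played by Q in the text above)."""
    def __init__(self, state_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, traj):                       # traj: (batch, time, state_dim)
        _, (h_n, _) = self.lstm(traj)              # final hidden state summarizes
        return torch.sigmoid(self.head(h_n[-1]))   # probability of "demonstrator"

# Standard binary cross-entropy GAN loss on demonstrator vs. generator rollouts.
disc = TrajectoryDiscriminator(state_dim=4)
expert = torch.randn(8, 25, 4)                     # placeholder trajectory batches
rollout = torch.randn(8, 25, 4)
bce = nn.BCELoss()
loss = bce(disc(expert), torch.ones(8, 1)) + bce(disc(rollout), torch.zeros(8, 1))
loss.backward()
```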
With the motivation of setting the target state to somewhere the expert would be in the next time step, we train a neural network model ${\mathcal{N}}_{\Psi } : {S}^{\mathbf{D}} \rightarrow {S}^{\mathbf{D}}$ with trainable parameters $\Psi$ on ${\mathcal{X}}^{\mathbf{D}}$ to predict the next state ${s}_{t + 1}^{\mathbf{D}}$ given the current state ${s}_{t}^{\mathbf{D}}$ .

![019640e0-59d4-73af-b68f-cdd9e988ae36_2_375_152_1043_309_0.jpg](images/019640e0-59d4-73af-b68f-cdd9e988ae36_2_375_152_1043_309_0.jpg)

Figure 1: Physical properties of the imitators relative to the demonstrators in our experiments. We have 4 imitators each for Cartpole-Balance and Pendulum-Standup. In the case of Cheetah-Run, we have 12 imitators with different levels of disability and different torso-masses, as denoted by the set product " $\times$ " in the figure.

Our algorithm, GAN-MPC, starts by pre-training the dynamics model of the imitator on $\mathcal{D}$ for a small number of epochs. In the main training loop, in the first step, we let the imitator interact with the environment for $K$ time steps and use this data to update the dynamics model by running a small number of epochs ${N}^{dyn}$ of training. Next, the discriminator network is trained on ${\mathcal{D}}^{s}$ and the imitator's state trajectories. In the final step, the learnable parameters of the MPC policy and the relative weight of the engineered and learnable cost components are updated slowly [21].

## 4 Experiments

Our experimental study aims to understand whether GAN-MPC can learn an expert's skills by trying to visit the same sequence of states and planning an appropriate sequence of actions, even though the imitator's actions may be different from the expert's due to differences in dynamics. We evaluate the efficacy of GAN-MPC on three continuous control tasks from the DeepMind Control suite: CartPole-Balance, Pendulum-Standup, and Cheetah Run. For each task, we train a SAC agent for sampling demonstrator trajectories. We choose a set of imitator agents that have similar morphologies to the demonstrators but different physical properties, as described in Figure 1. We compare the performance of our proposed algorithm (GAN-MPC) with Behavioral Cloning (BC) and two Learnable-MPC formulations that minimize the L2 distance between the demonstrator and imitator trajectories: a) L2-MPC-SA, which matches state-action trajectories, and b) L2-MPC-S, which matches state-only trajectories of the demonstrator and the imitator. In many practical applications, the entire state space of the demonstrator may not be observable or the state spaces of the demonstrator and the imitator may only overlap partially. We also study this case in the Cheetah Run task environment.

In all experiments, a training set of 50 trajectories is collected from the demonstrator. The L2-MPC-SA, L2-MPC-S and GAN-MPC imitators are allowed to interact with the environment for a total of 5000 steps for Cartpole-Balance and Pendulum-Swingup, and 10000 steps for Cheetah-Run. The performance of each agent is measured by rolling out 50 trajectories with different random seeds and computing the average trajectory reward ${R}^{\tau }$ . We measure the performance of the imitators in terms of the average trajectory reward relative to the demonstrator, ${\widetilde{R}}^{\tau } = \frac{{R}_{\text{imitator }}^{\tau }}{{R}_{\text{demonstrator }}^{\tau }}$ . Figures 2, 3 and 4 provide a summary of the results. The bars represent means and the whiskers represent standard deviations.
We observe that GAN-MPC outperforms or matches the baselines in most of the settings. We also observe that the performance of GAN-MPC degrades gracefully (like most of the baselines) as the dynamics of the imitator become more and more different from the demonstrator's. In our experiments on Cheetah-Run, we observe that the disabled imitators, in their quest to learn the fit demonstrator's skills, learn alternative strategies to work around their disabilities. This establishes GAN-MPC as a viable step towards achieving the goal of learning skills from non-identical experts without having to copy their actions. In Figure 4, we also observe that under partial observability of the demonstrator's state space, the GAN-MPC agents ("GAN-MPC: ${S}^{\mathbf{D}} \subset {S}^{\mathbf{I}}$ ") are able to learn the desired behavior and outperform the baselines that have access to the full state observations. This shows the viability of GAN-MPC as a method to learn skills from experts with non-identical dynamics and partial observability of their state spaces.

![019640e0-59d4-73af-b68f-cdd9e988ae36_3_330_245_412_366_0.jpg](images/019640e0-59d4-73af-b68f-cdd9e988ae36_3_330_245_412_366_0.jpg)

Figure 2: Results of the Pendulum-Swingup experiment. The imitators are denoted by ${P}_{x}$ , where $P$ stands for pole mass and $x = {P}_{\text{imitator }}/{P}_{\text{demonstrator }}$ .

![019640e0-59d4-73af-b68f-cdd9e988ae36_3_1051_161_412_425_0.jpg](images/019640e0-59d4-73af-b68f-cdd9e988ae36_3_1051_161_412_425_0.jpg)

Figure 3: Results of the Cartpole-Balance experiment. The imitators are denoted by ${P}_{x}{C}_{y}{D}_{z}$ , where $P, C$ and $D$ stand for pole mass, cart mass and cart dimension, respectively. The subscripts $x, y$ and $z$ denote ratios relative to the demonstrator, e.g., $x = {P}_{\text{imitator }}/{P}_{\text{demonstrator }}$ . The legend of Figure 2 has been followed.

![019640e0-59d4-73af-b68f-cdd9e988ae36_3_298_889_1194_320_0.jpg](images/019640e0-59d4-73af-b68f-cdd9e988ae36_3_298_889_1194_320_0.jpg)

Figure 4: Results of the Cheetah-Run experiment. The captions of the sub-figures mention the "torso-mass" of the imitators relative to the demonstrator. As described in Section 4 and Figure 1, we have three categories of imitators in terms of disability: No Disability (ND), Front Ankle broken (FA) and Back Ankle broken (BA). All the agents except "GAN-MPC: ${S}^{\mathbf{D}} \subset {S}^{\mathbf{I}}$ " are trained on the same set of demonstrations ${\mathcal{X}}_{s}^{\mathbf{D}}$ . As described in Section 4, "GAN-MPC: ${S}^{\mathbf{D}} \subset {S}^{\mathbf{I}}$ " is trained on ${\mathcal{X}}_{s}^{\mathbf{D}}$ , but only a subset of the state variables are exposed.

## 5 Conclusion

In this paper, we study imitation learning of MPC policies with parameterised cost functions. We consider the practical challenges of mismatch in the dynamics of the demonstrator and the imitator agents and partial observability of the state space of the demonstrator. We propose a novel approach called GAN-MPC that minimizes the statistical divergence between state-trajectories of the demonstrator and the imitator using the GAN framework. Experiments on continuous control tasks of the DeepMind Control suite demonstrate the viability of the proposed method. The GAN-MPC framework needs significantly fewer samples of real-world interaction of the imitator compared to RL-based methods, and this makes it viable for real-world applications.
## References

[1] Xuesu Xiao, Tingnan Zhang, Krzysztof Choromanski, Edward Lee, Anthony Francis, Jake Varley, Stephen Tu, Sumeet Singh, Peng Xu, Fei Xia, et al. Learning model predictive controllers with real-time attention for real-world navigation. arXiv preprint arXiv:2209.10780, 2022.

[2] Manfred Morari, Carlos E Garcia, and David M Prett. Model predictive control: theory and practice. IFAC Proceedings Volumes, 21(4):1-12, 1988.

[3] Yang Wang and Stephen Boyd. Fast model predictive control using online optimization. IEEE Transactions on Control Systems Technology, 18(2):267-278, 2009.

[4] Spyros Maniatopoulos, Dimitra Panagou, and Kostas J Kyriakopoulos. Model predictive control for the navigation of a nonholonomic vehicle with field-of-view constraints. In 2013 American Control Conference, pages 3967-3972. IEEE, 2013.

[5] Thomas Fork, H Eric Tseng, and Francesco Borrelli. Models and predictive control for nonplanar vehicle navigation. In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), pages 749-754. IEEE, 2021.

[6] Rahul Shridhar and Douglas J Cooper. A tuning strategy for unconstrained SISO model predictive control. Industrial & Engineering Chemistry Research, 36(3):729-746, 1997.

[7] Rahul Shridhar and Douglas J Cooper. A tuning strategy for unconstrained multivariable model predictive control. Industrial & Engineering Chemistry Research, 37(10):4003-4016, 1998.

[8] Jorge L Garriga and Masoud Soroush. Model predictive control tuning methods: A review. Industrial & Engineering Chemistry Research, 49(8):3505-3515, 2010.

[9] William Edwards, Gao Tang, Giorgos Mamakoukas, Todd Murphey, and Kris Hauser. Automatic tuning for data-driven model predictive control. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 7379-7385. IEEE, 2021.

[10] Andre Shigueo Yamashita, Antônio Carlos Zanin, and Darci Odloak. Tuning of model predictive control with multi-objective optimization. Brazilian Journal of Chemical Engineering, 33:333-346, 2016.

[11] Valarmathi Ramasamy, Rakesh Kumar Sidharthan, Ramkumar Kannan, and Guruprasath Muralidharan. Optimal tuning of model predictive controller weights using genetic algorithm with interactive decision tree for industrial cement kiln process. Processes, 7(12):938, 2019.

[12] Vandi Verma, Geoff Gordon, Reid Simmons, and Sebastian Thrun. Real-time fault diagnosis [robot fault diagnosis]. IEEE Robotics & Automation Magazine, 11(2):56-66, 2004.

[13] Marco Hutter, Christian Gehring, Andreas Lauber, Fabian Gunther, Carmine Dario Bellicoso, Vassilios Tsounis, Péter Fankhauser, Remo Diethelm, Samuel Bachmann, Michael Blösch, et al. ANYmal - toward legged robots for harsh environments. Advanced Robotics, 31(17):918-931, 2017.

[14] Lei Hao, Roberto Pagani, Manuel Beschi, and Giovanni Legnani. Dynamic and friction parameters of an industrial robot: Identification, comparison and repetitiveness analysis. Robotics, 10(1):49, 2021.

[15] Gérard Biau, Benoît Cadre, Maxime Sangnier, and Ugo Tanielian. Some theoretical properties of GANs. 2020.

[16] Farzan Farnia and Asuman Ozdaglar. Do GANs always have Nash equilibria? In International Conference on Machine Learning, pages 3029-3039. PMLR, 2020.

[17] Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. DeepMind Control Suite. arXiv preprint arXiv:1801.00690, 2018.
[18] Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):1-35, 2017.

[19] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, 2018.

[20] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020.

[21] Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992. \ No newline at end of file diff --git a/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/vj3XDDuF3s/Initial_manuscript_tex/Initial_manuscript.tex b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/vj3XDDuF3s/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..0a1c543d0643058d56c4e0531fd67f355f7e4786 --- /dev/null +++ b/RBCDSAI/RBCDSAI DAI/RBCDSAI DAI 2023/RBCDSAI DAI 2023 Conference/vj3XDDuF3s/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,75 @@ +§ GAN-MPC: TRAINING MODEL PREDICTIVE CONTROLLERS WITH PARAMETERIZED COST FUNCTIONS USING DEMONSTRATIONS FROM NON-IDENTICAL EXPERTS

Returaj Burnwal ${}^{+ \dagger }$ , Anirban Santara ${}^{ * }$ , Nirav P. Bhatt ${}^{+ \dagger }$ ,

Balaraman Ravindran ${}^{+ \dagger }$ and Gaurav Aggarwal*

${}^{ + }$ Indian Institute of Technology, Madras

${}^{ \dagger }$ Robert Bosch Centre for Data Science and AI

*Google Research

§ ABSTRACT

Model predictive control (MPC) is a popular approach for trajectory optimization in practical robotics applications due to its guarantees on safety, optimality, generalizability, interpretability, and explainability. Traditional MPC needs a hand-crafted cost function for trajectory optimization. However, some behaviors are complex and hand-crafting is difficult and error-prone. A special class of MPC policies called Learnable-MPC addresses this difficulty using imitation learning from expert demonstrations. However, they require the demonstrator and the imitator agents to have identical state-action spaces and transition dynamics, which is hard to satisfy in many practical applications of robotics. In this paper, we address this practical problem through a novel approach that uses a generative adversarial network (GAN) to match the state-trajectory distributions of the demonstrator and the imitator. We evaluate our approach on a variety of simulated robotics tasks from the DeepMind Control suite and demonstrate the efficacy of our approach at learning the demonstrator's behavior without having to copy their actions.

§ 1 INTRODUCTION

Large-scale deployment of robots in real-world human-centric environments is faced with the challenges of safety, social compatibility and robustness to unforeseen changes in the environment [1]. Model predictive control (MPC) [2, 3, 4, 5] is a popular approach for trajectory optimization in robotics. MPC policies can optimize trajectory parameters under kinodynamic and safety constraints, and they provide guarantees on safety, optimality, and generalizability. However, it is difficult to hand-craft an MPC objective function for complex behaviors. Learnable MPC [6, 7, 8, 9, 10, 11, 1] addresses this difficulty using imitation learning.
Learnable MPC policies use a parameterized objective function that can be trained from expert demonstrations. The learnable parameters also allow the policy to easily adapt to a wide variety of robot-environment situations. However, even state-of-the-art learnable MPC formulations require the demonstrator and the imitator to be identical. This is a limitation because robots in real-world applications may have different dynamics. Changes to a robot's dynamics can be caused by internal changes, such as mechanical faults [12] and dropping battery charge-level [13], and by external changes, such as changes in the operating environment, e.g., surface friction [14], or the robot's task, e.g., increased load [13]. In this paper, we address the practical problem of training Learnable-MPC policies when the demonstrator and the imitator do not share the same dynamics and their state spaces only have a partial overlap. Our proposed method uses a generative adversarial network (GAN) to match the state-trajectory distributions of the demonstrator and the imitator by minimizing the Jensen-Shannon (JS) divergence [15]. The GAN consists of two networks: a generator and a discriminator. The generator is a neural network modeling the learnable cost function. This, along with the engineered cost, is minimized by the imitator to produce trajectories. The discriminator is responsible for distinguishing between state trajectories from the demonstrator and the imitator. At Nash equilibrium [16], the state-trajectory distributions of the demonstrator and the imitator would be identical. Empirical evaluation on three continuous control tasks of the DeepMind Control Suite [17] shows that our method is effective in mimicking complex behaviors even when the dynamics of the demonstrator and the imitator are widely different.

§ 2 PROBLEM STATEMENT

Imitation learning [18] involves two agents: the demonstrator (also referred to as the "expert") $\mathbf{D}$ and the imitator $\mathbf{I}$ . Let ${\mathcal{M}}^{\mathbf{D}} = \left( {{S}^{\mathbf{D}},{A}^{\mathbf{D}},{T}^{\mathbf{D}},{\rho }^{\mathbf{D}}}\right)$ and ${\mathcal{M}}^{\mathbf{I}} = \left( {{S}^{\mathbf{I}},{A}^{\mathbf{I}},{T}^{\mathbf{I}},{\rho }^{\mathbf{I}}}\right)$ be the Markov Decision Processes (MDPs) [19] associated with $\mathbf{D}$ and $\mathbf{I}$ , respectively. Equation 1 describes the optimization problem solved by MPC.

$$
{\mathbf{a}}_{1 : H - 1}^{ * } = \;\arg \mathop{\min }\limits_{{\mathbf{a}}_{1 : H - 1}}J\left( {{s}_{t},{a}_{1 : H - 1}}\right) \tag{1}
$$

$$
= \;\arg \mathop{\min }\limits_{{\mathbf{a}}_{1 : H - 1}}\mathop{\sum }\limits_{{t = 1}}^{{H - 1}}{C}_{stg}\left( {{s}_{t},{a}_{t},t}\right) + \gamma {C}_{term}\left( {s}_{H}\right)
$$

$$
\text{ s.t. }\forall t,{s}_{t + 1} = \widetilde{T}\left( {{s}_{t},{a}_{t}}\right) ,g\left( {{s}_{t},{a}_{t}}\right) = 0,h\left( {{s}_{t},{a}_{t}}\right) \leq 0
$$

$H$ is the planning horizon of the MPC. ${C}_{stg} : S \times A \rightarrow \mathbb{R}$ is the staging cost that applies to each step of the plan and ${C}_{term} : S \rightarrow \mathbb{R}$ is the terminal cost that applies only to the final state. $g : S \times A \rightarrow \mathbb{R}$ and $h : S \times A \rightarrow \mathbb{R}$ are equality and inequality constraints on the solution. $\gamma$ is a hyperparameter that controls the relative weightage of the staging and the terminal costs. $\widetilde{T}$ is a local model of the transition dynamics $T$ around the initial control guess.
At every step of planning, the MPC plans a trajectory ${\mathbf{a}}_{1 : H - 1}^{ * }$ of length $H$ that minimizes the objective in Equation 1. To address the inevitability of modeling error in the estimation of $\widetilde{T}$ , MPC only executes the first action ${a}_{1}^{ * }$ and updates $\widetilde{T}$ with the observed outcome. We denote an MPC policy by ${\pi }^{MPC} : S \rightarrow A$ where ${\pi }^{MPC}\left( {s}_{t}\right) = {a}_{1}^{ * }$ . This planning algorithm is repeated for every step of the agent's trajectory. Motivated by real-world applications in robotics and accessibility, we study the problem of imitation learning of Learnable MPC policies when the demonstrator and the imitator do not share the same dynamics, i.e., ${T}^{\mathbf{D}} \neq {T}^{\mathbf{I}}$ . Our method can also be applied to settings where the state and action spaces do not overlap completely, by considering only the overlapping state and action variables.

§ 2.1 CHALLENGES

MPC requires a model of the transition dynamics for planning. This is challenging in real-world complex continuous control tasks with large state-action spaces. Some parts of the state-action space are difficult to reach and hence difficult to collect data from. Also, parts of the state-action space are often inaccessible due to hard kinodynamic constraints. Neural networks provide an efficient way of modeling highly non-linear functions over large state-action spaces. However, they find it hard to model the constraints and end up hallucinating in the inaccessible areas, often leading to infeasible solutions. MPC solvers like iLQR can be highly sensitive to the "initial control guess" in complex non-linear dynamical systems. The challenge is to predict an ${a}_{0 : H - 1}^{g}$ close to the optimal solution ${a}_{0}^{ * }$ . The terminal cost ${C}_{term}$ is used to measure how close the agent would get to a "target" state at the end of the planning horizon $H$ . For dynamic tasks like Cheetah Run, the target state is different for each time step, making it difficult to calculate ${C}_{term}$ .

§ 3 PROPOSED METHOD: GAN-MPC

The proposed approach uses the GAN framework [20], which consists of a generator and a discriminator. Given a set of expert demonstrations, the task of the discriminator is to learn an accurate binary classifier to tell apart expert demonstrations from other trajectories. The task of the generator is to produce samples that are indistinguishable from the demonstrator's trajectories. Our generator is the Learnable MPC policy ${\pi }^{MPC}\left( {\cdot \mid {\Phi }^{gen}}\right)$ of $\mathbf{I}$ along with a model of the transition dynamics ${\widetilde{T}}^{\mathbf{I}}$ . ${\Phi }^{gen}$ is the set of learnable parameters of the terminal cost function.
Given a demonstrated trajectory ${\tau }_{s}^{\mathbf{D}} = \left( {{s}_{0}^{\mathbf{D}},{s}_{1}^{\mathbf{D}},{s}_{2}^{\mathbf{D}},\ldots }\right) \in {\mathcal{X}}_{s}^{\mathbf{D}}$ , a generator rollout ${\tau }^{\mathbf{I},g} = \left( {{s}_{0}^{\mathbf{I},g},{a}_{0}^{\mathbf{I},g},{s}_{1}^{\mathbf{I},g},{a}_{1}^{\mathbf{I},g},{s}_{2}^{\mathbf{I},g},{a}_{2}^{\mathbf{I},g},\ldots ,{s}_{P - 1}^{\mathbf{I},g}}\right)$ of maximum length $P$ (a hyperparameter) is created by starting from the same initial state ${s}_{0}^{\mathbf{I},g} = {s}_{0}^{\mathbf{D}}$ , solving for actions using the MPC policy ${a}_{t}^{\mathbf{I},g} = {\pi }^{MPC}\left( {s}_{t}^{\mathbf{I},g}\right)$ and the next state from the transition dynamics model ${s}_{t + 1}^{\mathrm{I},g} = {\widetilde{T}}^{\mathrm{I},g}\left( {{s}_{t}^{\mathrm{I},g},{a}_{t}^{\mathrm{I},g}}\right)$ . We denote the state trajectory distribution of the generator rollouts by ${\mathcal{G}}_{s}\left( {\cdot \mid {\Phi }^{gen},{\Theta }^{\mathrm{I}}}\right)$ . The discriminator $Q\left( {\cdot \mid {\Phi }^{disc}}\right)$ is modelled using an LSTM network with parameters ${\Phi }^{disc}$ .

The performance of an MPC policy is strongly dependent on the accuracy of the transition dynamics model $T$ . As noted in Section 2.1, learning a model of ${T}^{\mathbf{I}}$ can be challenging in large state-action spaces. The dynamics function must be trained on $\left( {{s}_{t},{a}_{t},{s}_{t + 1}}\right)$ transitions collected by the agent while interacting with the environment. In order to model the function accurately in the regions of the state-action space traversed during the execution of the target task, enough data must be collected from those regions. This is not a big issue when $\mathbf{D}$ and $\mathbf{I}$ are identical, as the demonstrated trajectories ${\mathcal{X}}^{\mathbf{D}}$ can be used for training ${T}^{\mathbf{I}}$ . However, in our case, getting $\mathbf{I}$ to the desired regions of the state-action space can be as hard as learning the policy. We address this challenge by pre-training ${T}^{\mathbf{I}}$ on ${\mathcal{X}}^{\mathbf{D}}$ for a small number of epochs under the assumption that the demonstrator and the imitator dynamics have some degree of similarity. We continue to update the dynamics model in each training iteration with transitions recorded from physical interaction of $\mathbf{I}$ with the environment with ${\pi }^{MPC}$ . We use the popular iLQR solver in our experiments. As noted in Section 2.1, the performance is a strong function of the initial control guess ${a}_{0 : H - 1}^{\mathbf{I},g}$ . We again make the assumption that the demonstrator and imitator dynamics have some degree of similarity. We train a behavior cloning policy ${\pi }_{\chi }^{BC} : {S}^{\mathbf{I}} \rightarrow {A}^{\mathbf{I}}$ with parameters $\chi$ on ${\mathcal{X}}^{\mathbf{D}}$ . At each iteration of iLQR, we set ${a}_{t}^{\mathbf{I},g} = {\pi }^{BC}\left( {\widetilde{s}}_{t}^{\mathbf{I}}\right)$ . The terminal component of the MPC cost function ${C}_{term}$ is intended to estimate how far the agent would be from the target state at the end of the planning horizon. In dynamic tasks like Cheetah Run, the target state is not singular, making it difficult to specify ${C}_{term}$ .
With the motivation of setting the target state to somewhere the expert would be in the next time step, we train a neural network model ${\mathcal{N}}_{\Psi } : {S}^{\mathbf{D}} \rightarrow {S}^{\mathbf{D}}$ with trainable parameters $\Psi$ on ${\mathcal{X}}^{\mathbf{D}}$ to predict the next state ${s}_{t + 1}^{\mathbf{D}}$ given the current state ${s}_{t}^{\mathbf{D}}$ .

<graphics>

Figure 1: Physical properties of the imitators relative to the demonstrators in our experiments. We have 4 imitators each for Cartpole-Balance and Pendulum-Standup. In the case of Cheetah-Run, we have 12 imitators with different levels of disability and different torso-masses, as denoted by the set product " $\times$ " in the figure.

Our algorithm, GAN-MPC, starts by pre-training the dynamics model of the imitator on $\mathcal{D}$ for a small number of epochs. In the main training loop, in the first step, we let the imitator interact with the environment for $K$ time steps and use this data to update the dynamics model by running a small number of epochs ${N}^{dyn}$ of training. Next, the discriminator network is trained on ${\mathcal{D}}^{s}$ and the imitator's state trajectories. In the final step, the learnable parameters of the MPC policy and the relative weight of the engineered and learnable cost components are updated slowly [21].

§ 4 EXPERIMENTS

Our experimental study aims to understand whether GAN-MPC can learn an expert's skills by trying to visit the same sequence of states and planning an appropriate sequence of actions, even though the imitator's actions may be different from the expert's due to differences in dynamics. We evaluate the efficacy of GAN-MPC on three continuous control tasks from the DeepMind Control suite: CartPole-Balance, Pendulum-Standup, and Cheetah Run. For each task, we train a SAC agent for sampling demonstrator trajectories. We choose a set of imitator agents that have similar morphologies to the demonstrators but different physical properties, as described in Figure 1. We compare the performance of our proposed algorithm (GAN-MPC) with Behavioral Cloning (BC) and two Learnable-MPC formulations that minimize the L2 distance between the demonstrator and imitator trajectories: a) L2-MPC-SA, which matches state-action trajectories, and b) L2-MPC-S, which matches state-only trajectories of the demonstrator and the imitator. In many practical applications, the entire state space of the demonstrator may not be observable or the state spaces of the demonstrator and the imitator may only overlap partially. We also study this case in the Cheetah Run task environment.

In all experiments, a training set of 50 trajectories is collected from the demonstrator. The L2-MPC-SA, L2-MPC-S and GAN-MPC imitators are allowed to interact with the environment for a total of 5000 steps for Cartpole-Balance and Pendulum-Swingup, and 10000 steps for Cheetah-Run. The performance of each agent is measured by rolling out 50 trajectories with different random seeds and computing the average trajectory reward ${R}^{\tau }$ . We measure the performance of the imitators in terms of the average trajectory reward relative to the demonstrator, ${\widetilde{R}}^{\tau } = \frac{{R}_{\text{imitator }}^{\tau }}{{R}_{\text{demonstrator }}^{\tau }}$ . Figures 2, 3 and 4 provide a summary of the results. The bars represent means and the whiskers represent standard deviations. We observe that GAN-MPC outperforms or matches the baselines in most of the settings.
We also observe that the performance of GAN-MPC degrades gracefully (like most of the baselines) as the dynamics of the imitator become more and more different from the demonstrator's. In our experiments on Cheetah-Run, we observe that the disabled imitators, in their quest to learn the fit demonstrator's skills, learn alternative strategies to work around their disabilities. This establishes GAN-MPC as a viable step towards achieving the goal of learning skills from non-identical experts without having to copy their actions. In Figure 4, we also observe that under partial observability of the demonstrator's state space, the GAN-MPC agents ("GAN-MPC: ${S}^{\mathbf{D}} \subset {S}^{\mathbf{I}}$ ") are able to learn the desired behavior and outperform the baselines that have access to the full state observations. This shows the viability of GAN-MPC as a method to learn skills from experts with non-identical dynamics and partial observability of their state spaces.

<graphics>

Figure 2: Results of the Pendulum-Swingup experiment. The imitators are denoted by ${P}_{x}$ , where $P$ stands for pole mass and $x = {P}_{\text{imitator }}/{P}_{\text{demonstrator }}$ .

<graphics>

Figure 3: Results of the Cartpole-Balance experiment. The imitators are denoted by ${P}_{x}{C}_{y}{D}_{z}$ , where $P, C$ and $D$ stand for pole mass, cart mass and cart dimension, respectively. The subscripts $x, y$ and $z$ denote ratios relative to the demonstrator, e.g., $x = {P}_{\text{imitator }}/{P}_{\text{demonstrator }}$ . The legend of Figure 2 has been followed.

<graphics>

Figure 4: Results of the Cheetah-Run experiment. The captions of the sub-figures mention the "torso-mass" of the imitators relative to the demonstrator. As described in Section 4 and Figure 1, we have three categories of imitators in terms of disability: No Disability (ND), Front Ankle broken (FA) and Back Ankle broken (BA). All the agents except "GAN-MPC: ${S}^{\mathbf{D}} \subset {S}^{\mathbf{I}}$ " are trained on the same set of demonstrations ${\mathcal{X}}_{s}^{\mathbf{D}}$ . As described in Section 4, "GAN-MPC: ${S}^{\mathbf{D}} \subset {S}^{\mathbf{I}}$ " is trained on ${\mathcal{X}}_{s}^{\mathbf{D}}$ , but only a subset of the state variables are exposed.

§ 5 CONCLUSION

In this paper, we study imitation learning of MPC policies with parameterised cost functions. We consider the practical challenges of mismatch in the dynamics of the demonstrator and the imitator agents and partial observability of the state space of the demonstrator. We propose a novel approach called GAN-MPC that minimizes the statistical divergence between state-trajectories of the demonstrator and the imitator using the GAN framework. Experiments on continuous control tasks of the DeepMind Control suite demonstrate the viability of the proposed method. The GAN-MPC framework needs significantly fewer samples of real-world interaction of the imitator compared to RL-based methods, and this makes it viable for real-world applications.
\ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/3LnP1W8pKm/Initial_manuscript_md/Initial_manuscript.md b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/3LnP1W8pKm/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..cc66eebc80c3792a9a2dcec3c13573b0ca07deb1 --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/3LnP1W8pKm/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,213 @@ +# Euclidean Equivariant Models for Generative Graphical Inverse Kinematics

Oliver Limoyo, ${}^{1, \dagger }$ Filip Marić, ${}^{1,2, \dagger }$ Matthew Giamou, ${}^{1}$ Petra Alexson, ${}^{1}$ Ivan Petrović, ${}^{2}$ and Jonathan Kelly ${}^{1}$ ${}^{ \dagger }$ Denotes equal contribution.

${}^{1}$ Institute for Aerospace Studies, University of Toronto,

${}^{2}$ Laboratory for Autonomous Systems and Mobile Robotics, University of Zagreb

Abstract-Quickly and reliably finding accurate inverse kinematics (IK) solutions remains a challenging problem for robotic manipulation. Existing numerical solvers typically only produce a single solution and rely on local search techniques to minimize highly nonconvex objective functions. Recently, learning-based approaches that approximate the entire feasible set of solutions have shown promise as a means to generate multiple fast and accurate IK results in parallel. However, existing learning-based techniques have a significant drawback: each robot of interest requires a specialized model that must be trained from scratch. To address this key shortcoming, we investigate a novel distance-geometric robot representation coupled with a graph structure that allows us to leverage the flexibility of graph neural networks (GNNs). We use this approach to train a generative graphical inverse kinematics (GGIK) solver that is able to produce a large number of diverse solutions in parallel while also generalizing well: a single learned model can be used to produce IK solutions for a variety of different robots. The graphical formulation elegantly exposes the symmetry and Euclidean equivariance of the IK problem, stemming from the spatial nature of robot manipulators. We exploit this symmetry by explicitly encoding it into the architecture of our learned model, yielding a flexible solver that is able to produce IK solutions for multiple robots.

## I. INTRODUCTION

Robotic manipulation tasks are naturally defined in terms of end-effector poses (e.g., for bin-picking or path following). However, the configuration of a manipulator is typically specified in terms of joint angles, and determining the joint configuration(s) that correspond to a given end-effector pose requires solving the inverse kinematics (IK) problem. For redundant manipulators (i.e., those with more than six degrees of freedom or DOF), target poses may be reachable by an infinite set of feasible configurations. While redundancy allows high-level algorithms such as motion planners to choose configurations that best fit the overall task, it makes solving IK substantially more involved.

Since the full set of IK solutions cannot, in general, be derived analytically for redundant manipulators, individual configurations reaching a target pose are found by locally searching the configuration space using numerical optimization methods and geometric heuristics. These limitations have motivated the use of learned models that approximate the entire feasible set of solutions.
In terms of success rate, learned models that output individual solutions are able to compete with the best numerical IK solvers when high accuracy is not required [18]. Data-driven methods are also useful for integrating abstract criteria such as "human-like" poses or motions [2]. Generative approaches [7, 14] have demonstrated the ability to rapidly produce a large number of approximate IK solutions and even model the entire feasible set for specific robots [1]. Unfortunately, these learned models, parameterized by deep neural networks (DNNs), require specific configuration and end-effector input-output vector pairs for training (by design). In turn, it is not possible to generalize learned solutions to robots that vary in link geometry and DOF. Ultimately, this drawback limits the utility of learning for IK over well-established numerical methods that are easier to implement and generalize [3].

In this paper, we describe a novel generative inverse kinematics solver and explain its capacity to simultaneously represent general (i.e., not tied to a single robot manipulator model or geometry) IK mappings and to produce approximations of entire feasible sets of solutions. In contrast to existing DNN-based approaches [1, 7, 10, 14, 18], we explore a new path towards learning generalized IK by adopting a graphical model of robot kinematics [12, 13]. This graph-based description allows us to make use of graph neural networks (GNNs) to capture varying robot geometries and DOF within a single model. Furthermore, crucial to the success of our method, the graphical formulation exposes the symmetry and Euclidean equivariance of the IK problem that stems from the spatial nature of robot manipulators. We exploit this symmetry by encoding it into the architecture of our learned model, which we call GGIK (for generative graphical inverse kinematics), to learn accurate IK solutions.

## II. DISTANCE-GEOMETRIC GRAPH REPRESENTATION OF INVERSE KINEMATICS

The mapping ${IK} : \mathcal{T} \rightarrow \mathcal{C}$ defines the inverse kinematics of the robot, connecting a target pose $\mathbf{T} \in \mathrm{{SE}}\left( 3\right)$ to one or more feasible configurations $\mathbf{\theta } \in \mathcal{C}$ . In this paper, we consider the associated problem of determining this mapping for manipulators with $n > 6$ DOF (also known as redundant manipulators), where each end-effector pose corresponds to a set of configurations

$$
{IK}\left( \mathbf{T}\right) = \{ \mathbf{\theta } \in \mathcal{C} \mid {FK}\left( \mathbf{\theta }\right) = \mathbf{T}\} \tag{1}
$$

that we refer to as the full set of IK solutions.

We eschew the common angle-based representation of the configuration space in favour of a distance-geometric model of robotic manipulators comprised of revolute joints [13]. This allows us to represent configurations $\mathbf{\theta }$ as complete graphs $G = \left( {V, E}\right)$ . The edges $E$ are weighted by distances $d$ between a collection of $N$ points $\mathbf{p} = {\left\{ {\mathbf{p}}_{i}\right\} }_{i = 1}^{N} \in {\mathbb{R}}^{N \times D}$ indexed by vertices $V$ , where $D \in \{ 2,3\}$ is the workspace dimension. The coordinates of points corresponding to these distances are recovered by solving the distance geometry problem (DGP):

![01964103-4213-7ebd-852b-f3ac7f9dcb3f_1_193_145_1413_343_0.jpg](images/01964103-4213-7ebd-852b-f3ac7f9dcb3f_1_193_145_1413_343_0.jpg)
Fig. 1: The process of defining an IK problem as an incomplete or partial graph $\widetilde{G}$ of inter-point distances. (a) Conventional forward kinematics model parameterized by joint angles and joint rotation axes. (b) The point placement procedure for the distance-based description, first introduced in [12]. Note that the four distances between points associated with pairs of consecutive joints remain constant regardless of the configuration. (c) A structure graph of the robot based on inter-point distances. (d) Addition of distances describing the robot end-effector pose using auxiliary points to define the base coordinate system, which completes the graphical IK problem description. All configurations of the robot reaching this end-effector pose will result in a partial graph of distances shown in (c) and (d).

![01964103-4213-7ebd-852b-f3ac7f9dcb3f_1_360_718_1083_363_0.jpg](images/01964103-4213-7ebd-852b-f3ac7f9dcb3f_1_360_718_1083_363_0.jpg)

Fig. 2: Our GGIK solver is based on the CVAE framework. ${\mathrm{{GNN}}}_{enc}$ encodes a complete manipulator graph into a latent graph representation and ${\mathrm{{GNN}}}_{\text{dec }}$ "reconstructs" it. The prior network, ${\mathrm{{GNN}}}_{\text{prior }}$ , encodes the partial graph into a latent embedding that is near the embedding of the full graph. At test time, we decode the latent embedding of a partial graph into a complete graph to generate a solution.

Distance Geometry Problem ([11]). Given an integer $D > 0$ , a set of vertices $V$ , and a simple undirected graph $G = \left( {V, E}\right)$ whose edges $\{ u, v\} \in E$ are assigned non-negative weights $\{ u, v\} \mapsto {d}_{u, v} \in {\mathbb{R}}_{ + }$ , find a function $p : V \rightarrow {\mathbb{R}}^{D}$ such that the Euclidean distances between neighbouring vertices match their edges’ weights (i.e., $\forall \{ u, v\} \in E,\parallel p\left( u\right) - p\left( v\right) \parallel = {d}_{u, v}$ ).

It was shown in [12] that any solution $\mathbf{p} \in {DGP}\left( G\right)$ may be mapped to a unique ${}^{1}$ corresponding configuration $\mathbf{\theta }$ . Crucially, this allows us to construct a partial graph $\widetilde{G} = \left( {V,\widetilde{E}}\right)$ , with $\widetilde{E} \subset E$ corresponding to distances determined by an end-effector pose $\mathbf{T}$ and the robot’s structure (i.e., those common to all elements of ${IK}\left( \mathbf{T}\right)$ ), where each $\mathbf{p} \in {DGP}\left( \widetilde{G}\right)$ corresponds to a particular IK solution $\mathbf{\theta } \in {IK}\left( \mathbf{T}\right)$ . The generic procedure for constructing $\widetilde{G}$ is demonstrated for a simple manipulator in Figure 1. For a more thorough overview of the distance-geometric graph representation, please see [12].

For a complete graph $G$ , we define the GNN node features as a combination of point positions $\mathbf{p} = {\left\{ {\mathbf{p}}_{i}\right\} }_{i = 1}^{N} \in {\mathbb{R}}^{N \times D}$ and general features $\mathbf{h} = {\left\{ {\mathbf{h}}_{i}\right\} }_{i = 1}^{N}$ , where each ${\mathbf{h}}_{i}$ is a feature vector containing extra information about the node. We use a three-dimensional one-hot-encoding, ${\mathbf{h}}_{i} \in \{ 0,1{\} }^{3}$ and $\mathop{\sum }\limits_{{j = 1}}^{3}{h}_{i, j} = 1$ , that indicates whether the node defines the base coordinate system, a general joint or link, or the end-effector.
Similarly, we define the $M$ known point positions of the partial graph $\widetilde{G}$ as $\widetilde{\mathbf{p}} = {\left\{ {\widetilde{\mathbf{p}}}_{i}\right\} }_{i = 1}^{M} \in {\mathbb{R}}^{M \times D}$ and set the remaining unknown $N - M$ node positions to zero. The partial graph shares the same general features $\mathbf{h}$ as the complete graph. In both cases, the edge features are simply the corresponding inter-point distances between known node point positions, or initialized to zero if unknown.

## III. GENERATIVE GRAPHICAL INVERSE KINEMATICS

At its core, GGIK is a CVAE model [16] that parameterizes the conditional distribution $p\left( {G \mid \widetilde{G}}\right)$ using GNNs. By introducing an unobserved stochastic latent variable $\mathbf{z}$ , our generative model is defined as

$$
{p}_{\gamma }\left( {G \mid \widetilde{G}}\right) = \int {p}_{\gamma }\left( {G \mid \widetilde{G},\mathbf{z}}\right) {p}_{\gamma }\left( {\mathbf{z} \mid \widetilde{G}}\right) d\mathbf{z}, \tag{2}
$$

where ${p}_{\gamma }\left( {G \mid \widetilde{G},\mathbf{z}}\right)$ is the likelihood of the full graph, ${p}_{\gamma }\left( {\mathbf{z} \mid \widetilde{G}}\right)$ is the prior, and $\gamma$ are the learnable generative parameters. The likelihood is given by

$$
{p}_{\gamma }\left( {G \mid \widetilde{G},\mathbf{z}}\right) = \mathop{\prod }\limits_{{i = 1}}^{N}{p}_{\gamma }\left( {{\mathbf{p}}_{i} \mid \widetilde{G},{\mathbf{z}}_{i}}\right) ,\text{ with } \tag{3}
$$

$$
{p}_{\gamma }\left( {{\mathbf{p}}_{i} \mid \widetilde{G},{\mathbf{z}}_{i}}\right) = \mathcal{N}\left( {{\mathbf{p}}_{i} \mid {\mathbf{\mu }}_{i},\mathbf{I}}\right) ,
$$

---

${}^{1}\mathrm{{Up}}$ to any Euclidean transformation of $\mathbf{p}$ , since distances are invariant to such a transformation.

---

![01964103-4213-7ebd-852b-f3ac7f9dcb3f_2_131_149_1463_293_0.jpg](images/01964103-4213-7ebd-852b-f3ac7f9dcb3f_2_131_149_1463_293_0.jpg)

Fig. 3: Sampled conditional distributions from GGIK for various robotic manipulators. From left to right: KUKA IIWA, Franka Emika Panda, Schunk LWA4D, Schunk LWA4P, and Universal Robots UR10. Note that the end-effector poses are nearly identical in all cases, highlighting kinematic redundancy. Our model is able to capture the discrete solution set for the two non-redundant robots as well.

where $\mathbf{p} = {\left\{ {\mathbf{p}}_{i}\right\} }_{i = 1}^{N}$ are the positions of all $N$ nodes, $\mathbf{z} = {\left\{ {\mathbf{z}}_{i}\right\} }_{i = 1}^{N}$ are the latent embeddings of each node, and $\mathbf{\mu } = {\left\{ {\mathbf{\mu }}_{i}\right\} }_{i = 1}^{N}$ are the predicted means of the distribution of node positions. We parameterize the likelihood distribution with a GNN decoder; in other words, $\mathbf{\mu }$ is the output of ${\operatorname{GNN}}_{dec}\left( {\widetilde{G},\mathbf{z}}\right)$ . In practice, for the input of ${\operatorname{GNN}}_{dec}\left( \cdot \right)$ , we concatenate each latent node with the respective position node features $\widetilde{\mathbf{p}}$ of the original partial graph $\widetilde{G}$ when available and the general features $\mathbf{h}$ . If unavailable, we concatenate the latent nodes with the initialized point positions set to zero.
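To make these input conventions concrete, the following is a minimal sketch of how the node and edge features described above could be assembled; the function name, tensor shapes, and helper structure are illustrative assumptions, not code from the paper.

```python
import torch

def build_graph_features(points, node_types, known_mask):
    """Assemble GNN inputs for the distance-geometric IK graph (illustrative).

    points:     (N, D) float tensor of point positions p_i
    node_types: (N,) long tensor in {0: base frame, 1: joint/link, 2: end-effector}
    known_mask: (N,) bool tensor, True for the M known points of the partial graph
    """
    # General features h: three-dimensional one-hot node-type encoding.
    h = torch.nn.functional.one_hot(node_types, num_classes=3).float()
    # Unknown node positions are set to zero, as described in the text.
    p = torch.where(known_mask[:, None], points, torch.zeros_like(points))
    # Edge features: inter-point distances where both endpoints are known,
    # initialized to zero otherwise.
    dist = torch.cdist(points, points)
    both_known = known_mask[:, None] & known_mask[None, :]
    edge = torch.where(both_known, dist, torch.zeros_like(dist))
    return h, p, edge
```

Passing an all-True mask recovers the features of the complete graph $G$ , while masking the $N - M$ unknown points yields those of the partial graph $\widetilde{G}$ .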
The prior distribution is given by

$$
{p}_{\gamma }\left( {\mathbf{z} \mid \widetilde{G}}\right) = \mathop{\prod }\limits_{{i = 1}}^{N}{p}_{\gamma }\left( {{\mathbf{z}}_{i} \mid \widetilde{G}}\right) ,\text{ with } \tag{4}
$$

$$
{p}_{\gamma }\left( {{\mathbf{z}}_{i} \mid \widetilde{G}}\right) = \mathop{\sum }\limits_{{k = 1}}^{K}{\pi }_{k, i}\mathcal{N}\left( {{\mathbf{z}}_{i} \mid {\mathbf{\mu }}_{k, i},\operatorname{diag}\left( {\mathbf{\sigma }}_{k, i}^{2}\right) }\right) .
$$

Here, we parameterize the prior as a Gaussian mixture model with $K$ components. Each Gaussian is in turn parameterized by a mean ${\mathbf{\mu }}_{k} = {\left\{ {\mathbf{\mu }}_{k, i}\right\} }_{i = 1}^{N}$ , diagonal covariance ${\mathbf{\sigma }}_{k} = {\left\{ {\mathbf{\sigma }}_{k, i}\right\} }_{i = 1}^{N}$ , and a mixing coefficient ${\mathbf{\pi }}_{k} = {\left\{ {\pi }_{k, i}\right\} }_{i = 1}^{N}$ , where $\mathop{\sum }\limits_{{k = 1}}^{K}{\pi }_{k, i} = 1,\forall i = 1,\ldots , N$ . We chose a mixture model to have an expressive prior capable of capturing the latent distribution of multiple solutions. We parameterize the prior distribution with a multi-headed GNN encoder ${\mathrm{{GNN}}}_{\text{prior }}\left( \widetilde{G}\right)$ that outputs parameters ${\left\{ {\mathbf{\mu }}_{k},{\mathbf{\sigma }}_{k},{\mathbf{\pi }}_{k}\right\} }_{k = 1}^{K}$ .

Algorithm 1: GGIK

---

Parameters: $\widetilde{G},{\mathbf{T}}_{\text{goal }}, N, M$

Result: Solution configurations with the lowest pose error ${\mathbf{\theta }}^{ * } \in {\mathbb{R}}^{M \times {n}_{\text{joints }}}$ .

${\mathbf{z}}_{N} \sim {p}_{\gamma }\left( {\mathbf{z} \mid \widetilde{G}}\right) \; \vartriangleright$ Sample $N$ latents $\mathbf{z}$ from ${\mathrm{{GNN}}}_{\text{prior }}$ .

${\mathbf{p}}_{N} \sim {p}_{\gamma }\left( {\mathbf{p} \mid \widetilde{G},{\mathbf{z}}_{N}}\right) \; \vartriangleright$ Get $N$ solutions via ${\mathrm{{GNN}}}_{\text{dec }}$ .

${\mathbf{\theta }}_{N} \leftarrow$ fromPoints $\left( {\mathbf{p}}_{N}\right) \; \vartriangleright$ Recover $N$ configurations.

${\mathbf{\theta }}^{ * } \leftarrow$ selectSolution $\left( {{\mathbf{T}}_{\text{goal }},{\mathbf{\theta }}_{N}, M}\right) \; \vartriangleright$ Choose best $M$ .

---

The goal of learning is to maximize the marginal likelihood or evidence of the data as shown in Eq. 2. As commonly done in the variational inference literature [8], we instead maximize a tractable evidence lower bound (ELBO):

$$
\mathcal{L} = {\mathbb{E}}_{{q}_{\phi }\left( {\mathbf{z} \mid G}\right) }\left\lbrack {\log p\left( {G \mid \widetilde{G},\mathbf{z}}\right) }\right\rbrack - {KL}\left( {{q}_{\phi }\left( {\mathbf{z} \mid G}\right) \parallel {p}_{\gamma }\left( {\mathbf{z} \mid \widetilde{G}}\right) }\right) , \tag{5}
$$

where ${KL}\left( {\cdot \parallel \cdot }\right)$ is the Kullback-Leibler (KL) divergence and the inference model ${q}_{\phi }\left( {\mathbf{z} \mid G}\right)$ with learnable parameters $\phi$ is

$$
{q}_{\phi }\left( {\mathbf{z} \mid G}\right) = \mathop{\prod }\limits_{{i = 1}}^{N}{q}_{\phi }\left( {{\mathbf{z}}_{i} \mid G}\right) ,\;\text{ with } \tag{6}
$$

$$
{q}_{\phi }\left( {{\mathbf{z}}_{i} \mid G}\right) = \mathcal{N}\left( {{\mathbf{z}}_{i} \mid {\mathbf{\mu }}_{i},\operatorname{diag}\left( {\mathbf{\sigma }}_{i}^{2}\right) }\right) .
$$

As with the prior distribution, we parameterize the inference distribution with a multi-headed GNN encoder, ${\mathrm{{GNN}}}_{\text{enc }}\left( G\right)$ , that outputs parameters $\mathbf{\mu } = {\left\{ {\mathbf{\mu }}_{i}\right\} }_{i = 1}^{N}$ and $\mathbf{\sigma } = {\left\{ {\mathbf{\sigma }}_{i}\right\} }_{i = 1}^{N}$ . We summarize the full sampling procedure in Algorithm 1 and we visualize samples of these IK solutions in Figure 3. This procedure can be done quickly and in parallel on the GPU.

## IV. $\mathrm{E}\left( n\right)$ EQUIVARIANCE AND SYMMETRY

We are interested in mapping partial graphs $\widetilde{G}$ into full graphs $G$ . Once trained, our model maps partial point sets to full point sets $f : {\mathbb{R}}^{M \times D} \rightarrow {\mathbb{R}}^{N \times D}$ , where $f$ is a combination of networks ${\mathrm{{GNN}}}_{\text{prior }}$ and ${\mathrm{{GNN}}}_{\text{dec }}$ applied sequentially. The point positions (i.e., $\mathbf{p}$ and $\widetilde{\mathbf{p}}$ ) of each node in the distance geometry problem contain underlying geometric relationships that we would like to preserve with our choice of architecture. Most importantly, the point sets are equivariant to the Euclidean group $\mathrm{E}\left( n\right)$ of rotations, translations, and reflections. Let $S : {\mathbb{R}}^{M \times D} \rightarrow {\mathbb{R}}^{M \times D}$ be a transformation consisting of some combination of rotations, translations and reflections on the initial partial point set $\widetilde{\mathbf{p}}$ . Then, there exists an equivalent transformation $T : {\mathbb{R}}^{N \times D} \rightarrow {\mathbb{R}}^{N \times D}$ on the complete point set $\mathbf{p}$ such that:

$$
f\left( {S\left( \widetilde{\mathbf{p}}\right) }\right) = T\left( {f\left( \widetilde{\mathbf{p}}\right) }\right) . \tag{7}
$$

To leverage this structure or geometric prior in the data, we use $\mathrm{E}\left( n\right)$ -equivariant graph neural networks (EGNNs) [15] for ${\mathrm{{GNN}}}_{\text{dec }},{\mathrm{{GNN}}}_{\text{enc }}$ , and ${\mathrm{{GNN}}}_{\text{prior }}$ . The EGNN layer splits up the node features into an equivariant coordinate or position-based part and a non-equivariant part. We treat the positions $\mathbf{p}$ and $\widetilde{\mathbf{p}}$ as the equivariant portion and the general features $\mathbf{h}$ as non-equivariant. As an example, a single EGNN layer $l$ from ${\mathrm{{GNN}}}_{\text{enc }}$ is then defined as:

$$
{\mathbf{m}}_{ij} = {\phi }_{e}\left( {{\mathbf{h}}_{i}^{l},{\mathbf{h}}_{j}^{l},{\begin{Vmatrix}{\mathbf{p}}_{i}^{l} - {\mathbf{p}}_{j}^{l}\end{Vmatrix}}^{2}}\right)
$$

$$
{\mathbf{p}}_{i}^{l + 1} = {\mathbf{p}}_{i}^{l} + C\mathop{\sum }\limits_{{j \neq i}}\left( {{\mathbf{p}}_{i}^{l} - {\mathbf{p}}_{j}^{l}}\right) {\phi }_{x}\left( {\mathbf{m}}_{ij}\right) \tag{8}
$$

$$
{\mathbf{m}}_{i} = \mathop{\sum }\limits_{{j \neq i}}{\mathbf{m}}_{ij}
$$

$$
{\mathbf{h}}_{i}^{l + 1} = {\phi }_{h}\left( {{\mathbf{h}}_{i}^{l},{\mathbf{m}}_{i}}\right) ,
$$

where ${\mathbf{m}}_{ij} \in {\mathbb{R}}^{{f}_{m}}$ is a message embedding of dimension ${f}_{m}$ , ${\phi }_{x} : {\mathbb{R}}^{{f}_{m}} \rightarrow \mathbb{R}$ , $C = \frac{1}{N - 1}$ normalizes the sum by the number of elements, and ${\phi }_{e}$ and ${\phi }_{h}$ are typical edge and node operations approximated by multilayer perceptrons (MLPs). For more details about the model and a proof of the equivariance property, we refer readers to [15].
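To make this concrete, below is a minimal PyTorch sketch of one such layer on a fully connected graph; layer widths and activation functions are illustrative assumptions, and [15] remains the authoritative reference implementation.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """One E(n)-equivariant layer, following Eq. (8): positions p are
    updated equivariantly, general features h invariantly."""

    def __init__(self, f_h: int, f_m: int):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * f_h + 1, f_m), nn.SiLU())
        self.phi_x = nn.Linear(f_m, 1)
        self.phi_h = nn.Sequential(nn.Linear(f_h + f_m, f_h), nn.SiLU())

    def forward(self, p, h):  # p: (N, D) positions, h: (N, f_h) features
        n = p.shape[0]
        diff = p[:, None, :] - p[None, :, :]         # (N, N, D): p_i - p_j
        dist2 = (diff ** 2).sum(-1, keepdim=True)    # squared distances
        h_i = h[:, None, :].expand(n, n, -1)
        h_j = h[None, :, :].expand(n, n, -1)
        m = self.phi_e(torch.cat([h_i, h_j, dist2], -1))  # messages m_ij
        mask = 1.0 - torch.eye(n).unsqueeze(-1)           # exclude j == i
        p_new = p + (diff * self.phi_x(m) * mask).sum(1) / (n - 1)  # C = 1/(N-1)
        h_new = self.phi_h(torch.cat([h, (m * mask).sum(1)], -1))
        return p_new, h_new
```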
| Robot | Err. Pos. [mm]: mean | min | max | ${\mathrm{Q}}_{1}$ | ${\mathrm{Q}}_{3}$ | Err. Rot. [deg]: mean | min | max | ${\mathrm{Q}}_{1}$ | ${\mathrm{Q}}_{3}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KUKA | 5.3 | 1.7 | 9.7 | 3.8 | 6.6 | 0.4 | 0.1 | 0.6 | 0.3 | 0.5 |
| Lwa4d | 4.7 | 1.4 | 9.1 | 3.2 | 5.9 | 0.4 | 0.1 | 0.6 | 0.3 | 0.5 |
| Lwa4p | 5.7 | 2.2 | 10.2 | 4.1 | 7.1 | 0.4 | 0.1 | 0.7 | 0.3 | 0.6 |
| Panda | 12.3 | 3.2 | 25.5 | 7.9 | 15.9 | 1.0 | 0.2 | 1.8 | 0.7 | 1.3 |
| UR10 | 9.2 | 4.2 | 14.7 | 7.3 | 11.1 | 0.5 | 0.2 | 0.9 | 0.4 | 0.7 |
| UR10 with DT [18] | 35.0 | - | - | - | - | 16.0 | - | - | - | - |
| Panda with IKFlow [1] | 7.7 | - | - | - | - | 2.8 | - | - | - | - |
| Panda with IKNet [4] | 31.0 | - | - | 13.5 | 48.6 | - | - | - | - | - |
TABLE I: Performance of GGIK on 2,000 randomly generated IK problems for a single model trained on five different robotic manipulators. Taking 32 samples from the learned distribution, the error statistics are presented as the mean, the mean per-problem minimum and maximum error, and the two quartiles of the distribution. Note that all solutions were produced by a single model for GGIK. We include baseline results from various models that were trained on a single robot type. Dashed results were unavailable.
| Model Name | Err. Pos. [mm]: mean | min | max | ${\mathrm{Q}}_{1}$ | ${\mathrm{Q}}_{3}$ | Err. Rot. [deg]: mean | min | max | ${\mathrm{Q}}_{1}$ | ${\mathrm{Q}}_{3}$ | Test ELBO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EGNN | 4.6 | 1.5 | 8.5 | 3.3 | 5.8 | 0.4 | 0.1 | 0.6 | 0.3 | 0.4 | -0.05 |
| MPNN | 143.2 | 62.9 | 273.7 | 113.1 | 169.1 | 17.7 | 5.3 | 13.6 | 21.6 | 34.1 | -8.3 |
| GAT | - | - | - | - | - | - | - | - | - | - | -12.41 |
| GCN | - | - | - | - | - | - | - | - | - | - | -12.42 |
| GRAPHsage | - | - | - | - | - | - | - | - | - | - | -10.5 |
TABLE II: Comparison of different network architectures. EGNN outperforms existing architectures that are not equivariant in terms of overall accuracy and test ELBO. Dashed results are models whose output point sets were too far from a valid joint configuration, causing the configuration reconstruction procedure to diverge.

## V. EXPERIMENTS

We evaluate GGIK's capability to learn accurate solutions and generalize within a class of manipulator structures, and investigate the importance of capturing the Euclidean equivariance of the graphical formulation of inverse kinematics.

## A. Accuracy and Generalization

In Table I, we evaluate the accuracy of GGIK for a variety of existing commercial manipulators featuring different structures and numbers of joints: the KUKA IIWA, Schunk LWA4D, Schunk LWA4P, Universal Robots UR10, and Franka Emika Panda. We trained a single instance of GGIK on a total of 2,560,000 IK problems uniformly distributed over all five manipulators. We compare GGIK to other learned IK baselines [1, 4, 18] that are trained specifically for each robot. GGIK achieves better or comparable accuracy to all baselines despite generalizing across multiple manipulator types.

## B. Ablation Study on the Equivariant Network Architecture

We conducted an ablation experiment to evaluate the importance of capturing the underlying $\mathrm{E}\left( n\right)$ equivariance of the distance geometry problem (defined in Section II) in our learning architecture. We compare the use of the EGNN network [15] to four common and popular GNN layers that are not $\mathrm{E}\left( n\right)$ -equivariant: GRAPHsage [6], GAT [17], GCN [9] and MPNN [5]. We match the number of parameters for each GNN architecture as closely as possible and keep all other experimental parameters fixed. Out of the five different architectures that we compare, only the EGNN and MPNN output point sets that can be successfully mapped to valid joint configurations. Point sets that are too far from those representing a valid joint configuration result in the configuration reconstruction procedure diverging. The equivariant EGNN model outperforms all other models in terms of the ELBO value attained on a held-out test set.

## VI. CONCLUSION

GGIK provides a framework for learned 'general' IK, that is, a solver (or initializer) that can provide multiple diverse solutions and can be used with any manipulator in a way that complements or replaces numerical optimization. The graphical formulation of IK naturally leads to the use of a GNN for learning, since the GNN can accept problems for arbitrary robots with different kinematic structures and numbers of degrees of freedom. Our formulation also exposes the Euclidean equivariance of the problem. We exploit this symmetry by explicitly encoding it into the architecture of our learned model.

## REFERENCES

[1] Barrett Ames, Jeremy Morgan, and George Konidaris. IKFlow: Generating diverse inverse kinematics solutions. IEEE Robotics and Automation Letters, 7(3):7177-7184, 2022. doi: 10.1109/LRA.2022.3181374.
[2] A. Aristidou, J. Lasenby, Y. Chrysanthou, and A. Shamir. Inverse Kinematics Techniques in Computer Graphics: A Survey. Computer Graphics Forum, 37(6):35-58, September 2018. ISSN 0167-7055. doi: 10.1111/cgf.13310.

[3] Patrick Beeson and Barrett Ames. TRAC-IK: An open-source library for improved solving of generic inverse kinematics. In 15th International Conf. on Humanoid Robots (Humanoids), pages 928-935. IEEE, 2015.

[4] Raphael Bensadoun, Shir Gur, Nitsan Blau, and Lior Wolf. Neural inverse kinematic. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 1787-1797, Baltimore, Maryland, USA, July 2022. PMLR.

[5] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pages 1263-1272. PMLR, 2017.

[6] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.

[7] Chi-Kai Ho and Chung-Ta King. Selective inverse kinematics: A novel approach to finding multiple solutions fast for high-dof robotic. arXiv preprint arXiv:2202.07869, 2022.

[8] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Yoshua Bengio and Yann LeCun, editors, International Conference on Learning Representations, 2014.

[9] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations, 2017.

[10] Teguh Santoso Lembono, Emmanuel Pignat, Julius Jankowski, and Sylvain Calinon. Learning constrained distributions of robot configurations with generative adversarial network. IEEE Robotics and Automation Letters, 6(2):4233-4240, 2021.

[11] Leo Liberti, Carlile Lavor, Nelson Maculan, and Antonio Mucherino. Euclidean Distance Geometry and Applications. SIAM Rev., 56(1):3-69, January 2014. ISSN 0036-1445, 1095-7200. doi: 10.1137/120875909.

[12] Filip Maric, Matthew Giamou, Adam W. Hall, Soroush Khoubyarian, Ivan Petrovic, and Jonathan Kelly. Riemannian optimization for distance-geometric inverse kinematics. IEEE Transactions on Robotics, 2021. doi: 10.1109/TRO.2021.3123841. URL https://arxiv.org/abs/2108.13720.

[13] J.M. Porta, L. Ros, F. Thomas, and C. Torras. A branch-and-prune solver for distance constraints. IEEE Trans. Robot., 21:176-187, April 2005.

[14] Hailin Ren and Pinhas Ben-Tzvi. Learning inverse kinematics and dynamics of a robotic manipulator using generative adversarial networks. Robotics and Autonomous Systems, 124:103386, 2020.

[15] Víctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In International Conference on Machine Learning, pages 9323-9332. PMLR, 2021.

[16] Kihyuk Sohn, Xinchen Yan, and Honglak Lee. Learning structured output representation using deep conditional generative models. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, NeurIPS'15, pages 3483-3491, Cambridge, MA, USA, 2015. MIT Press.

[17] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.
[18] Tim von Oehsen, Alexander Fabisch, Shivesh Kumar, and Frank Kirchner. Comparison of Distal Teacher Learning with Numerical and Analytical Methods to Solve Inverse Kinematics for Rigid-Body Mechanisms. arXiv:2003.00225 [cs], February 2020.
\ No newline at end of file
diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/BbFl6GOleK/Initial_manuscript_md/Initial_manuscript.md b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/BbFl6GOleK/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..408d5434c765425fc5db161ddebe8999b5ea9199
--- /dev/null
+++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/BbFl6GOleK/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,571 @@
+# Geometric Algebra Transformers

Author names omitted for anonymous review. Paper ID: 1.

Abstract: Problems involving geometric data arise in a variety of fields, including computer vision, robotics, chemistry, and physics. Such data can take numerous forms, such as points, direction vectors, planes, or transformations, but to date there is no single architecture that can be applied to such a wide variety of geometric types while respecting their symmetries. In this paper we introduce the Geometric Algebra Transformer (GATr), a general-purpose architecture for geometric data. GATr represents inputs, outputs, and hidden states in the projective geometric algebra, which offers an efficient 16-dimensional vector space representation of common geometric objects as well as operators acting on them. GATr is equivariant with respect to $\mathrm{E}\left( 3\right)$ , the symmetry group of 3D Euclidean space. As a transformer, GATr is scalable, expressive, and versatile. In experiments with $n$ -body modeling and robotic planning, GATr shows strong improvements over non-geometric baselines.

## I. INTRODUCTION

From molecular dynamics to astrophysics, from material design to robotics, fields across science and engineering deal with geometric data: points, directions, surfaces, orientations, and so on. The geometric nature of data provides a rich structure: a notion of common operations between geometric types (computing distances between points, applying rotations to orientations, etc.), a well-defined behaviour of data under transformations of a system, and the independence of certain properties of coordinate system choices.
When learning relations from geometric data, incorporating this rich structure into the architecture has the potential to improve the performance, especially in the low-data regime. To implement such an inductive bias, it is useful to first categorize inputs, outputs, and internal data into certain object types, for instance group representations. Next, the functions mapping between these types have certain regularity constraints imposed, for instance based on equivariance [6].

In this spirit, we introduce the Geometric Algebra Transformer (GATr), a general-purpose network architecture for geometric data. GATr brings together three key ideas.

Geometric algebra: To naturally describe both geometric objects as well as their transformations in three-dimensional space, GATr represents data as multivectors of the projective geometric algebra ${\mathbb{G}}_{3,0,1}$ . Geometric algebra is an elegant, versatile and practical mathematical framework for geometrical computations. The particular algebra ${\mathbb{G}}_{3,0,1}$ extends the vector space ${\mathbb{R}}^{3}$ to 16-dimensional multivectors, which can natively represent various geometric types and $\mathrm{E}\left( 3\right)$ poses. In this framework, common interactions between geometric data types can be computed with few operations, in particular the geometric product.

Equivariance: To behave consistently under transformations, GATr is equivariant with respect to $\mathrm{E}\left( 3\right)$ , the symmetry group of three-dimensional space. To this end, we develop several new $\mathrm{E}\left( 3\right)$ -equivariant primitives mapping between multivectors, including equivariant linear maps, an attention mechanism, nonlinearities, and normalization layers.

Transformer: Due to its favorable scaling properties, expressiveness, trainability, and versatility, the transformer architecture [23] has become the de-facto standard for a wide range of problems. GATr is based on the transformer architecture, and hence inherits these benefits.

GATr hence combines two lines of research: the representation of geometric objects with geometric algebra [9, 10, 18], popular in computer graphics and physics and recently gaining traction in deep learning $\left\lbrack {3,{19},{21}}\right\rbrack$ , and the encoding of symmetries through equivariant deep learning [7]. The result, to the best of our knowledge the first $\mathrm{E}\left( 3\right)$ -equivariant architecture with internal geometric algebra representations, is a versatile network for problems involving geometric data. We demonstrate GATr in a robotic planning problem, where it significantly outperforms non-geometric baselines.

## II. GEOMETRIC ALGEBRA IN A NUTSHELL

We begin with the briefest of introductions to geometric algebra. For an in-depth introduction, we point the interested reader to Refs. [9, 10, 18, 19].

Whereas a plain vector space like ${\mathbb{R}}^{3}$ allows us to take linear combinations of elements $x$ and $y$ (vectors), a geometric algebra additionally has a bilinear associative operation: the geometric product, denoted simply by ${xy}$ . By multiplying vectors, one obtains so-called multivectors, which can represent both geometrical objects and operators. Multivectors can be expanded on a multivector basis, characterized by their dimensionality or grade, such as scalars (grade 0), vectors ${e}_{i}$ (grade 1), bivectors ${e}_{i}{e}_{j}$ (grade 2), all the way up to the pseudoscalar ${e}_{1}\cdots {e}_{d}$ (grade $d$ ).
The symmetric and antisymmetric parts of the geometric product are called the interior and exterior (wedge) product. Finally, we will require the dualization operator $x \mapsto {x}^{ * }$ . It acts on basis elements by swapping "empty" and "full" dimensions, e.g. sending ${e}_{1} \mapsto {e}_{23}$ .

In order to represent three-dimensional objects as well as arbitrary rotations and translations acting on them, we work with the projective geometric algebra ${\mathbb{G}}_{3,0,1}\left\lbrack {9,{18},{19}}\right\rbrack$ . Here one adds a fourth homogeneous coordinate ${x}_{0}{e}_{0}$ to the $3\mathrm{D}$ vector space, yielding a ${2}^{4} = {16}$ -dimensional geometric algebra. The metric of ${\mathbb{G}}_{3,0,1}$ is such that ${e}_{0}^{2} = 0$ and ${e}_{i}^{2} = 1$ for $i = 1,2,3$ .

We can use ${\mathbb{G}}_{3,0,1}$ to represent transformations: a vector $u$ represents the reflection of other elements in the hyperplane orthogonal to $u$ . Since any orthogonal transformation is equal to a sequence of reflections, this allows us to express any such transformation as a geometric product of (unit) vectors, $u = {u}_{1}\cdots {u}_{k}$ . These form the Pin group, which turns out to be the double cover of $\mathrm{E}\left( 3\right)$ . In order to apply elements of the Pin group to an arbitrary multivector $x$ , one uses the sandwich product:

$$
{\rho }_{u}\left( x\right) = \left\{ \begin{array}{ll} {ux}{u}^{-1} & \text{ if }u\text{ is even } \\ u\widehat{x}{u}^{-1} & \text{ if }u\text{ is odd } \end{array}\right. \tag{1}
$$
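To ground these definitions, here is a small self-contained sketch of the geometric product at the level of basis blades of ${\mathbb{G}}_{3,0,1}$ , with blades encoded as bitmasks over $\left( {e}_{0},{e}_{1},{e}_{2},{e}_{3}\right)$ ; this is a standard construction shown for illustration, not code from the paper.

```python
# Basis blades of G(3,0,1) as bitmasks over (e0, e1, e2, e3): bit i set
# means basis vector e_i is present. Metric: e0^2 = 0, e1^2 = e2^2 = e3^2 = 1.
METRIC = (0, 1, 1, 1)

def blade_product(a: int, b: int) -> tuple:
    """Geometric product of two basis blades; returns (sign, blade).
    The sign is 0 when the product vanishes along the degenerate e0 direction."""
    sign = 1
    for i in range(4):                        # insert b's vectors one by one
        if not b & (1 << i):
            continue
        if bin(a >> (i + 1)).count("1") % 2:  # count vectors of a jumped over
            sign = -sign
        if a & (1 << i):                      # e_i e_i contracts via metric
            sign *= METRIC[i]
            a &= ~(1 << i)
        else:
            a |= 1 << i
    return sign, a

assert blade_product(0b0001, 0b0001)[0] == 0          # e0 e0 = 0
assert blade_product(0b0010, 0b0010) == (1, 0)        # e1 e1 = 1
assert blade_product(0b0010, 0b0100) == (1, 0b0110)   # e1 e2 = e12
assert blade_product(0b0100, 0b0010) == (-1, 0b0110)  # e2 e1 = -e12
```

A full multivector product sums `blade_product` over all pairs of components; the sandwich product of Eq. (1) is then simply two such products.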
| Object / operator | Scalar: $1$ | Vector: ${e}_{0}$ | Vector: ${e}_{i}$ | Bivector: ${e}_{0i}$ | Bivector: ${e}_{ij}$ | Trivector: ${e}_{0ij}$ | Trivector: ${e}_{123}$ | PS: ${e}_{0123}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Scalar $\lambda \in \mathbb{R}$ | $\lambda$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Plane w/ normal $n \in {\mathbb{R}}^{3}$ , origin shift $d \in \mathbb{R}$ | 0 | $d$ | $n$ | 0 | 0 | 0 | 0 | 0 |
| Line w/ direction $n \in {\mathbb{R}}^{3}$ , orthogonal shift $s \in {\mathbb{R}}^{3}$ | 0 | 0 | 0 | $s$ | $n$ | 0 | 0 | 0 |
| Point $p \in {\mathbb{R}}^{3}$ | 0 | 0 | 0 | 0 | 0 | $p$ | 1 | 0 |
| Pseudoscalar $\mu \in \mathbb{R}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | $\mu$ |
| Reflection through plane w/ normal $n \in {\mathbb{R}}^{3}$ , origin shift $d \in \mathbb{R}$ | 0 | $d$ | $n$ | 0 | 0 | 0 | 0 | 0 |
| Translation $t \in {\mathbb{R}}^{3}$ | 1 | 0 | 0 | $\frac{1}{2}t$ | 0 | 0 | 0 | 0 |
| Rotation expressed as quaternion $q \in {\mathbb{R}}^{4}$ | ${q}_{0}$ | 0 | 0 | 0 | ${q}_{i}$ | 0 | 0 | 0 |
| Point reflection through $p \in {\mathbb{R}}^{3}$ | 0 | 0 | 0 | 0 | 0 | $p$ | 1 | 0 |
TABLE I: Embeddings of common geometric objects and transformations into the projective geometric algebra ${\mathbb{G}}_{3,0,1}$ . The columns show different components of the multivectors with the corresponding basis elements, with $i, j \in \{ 1,2,3\} , j \neq i$ , i.e. ${ij} \in \{ {12},{13},{23}\}$ . For simplicity, we fix gauge ambiguities (the weight of the multivectors) and leave out signs (which depend on the ordering of indices in the basis elements).

Here $\widehat{x}$ is the grade involution, which flips the sign of odd-grade elements such as vectors and trivectors, while leaving even-grade elements unchanged.

Following Refs. $\left\lbrack {9,{18},{19}}\right\rbrack$ , we represent planes with vectors, and require that the intersection of two geometric objects is given by the wedge product of their representations. Lines (the intersection of two planes) are thus represented as bivectors, points (the intersection of three planes) as trivectors. This leads to a duality between objects and operators, where objects are represented like transformations that leave them invariant. Table I provides a dictionary of these embeddings. It is easy to check that this representation is consistent with using the sandwich product for transformations.

We construct network layers that are equivariant with respect to $\mathrm{E}\left( 3\right)$ , or equivalently its double cover $\operatorname{Pin}\left( {3,0,1}\right)$ . A function $f : {\mathbb{G}}_{3,0,1} \rightarrow {\mathbb{G}}_{3,0,1}$ is $\operatorname{Pin}\left( {3,0,1}\right)$ -equivariant with respect to the representation $\rho$ (or $\operatorname{Pin}\left( {3,0,1}\right)$ -equivariant for short) if $f\left( {{\rho }_{u}\left( x\right) }\right) = {\rho }_{u}\left( {f\left( x\right) }\right)$ for any $u \in \operatorname{Pin}\left( {3,0,1}\right)$ and $x \in {\mathbb{G}}_{3,0,1}$ .

## III. THE GEOMETRIC ALGEBRA TRANSFORMER

## A. Architecture overview

The Geometric Algebra Transformer (GATr) is designed based on three principles outlined in the introduction: a strong inductive bias for geometric data through a representation based on geometric algebra, symmetry awareness through $\mathrm{E}\left( 3\right)$ equivariance, and scalability and versatility through a transformer architecture.

We sketch GATr in Fig. 1. In the top row, we show the overall workflow. If necessary, raw inputs are first preprocessed into geometric types. The geometric objects are then embedded into multivectors of the geometric algebra ${\mathbb{G}}_{3,0,1}$ , following the recipe described in Table I.

The multivector-valued data are processed with a GATr network. We show this architecture in more detail in the bottom row of Fig. 1. GATr consists of $N$ transformer blocks, each consisting of an equivariant multivector LayerNorm, an equivariant multivector self-attention mechanism, a residual connection, another equivariant LayerNorm, an equivariant multivector MLP with geometric bilinear interactions, and another residual connection. The architecture is thus similar to a typical transformer [23] with pre-layer normalization [1, 24], but adapted to correctly handle multivector data and be $\mathrm{E}\left( 3\right)$ equivariant. We describe the individual layers below.

Finally, from the outputs of the GATr network we extract the target variables, again following the mapping given in Table I.

## B. GATr primitives

a) Linear layers: We begin with linear layers between multivectors.
In Appendix A, we show that the equivariance condition severely constrains them: + +Proposition 1. Any linear map $\phi : \mathbb{G}_{d,0,1} \rightarrow \mathbb{G}_{d,0,1}$ that is equivariant to $\operatorname{Pin}(d,0,1)$ is of the form + +$$
\phi(x) = \sum_{k=0}^{d+1} w_k \langle x \rangle_k + \sum_{k=0}^{d} v_k \, e_0 \langle x \rangle_k \tag{2}
$$ + +for parameters $w \in \mathbb{R}^{d+2}, v \in \mathbb{R}^{d+1}$. Here $\langle x \rangle_k$ is the blade projection of a multivector, which sets all non-grade-$k$ elements to zero. + +Thus, $\mathrm{E}(3)$-equivariant linear maps between $\mathbb{G}_{3,0,1}$ multivectors can be parameterized with nine coefficients: five for the grade projections, and four for grade projections followed by a multiplication with the homogeneous basis vector $e_0$. We thus parameterize affine layers between multivector-valued arrays with Eq. (2), with learnable coefficients $w_k$ and $v_k$ for each combination of input channel and output channel. In addition, there is a learnable bias term for the scalar components of the outputs (biases for the other components are not equivariant). + +![019640fe-24fe-7951-97b3-4c97a1b721dc_2_134_146_1530_499_0.jpg](images/019640fe-24fe-7951-97b3-4c97a1b721dc_2_134_146_1530_499_0.jpg) + +Fig. 1: Overview of the GATr architecture. Boxes with solid lines are learnable components, those with dashed lines are fixed. + +b) Geometric bilinears: Equivariant linear maps are not sufficient to build expressive networks. The reason is that these operations allow for only very limited grade mixing. For the network to be able to construct new geometric features from existing ones, such as the translation vector between two points, two additional primitives are essential. + +The first is the geometric product $x, y \mapsto xy$, the fundamental bilinear operation of geometric algebra. It allows for substantial mixing between grades: for instance, the geometric product of two vectors consists of scalar and bivector components. The geometric product is equivariant (Appendix A). + +The second geometric primitive we use is derived from the so-called join${}^{1}$ $x, y \mapsto (x^* \wedge y^*)^*$. This map may appear complicated, but it plays a simple role in our architecture: an equivariant map that involves the dual $x \mapsto x^*$. Including the dual in an architecture is essential for expressivity: in $\mathbb{G}_{3,0,1}$, without any dualization it is impossible to represent even simple functions such as the Euclidean distance between two points [9]; we show this in Appendix A. While the dual itself is not $\operatorname{Pin}(3,0,1)$-equivariant (w.r.t. $\rho$), the join operation is equivariant to even (non-mirror) transformations. To make the join equivariant to mirrorings as well, we multiply its output with a pseudoscalar derived from the network inputs: $x, y, z \mapsto \operatorname{EquiJoin}(x, y; z) = z_{0123} (x^* \wedge y^*)^*$, where $z_{0123} \in \mathbb{R}$ is the pseudoscalar component of a reference multivector $z$. + +We define a geometric bilinear layer that combines the geometric product and the join of the two inputs as $\operatorname{Geometric}(x, y; z) = \operatorname{Concatenate}_{\text{channels}}(xy, \operatorname{EquiJoin}(x, y; z))$.
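To make the grade mixing concrete, the following is a minimal, unoptimized Python/NumPy sketch of the geometric product of $\mathbb{G}_{3,0,1}$. It is not the GATr implementation; the basis ordering, the tuple representation of blades, and the helper names (`blade_mul`, `gp`) are our own choices for illustration.

```python
import numpy as np
from itertools import combinations

# Basis blades of G(3,0,1) as sorted index tuples; index 0 is the
# degenerate (projective) direction with e_0^2 = 0; e_1..e_3 square to +1.
BLADES = [t for k in range(5) for t in combinations(range(4), k)]
IDX = {t: i for i, t in enumerate(BLADES)}
METRIC = {0: 0.0, 1: 1.0, 2: 1.0, 3: 1.0}

def blade_mul(a, b):
    """Geometric product of two basis blades -> (coefficient, result blade)."""
    coeff, prod = 1.0, list(a)
    for e in b:
        # Commute e leftwards past all larger indices: one sign flip each.
        coeff *= (-1.0) ** sum(1 for f in prod if f > e)
        if e in prod:
            prod.remove(e)
            coeff *= METRIC[e]  # contract e e -> metric (0 for e_0)
        else:
            prod.append(e)
            prod.sort()
    return coeff, tuple(prod)

def gp(x, y):
    """Geometric product of multivectors stored as 16 coefficients."""
    out = np.zeros(16)
    for i, a in enumerate(BLADES):
        for j, b in enumerate(BLADES):
            c, blade = blade_mul(a, b)
            if c != 0.0:
                out[IDX[blade]] += c * x[i] * y[j]
    return out

# Grade mixing: the product of the vectors e_1 and e_1 + e_2 has a scalar
# part <a, b> = 1 and a bivector part a ^ b = e_12.
a, b = np.zeros(16), np.zeros(16)
a[IDX[(1,)]] = 1.0
b[IDX[(1,)]] = b[IDX[(2,)]] = 1.0
c = gp(a, b)
print(c[IDX[()]], c[IDX[(1, 2)]])  # -> 1.0 1.0
```

The final print illustrates the statement above: the product of two vectors contains both a scalar and a bivector component.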
In GATr, this layer is included in the MLP. + +c) Nonlinearities and normalization: We use scalar-gated GELU nonlinearities [12] $\operatorname{GatedGELU}(x) = \operatorname{GELU}(x_1)\, x$, where $x_1$ is the scalar component of the multivector $x$. Moreover, we define an E(3)-equivariant LayerNorm operation for multivectors as $\operatorname{LayerNorm}(x) = x / \sqrt{\mathbb{E}_c \langle x, x \rangle}$, where the expectation goes over channels and we use the invariant inner product $\langle \cdot, \cdot \rangle$ of $\mathbb{G}_{3,0,1}$. + +d) Attention: Given multivector-valued query, key, and value tensors, each consisting of $n_i$ items (or tokens) and $n_c$ channels (key length), we define the $\mathrm{E}(3)$-equivariant multivector attention as + +$$
\operatorname{Attention}(q, k, v)_{i'c'} = \sum_i \operatorname{Softmax}_i\!\left( \frac{\sum_c \langle q_{i'c}, k_{ic} \rangle}{\sqrt{8 n_c}} \right) v_{ic'}. \tag{3}
$$ + +Here the indices $i, i'$ label items, $c, c'$ label channels, and $\langle \cdot, \cdot \rangle$ is the invariant inner product of the geometric algebra. Just as in the original transformer [23], we thus compute scalar attention weights with a scaled dot product; the difference is that we use the inner product of $\mathbb{G}_{3,0,1}$. We extend this attention mechanism to multi-head self-attention in the usual way. + +## C. Extensions + +a) Auxiliary scalar representations: While multivectors are well-suited to model geometric data, many problems contain non-geometric information as well. Such scalar information may be high-dimensional, for instance in sinusoidal positional encoding schemes. Rather than embedding it into the scalar components of the multivectors, we add an auxiliary scalar representation to the hidden states of GATr. Each layer thus has both scalar and multivector inputs and outputs. They have the same batch dimension and item dimension, but may have a different number of channels. + +This additional scalar information interacts with the multivector data in two ways. In linear layers, we allow the auxiliary scalars to mix with the scalar component of the multivectors. In the attention layer, we compute attention weights both from the multivectors, as given in Eq. (3), and from the auxiliary scalars, using a regular scaled dot-product attention. The two attention maps are summed before computing the softmax, and the normalizing factor is adapted. In all other layers, the scalar information is processed separately from the multivector information, using the unrestricted form of the multivector map. For instance, nonlinearities transform multivectors with equivariant gated GELUs and auxiliary scalars with regular GELU functions. + +--- + +${}^{1}$ Technically, the join has an anti-dual, not the dual, in the output. We leave this detail out for notational simplicity. + +---
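As a concrete, non-authoritative summary of the primitives above, here is a single-channel NumPy sketch. We assume a fixed ordering of the 16 basis components, the tanh approximation of GELU, and we read the invariant inner product as a plain dot product over the eight $e_0$-free components (consistent with the $\sqrt{8 n_c}$ normalization in Eq. (3)); none of these names or conventions are taken from the official implementation.

```python
import numpy as np

# One multivector = 16 coefficients over the basis (our ordering):
BLADES = ["", "0", "1", "2", "3", "01", "02", "03", "12", "13", "23",
          "012", "013", "023", "123", "0123"]
GRADES = np.array([len(b) for b in BLADES])
EUCLIDEAN = np.array(["0" not in b for b in BLADES])  # the 8 e_0-free blades

def grade_project(x, k):
    """<x>_k: zero out every component whose grade is not k."""
    return np.where(GRADES == k, x, 0.0)

def e0_mul(x):
    """Left product e_0 x: e_0 e_S = e_{0S} if 0 not in S (sign +1 because
    0 is the smallest index), and 0 otherwise since e_0^2 = 0."""
    out = np.zeros_like(x)
    for i, s in enumerate(BLADES):
        if "0" not in s:
            out[..., BLADES.index("0" + s)] = x[..., i]
    return out

def equi_linear(x, w, v):
    """The equivariant linear map of Eq. (2) for d = 3: nine parameters."""
    y = sum(w[k] * grade_project(x, k) for k in range(5))
    return y + e0_mul(sum(v[k] * grade_project(x, k) for k in range(4)))

def gated_gelu(x):
    """Scalar-gated GELU: gate the full multivector by GELU of its scalar."""
    s = x[..., 0]
    g = 0.5 * s * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (s + 0.044715 * s**3)))
    return g[..., None] * x

def inner(x, y):
    """Invariant inner product: dot product over the e_0-free components."""
    return np.sum(x[..., EUCLIDEAN] * y[..., EUCLIDEAN], axis=-1)

def layer_norm(x, eps=1e-12):
    """Equivariant LayerNorm: divide by the channel-averaged inner product."""
    scale = np.sqrt(np.mean(inner(x, x), axis=-1, keepdims=True) + eps)
    return x / scale[..., None]

def attention(q, k, v):
    """Eq. (3) for arrays of shape (items, channels, 16)."""
    logits = np.einsum("icx,jcx->ij", q[..., EUCLIDEAN], k[..., EUCLIDEAN])
    logits /= np.sqrt(8 * q.shape[1])
    w = np.exp(logits - logits.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return np.einsum("ij,jcx->icx", w, v)
```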
| Method | Reward |
| --- | --- |
| GATr-Diffuser (ours) | $74.8 \pm 1.7$ |
| Transformer-Diffuser | $69.8 \pm 1.9$ |
| Diffuser [15] (reproduced) | $57.7 \pm 1.8$ |
| Diffuser [15] | $58.7 \pm 2.5$ |
| EDGI [5] | $62.0 \pm 2.1$ |
| CQL [17] | 24.4 |
| BCQ [11] | 0.0 |
+ +TABLE II: Diffusion-based robotic planning. We show the normalized cumulative rewards achieved on a robotic block stacking task [15], where 100 is optimal and means that each block stacking task is completed successfully, while 0 corresponds to a failure to stack any blocks. We show the mean and standard error over at least 100 evaluation episodes. The top three results were computed in the GATr code base, the bottom four are taken from the literature [5, 15]. + +b) Rotary positional embeddings: GATr assumes that the data can be described as a set of items (or tokens). If these items are distinguishable and form a sequence, we encode their position using rotary position embeddings [22] in the auxiliary scalar variables. + +c) Axial attention over objects and time: The architecture is flexible about the structure of the data. In some use cases, there will be a single dimension along which objects are organized, for instance when describing a static scene or the time evolution of a single object. But GATr also supports the organization of a problem along multiple axes, for example with one dimension describing objects and another describing time steps. In this case, we follow an axial transformer layout [13], alternating between transformer blocks that attend over different dimensions. (The not-attended dimensions in each block are treated like a batch dimension.) + +## IV. Robotic planning through invariant diffusion + +In Appendix C, we demonstrate GATr on a synthetic $n$-body regression problem. We find that it outperforms non-geometric baselines and the $\mathrm{E}(3)$-equivariant SEGNN in terms of sample efficiency and generalization. + +In this section of the main paper, we restrict ourselves to a robotics experiment. We show how GATr defines an $\mathrm{E}(3)$-invariant diffusion model, that it can be used for model-based reinforcement learning and planning, and that this combination is well-suited to solve robotics problems. + +We follow Janner et al. [15], who propose to treat learning a world model and planning within that model as a unified generative modeling problem. After training a diffusion model [20] on offline trajectories, one can use it in a planning loop, sampling from it conditional on the current state, desired future states, or to maximize a given reward, as needed. + +We embed a GATr model in this algorithm and call this combination GATr-Diffuser. GATr is equivariant with respect to $\mathrm{E}(3)$ and the object permutation group $\mathrm{S}_n$. When used together with a base density that is $\mathrm{E}(3) \times \mathrm{S}_n$-invariant, the diffusion model is also $\mathrm{E}(3) \times \mathrm{S}_n$-invariant [2, 16]. Often, a particular task requires breaking this symmetry: imagine, for instance, that a particular object needs to be moved to a particular location. The Diffuser approach is an excellent match for such situations, as conditioning on the current state, future state, or a reward model as proposed by Janner et al. [15] can softly break the symmetry group as desired [5]. + +![019640fe-24fe-7951-97b3-4c97a1b721dc_3_914_151_745_747_0.jpg](images/019640fe-24fe-7951-97b3-4c97a1b721dc_3_914_151_745_747_0.jpg) + +Fig. 2: Diffusion-based robotic planning. We show normalized rewards (higher is better) as in Tbl. II as a function of the training dataset size. GATr is more successful at block stacking and more sample-efficient than the baselines, including the original Diffuser model [15] and our modification of it based on a transformer. In grey, we show results reported in the literature [5, 15].
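To make the planning-as-sampling loop concrete, here is a hedged sketch of Diffuser-style conditional ancestral sampling. The noise schedule, the `model(traj, t)` interface, and the hard clamping of the first state are illustrative assumptions, not the exact procedure of Janner et al. [15].

```python
import numpy as np

def plan(model, x_current, state_dim, horizon=128, n_steps=1000, seed=0):
    """Ancestral DDPM sampling over a whole trajectory, clamping the first
    state to the observed one after every denoising step. model(traj, t)
    is assumed to predict the added noise, as in Ho et al. [14]."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 2e-2, n_steps)       # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    traj = rng.normal(size=(horizon, state_dim))   # start from the base density
    for t in reversed(range(n_steps)):
        eps = model(traj, t)
        mean = (traj - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.normal(size=traj.shape) if t > 0 else 0.0
        traj = mean + np.sqrt(betas[t]) * noise
        traj[0] = x_current    # soft symmetry breaking by conditioning
    return traj
```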
GATr-Diffuser is demonstrated on the problem of a Kuka robotic gripper stacking blocks, using the "unconditional" environment introduced by Janner et al. [15]. We train a GATr-Diffuser model on the offline trajectory dataset published with that paper. To facilitate a geometric interpretation, we parameterize the data in terms of geometric quantities like object positions and orientations. In particular, we use the position and pose of the robotic end-effector as features and map to joint angles with an inverse kinematics model. We then test GATr-Diffuser on its ability to stack four blocks on top of each other. We compare our GATr-Diffuser model to a reproduction of the original Diffuser model (based on the published code, but using our data parameterization) and to a new transformer backbone for the Diffuser model. In addition, we show the published results of Diffuser [15], the equivariant EDGI [5], and the offline RL algorithms CQL [17] and BCQ [11], as published in Ref. [15]. The problem and hyperparameters are described in detail in Appendix D. + +As shown in Tbl. II and Fig. 2, GATr-Diffuser is able to solve the block-stacking problem better than all baselines. It is also clearly more sample-efficient, matching the performance of a Diffuser model trained on the full dataset even when training on only $1\%$ of the trajectories. The fact that GATr-Diffuser also outperforms the E(3)-equivariant EDGI model [5] is evidence that equivariance alone is not the key to its success, hinting that the geometric algebra provides a useful inductive bias. + +## REFERENCES + +[1] Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018. + +[2] Avishek Joey Bose and Ivan Kobyzev. Equivariant finite normalizing flows. arXiv preprint arXiv:2110.08649, 2021. + +[3] Johannes Brandstetter, Rianne van den Berg, Max Welling, and Jayesh K Gupta. Clifford neural layers for PDE modeling. arXiv preprint arXiv:2209.04934, 2022. + +[4] Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, and Max Welling. Geometric and physical quantities improve E(3) equivariant message passing. In International Conference on Learning Representations, 2022. + +[5] Johann Brehmer, Joey Bose, Pim De Haan, and Taco Cohen. EDGI: Equivariant diffusion for planning with embodied agents. ICLR Workshop on Reincarnating Reinforcement Learning, 2023. + +[6] Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021. + +[7] Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pages 2990-2999. PMLR, 2016. + +[8] Erwin Coumans and Yunfei Bai. PyBullet, a Python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016-2019. + +[9] Leo Dorst. A guided tour to the plane-based geometric algebra PGA. 2020. URL https://geometricalgebra.org/downloads/PGA4CS.pdf. + +[10] Leo Dorst, Daniel Fontijne, and Stephen Mann. Geometric Algebra for Computer Science: An Object-oriented Approach to Geometry. Morgan Kaufmann Series in Computer Graphics. Morgan Kaufmann, Amsterdam, 2007. ISBN 978-0-12-369465-2.
+ +[11] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pages 2052-2062. PMLR, 2019. + +[12] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016. + +[13] Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. Axial attention in multidimensional transformers. arXiv preprint arXiv:1912.12180, 2019. + +[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Neural Information Processing Systems, 2020. + +[15] Michael Janner, Yilun Du, Joshua Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. In International Conference on Machine Learning, 2022. + +[16] Jonas Köhler, Leon Klein, and Frank Noé. Equivariant flows: exact likelihood generative learning for symmetric densities. In International Conference on Machine Learning, pages 5361-5370. PMLR, 2020. + +[17] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179-1191, 2020. + +[18] Martin Roelfs and Steven De Keninck. Graded symmetry groups: plane and simple. arXiv preprint arXiv:2107.03771, 2021. + +[19] David Ruhe, Jayesh K Gupta, Steven de Keninck, Max Welling, and Johannes Brandstetter. Geometric Clifford algebra networks. arXiv preprint arXiv:2302.06594, 2023. + +[20] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256-2265. PMLR, 2015. + +[21] Matthew Spellings. Geometric algebra attention networks for small point clouds. arXiv preprint arXiv:2110.02393, 2021. + +[22] Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864, 2021. + +[23] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, 2017. + +[24] Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. In International Conference on Machine Learning, pages 10524-10533. PMLR, 2020. + +## Appendix + +## A. Theoretical results + +In this section, we state or prove several properties of equivariant maps between geometric algebras that we use in the construction of GATr. + +The grade involution is a linear involutive bijection $\widehat{\cdot} : \mathbb{G}_{n,0,r} \rightarrow \mathbb{G}_{n,0,r}$ which sends a $k$-blade $x$ to $\widehat{x} = (-1)^k x$. Note that this is an algebra automorphism, $\widehat{xy} = \widehat{x}\widehat{y}$, and also a $\wedge$-algebra automorphism. The reversal is a linear involutive bijection $\widetilde{\cdot} : \mathbb{G}_{n,0,r} \rightarrow \mathbb{G}_{n,0,r}$ which sends a $k$-blade $x = x_1 \wedge x_2 \wedge \ldots \wedge x_k$ to the reverse $\widetilde{x} = x_k \wedge \ldots \wedge x_2 \wedge x_1 = \pm x$, with $+x$ if $k \in \{0,1,4,5,8,9,\ldots\}$ and $-x$ otherwise. Note that the reversal is an anti-automorphism (contravariant functor): $\widetilde{xy} = \widetilde{y}\widetilde{x}$.
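Both sign maps are easy to tabulate explicitly. Below is a small NumPy sketch for $\mathbb{G}_{3,0,1}$ (basis ordering ours, for illustration): the grade involution multiplies a grade-$k$ component by $(-1)^k$, and the reversal by $(-1)^{k(k-1)/2}$, reproducing the sign pattern $+, +, -, -, +$ for grades $0, \ldots, 4$.

```python
import numpy as np
from itertools import combinations

# Basis blades of G(3,0,1) grouped by grade k = 0, ..., 4.
BLADES = [t for k in range(5) for t in combinations(range(4), k)]
GRADES = np.array([len(b) for b in BLADES])

# Grade involution: a grade-k component picks up the sign (-1)^k.
INVOLUTION_SIGNS = (-1.0) ** GRADES
# Reversal: reversing k vectors takes k(k-1)/2 swaps, sign (-1)^(k(k-1)/2).
REVERSAL_SIGNS = (-1.0) ** (GRADES * (GRADES - 1) // 2)

def grade_involution(x):
    return INVOLUTION_SIGNS * x

def reversal(x):
    return REVERSAL_SIGNS * x
```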
Here we denote the sandwich action of $u \in \operatorname{Pin}(n,0,r)$ on a multivector $x$ not as $\rho_u(x)$, but as $u[x]$. For odd $u$, $u[x] = u\widehat{x}u^{-1}$, while for even $u$, $u[x] = uxu^{-1}$. The sandwich action is linear by linearity of $\widehat{\cdot}$ and bilinearity of the geometric product. Furthermore, note that for any particular $u \in \operatorname{Pin}(n,0,r)$, the action is a geometric algebra homomorphism: $u[ab] = u\widehat{ab}u^{-1} = u\widehat{a}u^{-1}u\widehat{b}u^{-1} = u[a]u[b]$. By linearity and a symmetrization argument [10, Sec 7.1], one can show that it is also a $\wedge$-algebra homomorphism (outermorphism): $u[a \wedge b] = u[a] \wedge u[b]$. + +Let $l \geq k$. Given a $k$-vector $a$ and an $l$-vector $b$, define the left contraction as $a\rfloor b := \langle ab \rangle_{l-k}$, which is an $(l-k)$-vector. For $k = 1$ and $b$ a blade $b = b_1 \wedge \ldots \wedge b_l$, geometrically $a\rfloor b$ is the projection of $a$ onto the space spanned by the vectors $b_i$. Thus we have that $a\rfloor b = 0 \Leftrightarrow \forall i, \langle a, b_i \rangle = 0$ [10, Sec 3.2.3], in which case we define $a$ and $b$ to be orthogonal. In particular, two vectors $a, b$ are orthogonal if their inner product is zero. Furthermore, we define a vector $a$ to be tangential to a blade $b$ if $a \wedge b = 0$. + +In the projective algebra, a blade $x$ is defined to be ideal if it can be written as $x = e_0 \wedge y$ for another blade $y$. + +1) Linear maps: We begin with Pin-equivariant linear maps. After some technical lemmata, we prove the most general form of linear equivariant maps in the Euclidean geometric algebra $\mathbb{G}_{n,0,0}$, and then also in the projective geometric algebra $\mathbb{G}_{n,0,1}$. + +Proposition 2. The grade projection $\langle \cdot \rangle_k$ is equivariant [10, Sec 13.2.3]. + +Proof: Choose an $l$-blade $x = a_1 \wedge a_2 \wedge \ldots \wedge a_l$. Let $u$ be a 1-versor. As the action of $u$ is an outermorphism, $u[x] = u[a_1] \wedge \ldots \wedge u[a_l]$ is an $l$-blade. Now if $l \neq k$, then $\langle x \rangle_k = 0$ and, since $u[x]$ is an $l$-blade, also $\langle u[x] \rangle_k = 0$; thus $u[\langle x \rangle_k] = \langle u[x] \rangle_k$. If $l = k$, then $\langle x \rangle_k = x$ and $\langle u[x] \rangle_k = u[x]$, and thus again $u[\langle x \rangle_k] = \langle u[x] \rangle_k$. As the grade projection is linear, equivariance extends to any multivector. + +Proposition 3. The following map is equivariant: $\phi : \mathbb{G}_{3,0,1} \rightarrow \mathbb{G}_{3,0,1} : x \mapsto e_0 x$. + +Proof: Let $u$ be a 1-versor; then $u$ acts on a multivector as $x \mapsto u[x] = u\widehat{x}u^{-1}$, where $\widehat{x}$ is the grade involution.
Note that $e_0$ is invariant: $u[e_0] = -ue_0u^{-1} = e_0uu^{-1} = e_0$, where $ue_0 = -e_0u$ because $u$ and $e_0$ are orthogonal: $ue_0 = \langle u, e_0 \rangle + u \wedge e_0 = u \wedge e_0 = -e_0 \wedge u = -e_0 u$. Then $\phi$ is equivariant, as the action is an algebra homomorphism: $u[\phi(x)] = u[e_0 x] = u\widehat{e_0 x}u^{-1} = u\widehat{e_0}u^{-1}u\widehat{x}u^{-1} = u[e_0]u[x] = e_0 u[x] = \phi(u[x])$. It follows that $\phi$ is also equivariant to any product of vectors, i.e. any versor $u$. + +a) Euclidean geometric algebra: Before constructing the most general equivariant linear map between multivectors in the projective geometric algebra, we begin with the Euclidean case $\mathbb{G}_{n,0,0}$. + +Theorem 1 (Cartan-Dieudonné). Every orthogonal transformation of an $n$-dimensional space can be decomposed into at most $n$ reflections in hyperplanes. + +Proof: This theorem is proven in Roelfs and De Keninck [18]. + +Lemma 1. In the $n$-dimensional Euclidean geometric algebra $\mathbb{G}_{n,0,0}$, the group $\operatorname{Pin}(n,0,0)$ acts transitively on the space of $k$-blades of norm $\lambda \in \mathbb{R}^{>0}$. + +Proof: As the Pin group preserves norms, choose $\lambda = 1$ without loss of generality. Any $k$-blade $x$ of unit norm can be written by Gram-Schmidt factorization as the wedge product of $k$ orthogonal vectors of unit norm, $x = v_1 \wedge v_2 \wedge \ldots \wedge v_k$. Consider another $k$-blade $y = w_1 \wedge w_2 \wedge \ldots \wedge w_k$ with $w_i$ orthonormal. We will construct a $u \in \operatorname{Pin}(n,0,0)$ such that $u[x] = y$. + +Choose $n - k$ additional orthonormal vectors $v_{k+1}, \ldots, v_n$ and $w_{k+1}, \ldots, w_n$ to form orthonormal bases. Then, there exists a unique orthogonal transformation $\mathbb{R}^n \rightarrow \mathbb{R}^n$ that maps $v_i$ to $w_i$ for all $i \in \{1, \ldots, n\}$. By the Cartan-Dieudonné Theorem 1, this orthogonal transformation can be expressed as a product of reflections; thus there exists a $u \in \operatorname{Pin}(n,0,0)$ such that $u[v_i] = w_i$. As the $u$ action is a $\wedge$-algebra homomorphism ($u[a \wedge b] = u[a] \wedge u[b]$ for any multivectors $a, b$), we have that $u[x] = y$. + +Lemma 2. In the Euclidean ($r = 0$) or projective ($r = 1$) geometric algebra $\mathbb{G}_{n,0,r}$, let $x$ be a $k$-blade. Let $u$ be a 1-versor. Then $u[x] = x \Leftrightarrow u\rfloor x = 0$ and $u[x] = -x \Leftrightarrow u \wedge x = 0$. + +Proof: Let $x$ be a $k$-blade and $u$ a vector of unit norm. We can decompose $u$ into $u = t + v$ with $t \wedge x = 0$ (the part tangential to the subspace of $x$) and $v\rfloor x = 0$ (the normal part). This decomposition is unique unless $x$ is ideal in the projective GA, in which case the $e_0$ component of $u$ is both normal and tangential, and we choose $t$ Euclidean.
+ +In either case, note the following equalities: $xt = (-1)^{k-1}tx$, $xv = (-1)^k vx$, $vt = -tv$, and note that there is no $\lambda \neq 0$ such that $vtx = \lambda x$, which can be shown e.g. by picking a basis. Then: + +$$
u[x] = (-1)^k (t+v)\,x\,(t+v) = (t+v)(-t+v)\,x = \left( -\|t\|^2 + \|v\|^2 \right) x - 2vtx.
$$ + +We have $u[x] \propto x \Leftrightarrow vtx = 0$. If $x$ is not ideal, this implies that either $v = 0$ (thus $u \wedge x = 0$ and $u[x] = -x$) or $t = 0$ (thus $u\rfloor x = 0$ and $u[x] = x$). If $x$ is ideal, this implies that either $v \propto e_0$ (thus $u \wedge x = 0$ and $u[x] = -x$) or $t = 0$ (thus $u\rfloor x = 0$ and $u[x] = x$). + +Lemma 3. Let $r \in \{0,1\}$. Any linear $\operatorname{Pin}(n,0,r)$-equivariant map $\phi : \mathbb{G}_{n,0,r} \rightarrow \mathbb{G}_{n,0,r}$ can be decomposed into a sum of equivariant maps $\phi = \sum_{lkm} \phi_{lkm}$, with $\phi_{lkm}$ equivariantly mapping $k$-blades to $l$-blades. If $r = 0$ (Euclidean algebra) or $k < n + 1$, such a map $\phi_{lkm}$ is defined by the image of any one non-ideal $k$-blade, like $e_{12\ldots k}$. Instead, if $r = 1$ (projective algebra) and $k = n + 1$, then such a map is defined by the image of a pseudoscalar, like $e_{01\ldots n}$. + +Proof: The $\operatorname{Pin}(n,0,r)$ group action maps $k$-vectors to $k$-vectors. Therefore, $\phi$ can be decomposed into equivariant maps from grade $k$ to grade $l$: $\phi(x) = \sum_{lk} \phi_{lk}(\langle x \rangle_k)$, with $\phi_{lk}$ having $l$-vectors as image and all $k'$-vectors in the kernel for $k' \neq k$. Let $x$ be a non-ideal $k$-blade (or a pseudoscalar if $k = n + 1$). By Lemmas 1 and 4, in both the Euclidean and the projective GA, the span of the $k$-vectors in the orbit of $x$ contains any $k$-vector. So $\phi_{lk}$ is defined by the $l$-vector $y = \phi_{lk}(x)$. Any $l$-vector can be decomposed as a finite sum of $l$-blades: $y = y_1 + \ldots + y_M$. We can define $\phi_{lkm}(x) = y_m$, extended to all $l$-vectors by equivariance, and note that $\phi_{lk} = \sum_m \phi_{lkm}$. + +Proposition 4. For an $n$-dimensional Euclidean geometric algebra $\mathbb{G}_{n,0,0}$, any linear endomorphism $\phi : \mathbb{G}_{n,0,0} \rightarrow \mathbb{G}_{n,0,0}$ that is equivariant to the $\operatorname{Pin}(n,0,0)$ group (equivalently to $O(n)$) is of the type $\phi(x) = \sum_{k=0}^n w_k \langle x \rangle_k$, for parameters $w \in \mathbb{R}^{n+1}$. + +Proof: By the decomposition of Lemma 3, let $\phi$ map from $k$-blades to $l$-blades. Let $x$ be a $k$-blade. Let $u$ be a 1-versor. By Lemma 2, if $u$ is orthogonal to $x$, then $u[\phi(x)] = \phi(u[x]) = \phi(x)$ and $u$ is also orthogonal to $\phi(x)$.
If $u \wedge x = 0$, then $u[\phi(x)] = \phi(u[x]) = \phi(-x) = -\phi(x)$ and $u \wedge \phi(x) = 0$. Thus any vector in $x$ is in $\phi(x)$, and any vector orthogonal to $x$ is orthogonal to $\phi(x)$; this implies $\phi(x) = w_k x$ for some $w_k \in \mathbb{R}$. By Lemma 3, we can extend $\phi$ to $\phi(y) = w_k y$ for any $k$-vector $y$. + +b) Projective geometric algebra: How about equivariant linear maps in the projective geometric algebra? The degenerate metric makes the derivation more involved, but in the end we will arrive at a result that is only slightly more general. + +Lemma 4. The Pin group of the projective geometric algebra, $\operatorname{Pin}(n,0,1)$, acts transitively on the space of $k$-blades with positive norm $\|x\| = \lambda > 0$. Additionally, the group acts transitively on the space of zero-norm $k$-blades of the form $x = e_0 \wedge y$ (called ideal blades), with $\|y\| = \kappa$. + +Proof: Let $x = x_1 \wedge \ldots \wedge x_k$ be a $k$-blade with positive norm $\lambda$. All vectors $x_i$ can be written as $x_i = v_i + \delta_i e_0$, for a nonzero Euclidean vector $v_i$ (meaning with no $e_0$ component) and $\delta_i \in \mathbb{R}$, because if $v_i = 0$, the norm of $x$ would have been 0. Orthogonalize them as $x_2' = x_2 - \langle x_1, x_2 \rangle x_1$, etc., resulting in $x = x_1' \wedge \cdots \wedge x_k'$ with $x_i' = v_i' + \delta_i' e_0$ with orthogonal $v_i'$. + +Define the translation $t = 1 + \frac{1}{2}\sum_i \delta_i' e_0 \wedge v_i'$, which makes $x$ Euclidean: $t[x] = v_1' \wedge \ldots \wedge v_k'$. By Lemma 1, the Euclidean Pin group $\operatorname{Pin}(n,0,0)$, which is a subgroup of $\operatorname{Pin}(n,0,1)$, acts transitively on Euclidean $k$-blades of a given norm. Thus, in the projective geometric algebra $\operatorname{Pin}(n,0,1)$, any two $k$-blades of equal positive norm $\lambda$ are related by a translation to the origin followed by a $\operatorname{Pin}(n,0,0)$ transformation. + +For the ideal blades, let $x = e_0 \wedge y$, with $\|y\| = \kappa$. We take $y$ to be Euclidean without loss of generality. For any $g \in \operatorname{Pin}(n,0,1)$, $g[e_0] = e_0$, so $g[x] = e_0 \wedge g[y]$. Consider another $x' = e_0 \wedge y'$ with $\|y'\| = \kappa$, again taking $y'$ Euclidean. As $\operatorname{Pin}(n,0,0)$ acts transitively on Euclidean $(k-1)$-blades with norm $\kappa$, let $g \in \operatorname{Pin}(n,0,0)$ be such that $g[y] = y'$. Then $g[x] = x'$. + +We can now construct the most general equivariant linear map between projective geometric algebras, a key ingredient for GATr: + +Proposition 5.
For the projective geometric algebra $\mathbb{G}_{n,0,1}$, any linear endomorphism $\phi : \mathbb{G}_{n,0,1} \rightarrow \mathbb{G}_{n,0,1}$ that is equivariant to the group $\operatorname{Pin}(n,0,1)$ (equivalently to $E(n)$) is of the type $\phi(x) = \sum_{k=0}^{n+1} w_k \langle x \rangle_k + \sum_{k=0}^{n} v_k e_0 \langle x \rangle_k$, for parameters $w \in \mathbb{R}^{n+2}, v \in \mathbb{R}^{n+1}$. + +Proof: Following Lemma 3, decompose $\phi$ into linear equivariant maps from $k$-blades to $l$-blades. For $k < n + 1$, let $x = e_{12\ldots k}$. Then following Lemma 2, for any $1 \leq i \leq k$, $e_i \wedge x = 0$ and $e_i[x] = -x$, so $e_i[\phi(x)] = \phi(e_i[x]) = \phi(-x) = -\phi(x)$ and thus $e_i \wedge \phi(x) = 0$. Therefore, we can write $\phi(x) = x \wedge y_1 \wedge \ldots \wedge y_{l-k}$, for $l - k$ vectors $y_j$ orthogonal to $x$. + +Also, again using Lemma 2, for $k < i \leq n$: $e_i\rfloor x = 0 \Rightarrow e_i[\phi(x)] = \phi(x) \Rightarrow e_i\rfloor \phi(x) = 0 \Rightarrow \forall j, \langle e_i, y_j \rangle = 0$. Thus, the $y_j$ are orthogonal to all $e_i$ with $1 \leq i \leq n$. Hence, $l = k$, or $l = k + 1$ with $y_1 \propto e_0$. + +For $k = n + 1$, let $x = e_{012\ldots n}$. By a similar argument, all invertible vectors $u$ tangent to $x$ must be tangent to $\phi(x)$; thus we find that $\phi(x) = x \wedge y$ for some blade $y$. For any non-zero $\phi(x)$, $y \propto 1$, and thus $\phi(x) \propto x$. By Lemma 3, by equivariance and linearity, this fully defines $\phi$. + +2) Bilinear maps: Next, we turn towards bilinear operations. In particular, we show that the geometric product and the join are equivariant. + +For the geometric product, equivariance is straightforward: any transformation $u \in \operatorname{Pin}(n,0,r)$ gives a homomorphism of the geometric algebra, as for any multivectors $x, y$, $u[xy] = u\widehat{xy}u^{-1} = u\widehat{x}\widehat{y}u^{-1} = u\widehat{x}u^{-1}u\widehat{y}u^{-1} = u[x]u[y]$. The geometric product is thus equivariant. + +a) Dual and join in the Euclidean algebra: For the join and the closely related dual, we again begin with the Euclidean geometric algebra, before turning to the projective case later. + +The role of the dual is to have a bijection $\cdot^* : \mathbb{G}_{n,0,0} \rightarrow \mathbb{G}_{n,0,0}$ that maps $k$-vectors to $(n-k)$-vectors. For the Euclidean algebra, with a choice of pseudoscalar $\mathcal{I}$, we can define a dual as + +$$
x^* = x\mathcal{I}^{-1} = x\widetilde{\mathcal{I}}. \tag{4}
$$ + +This dual is bijective, and involutive up to a sign: $(y^*)^* = y\widetilde{\mathcal{I}}\widetilde{\mathcal{I}} = \pm y$, with $+y$ for $n \in \{1,4,5,8,9,\ldots\}$ and $-y$ for $n \in \{2,3,6,7,\ldots\}$.
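As a small numerical illustration of this dual in $n = 3$: expressed in a basis, $x \mapsto x\widetilde{\mathcal{I}}$ is just a signed permutation of coefficients. The sign table in the sketch below was worked out by hand and is specific to the stated basis ordering.

```python
import numpy as np

# Basis of G(3,0,0), ordered [1, e1, e2, e3, e12, e13, e23, e123].
# With I = e_123 we have reverse(I) = -e_123, and x* = x reverse(I) sends
# each basis blade to its complementary blade with a sign.
DUAL_INDEX = [7, 6, 5, 4, 3, 2, 1, 0]
DUAL_SIGN = np.array([-1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, 1.0])

def dual(x):
    out = np.zeros(8)
    for i, (j, s) in enumerate(zip(DUAL_INDEX, DUAL_SIGN)):
        out[j] = s * x[i]
    return out

e1 = np.eye(8)[1]
assert np.allclose(dual(e1), -np.eye(8)[6])  # (e_1)* = -e_23
assert np.allclose(dual(dual(e1)), -e1)      # double dual is -id for n = 3
```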
We choose $\widetilde{\mathcal{I}}$ instead of $\mathcal{I}$ in the definition of the dual so that, given $n$ vectors $x_1, \ldots, x_n$, the dual of the multivector $x = x_1 \wedge \ldots \wedge x_n$ is given by the scalar of the oriented volume spanned by the vectors. We denote the inverse of the dual as $x^{-*} = x\mathcal{I}$. Expressed in a basis, the dual yields the complementary indices and a sign. For example, for $n = 3$ and $\mathcal{I} = e_{123}$, we have $(e_1)^* = -e_{23}$ and $(e_{12})^* = e_3$. + +Via the dual, we can define the bilinear join operation for multivectors $x, y$: + +$$
x \vee y := (x^* \wedge y^*)^{-*} = \left( (x\widetilde{\mathcal{I}}) \wedge (y\widetilde{\mathcal{I}}) \right) \mathcal{I}.
$$ + +Lemma 5. In the Euclidean algebra $\mathbb{G}_{n,0,0}$, the join is $\operatorname{Spin}(n,0,0)$ equivariant. Furthermore, it is $\operatorname{Pin}(n,0,0)$ equivariant if and only if $n$ is even. + +Proof: The join is equivariant to the transformations from the group $\operatorname{Spin}(n,0,0)$, which consists of products of an even number of unit vectors, because such transformations leave the pseudoscalar $\mathcal{I}$ invariant, and the operation otherwise consists of equivariant geometric and wedge products. + +However, let $e_{12\ldots n} = \mathcal{I} \in \operatorname{Pin}(n,0,0)$ be the point reflection, which negates vectors of odd grades by the grade involution: $\mathcal{I}[x] = \widehat{x}$. Let $x$ be a $k$-vector and $y$ an $l$-vector. Then $x \vee y$ is a vector of grade $n - ((n-k) + (n-l)) = k + l - n$ (and zero if $k + l < n$). Given that the join is bilinear, the inputs transform as $(-1)^{k+l}$ under the point reflection, while the transformed output gets a sign $(-1)^{k+l-n}$. Thus for odd $n$, the join is not $\operatorname{Pin}(n,0,0)$ equivariant. + +To address this, given a pseudoscalar $z = \lambda \mathcal{I}$, we can create an equivariant Euclidean join via + +$$
\operatorname{EquiJoin}(x, y, z = \lambda \mathcal{I}) := \lambda (x \vee y) = \lambda (x^* \wedge y^*)^{-*}. \tag{5}
$$ + +Proposition 6. In the Euclidean algebra $\mathbb{G}_{n,0,0}$, the equivariant join EquiJoin is $\operatorname{Pin}(n,0,0)$ equivariant. + +Proof: EquiJoin is a multilinear operation, so for a $k$-vector $x$ and an $l$-vector $y$, under a point reflection the input gets a sign $(-1)^{k+l+n}$, while the output is still a $(k+l-n)$-vector and gets a sign $(-1)^{k+l-n}$. These signs differ by the even factor $(-1)^{2n} = 1$, and thus EquiJoin is $\operatorname{Pin}(n,0,0)$-equivariant. + +We prove two equalities of the Euclidean join which we use later. + +Lemma 6. In the algebra $\mathbb{G}_{n,0,0}$, let $v$ be a vector and $x, y$ be multivectors. Then + +$$
v\rfloor (x \vee y) = (v\rfloor x) \vee y \tag{6}
$$ + +and + +$$
x \vee (v\rfloor y) = -(-1)^n \, \widehat{(v\rfloor x)} \vee y. \tag{7}
$$ + +Proof: For the first statement, let $a$ be a $k$-vector and $b$ an $l$-vector.
Then note the following two identities: + +$$
a \vee b = \langle a^* b \widetilde{\mathcal{I}} \rangle_{2n-k-l}\,\mathcal{I} = \langle a^* b \rangle_{n-(2n-k-l)}\,\widetilde{\mathcal{I}}\mathcal{I} = \langle a^* b \rangle_{k+l-n} = a^* \rfloor b,
$$ + +$$
(v\rfloor a)^* = \langle va \rangle_{k-1}\,\widetilde{\mathcal{I}} = \langle va\widetilde{\mathcal{I}} \rangle_{n-k+1} = \langle v a^* \rangle_{n-k+1} = v\rfloor (a^*).
$$ + +Combining these and the associativity of $\rfloor$ gives: + +$$
(v\rfloor a) \vee b = (v\rfloor a)^* \rfloor b = v\rfloor (a^*) \rfloor b = v\rfloor (a \vee b).
$$ + +For the second statement, swapping the $k$-vector $a$ and the $l$-vector $b$ incurs $a \vee b = (a^* \wedge b^*)^{-*} = (-1)^{(n-k)(n-l)}(b^* \wedge a^*)^{-*} = (-1)^{(n-k)(n-l)}(b \vee a)$. Then we get: + +$$
\begin{aligned}
a \vee (v\rfloor b) &= (-1)^{(n-k)(n-l-1)} (v\rfloor b) \vee a \\
&= (-1)^{(n-k)(n-l-1)} v\rfloor (b \vee a) \\
&= (-1)^{(n-k)(n-l-1)+(n-k)(n-l)} v\rfloor (a \vee b) \\
&= (-1)^{(n-k)(n-l-1)+(n-k)(n-l)} (v\rfloor a) \vee b \\
&= (-1)^{(n-k)(2n-2l-1)} (v\rfloor a) \vee b \\
&= (-1)^{k-n} (v\rfloor a) \vee b \\
&= -(-1)^{k-1-n} (v\rfloor a) \vee b \\
&= -(-1)^n \, \widehat{(v\rfloor a)} \vee b.
\end{aligned}
$$ + +This generalizes to multivectors $x, y$ by linearity. + +b) Dual and join in the projective algebra: For the projective algebra $\mathbb{G}_{n,0,1}$ with its degenerate inner product, the dual definition of Eq. (4) unfortunately does not yield a bijective dual. For example, $e_0 \widetilde{e_{012\ldots n}} = 0$. For a bijective dual that yields the complementary indices on basis elements, a different definition is needed. Following Dorst [9], we use the right complement. This involves choosing an orthogonal basis and then, for a basis $k$-vector $x$, defining the dual $x^*$ to be the basis $(n+1-k)$-vector such that $x \wedge x^* = \mathcal{I}$, for the pseudoscalar $\mathcal{I} = e_{012\ldots n}$. For example, this gives the dual $e_{01}^* = e_{23}$, so that $e_{01} \wedge e_{23} = e_{0123}$. + +This dual is still easy to compute numerically, but it can no longer be constructed solely from operations available to us in the geometric algebra. This makes it more difficult to reason about equivariance. + +Proposition 7. In the algebra $\mathbb{G}_{n,0,1}$, the join $a \vee b = (a^* \wedge b^*)^{-*}$ is equivariant to $\operatorname{Spin}(n,0,1)$.
+ +Proof: Even though the dual is not a $\mathbb{G}_{n,0,1}$ operation, we can express the join in the algebra as follows. We decompose a $k$-vector $x$ as $x = t_x + e_0 p_x$ into a Euclidean $k$-vector $t_x$ and a Euclidean $(k-1)$-vector $p_x$. Then Dorst [9, Eq. (35)] computes the following expression: + +$$
(t_x + e_0 p_x) \vee (t_y + e_0 p_y) = \left( (t_x + e_0 p_x)^* \wedge (t_y + e_0 p_y)^* \right)^{-*} = t_x \vee_{\text{Euc}} p_y + (-1)^n \widehat{p_x} \vee_{\text{Euc}} t_y + e_0 (p_x \vee_{\text{Euc}} p_y), \tag{8}
$$ + +where the Euclidean join of multivectors $a, b$ in the projective algebra is defined to equal the join of the corresponding multivectors in the Euclidean algebra: + +$$
a \vee_{\text{Euc}} b := \left( (a\widetilde{e_{12\ldots n}}) \wedge (b\widetilde{e_{12\ldots n}}) \right) e_{12\ldots n}.
$$ + +The operation $a \vee_{\text{Euc}} b$ is $\operatorname{Spin}(n,0,0)$ equivariant, as discussed in Lemma 5. For any rotation $r \in \operatorname{Spin}(n,0,1)$ (which is Euclidean), we thus have $r[a \vee_{\text{Euc}} b] = r[a] \vee_{\text{Euc}} r[b]$. This makes the PGA join in Eq. (8) equivariant to the rotational subgroup $\operatorname{Spin}(n,0,0) \subset \operatorname{Spin}(n,0,1)$. + +We also need to show equivariance to translations. Let $v$ be a Euclidean vector and $\tau = 1 - e_0 v / 2$ a translation. Translations act by shifting with $e_0$ times a contraction: $\tau[x] = x - e_0 (v\rfloor x)$. This acts on the decomposed $x$ in the following way: $\tau[t_x + e_0 p_x] = \tau[t_x] + e_0 p_x = t_x + e_0 (p_x - v\rfloor t_x)$. + +We thus get:
$$
\begin{aligned}
\tau[x] \vee \tau[y] &= (\tau[t_x] + e_0 p_x) \vee (\tau[t_y] + e_0 p_y) \\
&= \left( t_x + e_0(p_x - v\rfloor t_x) \right) \vee \left( t_y + e_0(p_y - v\rfloor t_y) \right) \\
&= x \vee y - t_x \vee_{\text{Euc}} (v\rfloor t_y) - (-1)^n \widehat{(v\rfloor t_x)} \vee_{\text{Euc}} t_y - e_0\!\left( p_x \vee_{\text{Euc}} (v\rfloor t_y) + (v\rfloor t_x) \vee_{\text{Euc}} p_y \right) && \text{(used (8) and linearity)} \\
&= x \vee y - e_0\!\left( p_x \vee_{\text{Euc}} (v\rfloor t_y) + (v\rfloor t_x) \vee_{\text{Euc}} p_y \right) && \text{(used (7))} \\
&= x \vee y - e_0\!\left( -(-1)^n \widehat{(v\rfloor p_x)} \vee_{\text{Euc}} t_y + (v\rfloor t_x) \vee_{\text{Euc}} p_y \right) && \text{(used (7))} \\
&= x \vee y - e_0\!\left( (-1)^n (v\rfloor \widehat{p_x}) \vee_{\text{Euc}} t_y + (v\rfloor t_x) \vee_{\text{Euc}} p_y \right) \\
&= x \vee y - e_0\!\left( v\rfloor \left\{ (-1)^n \widehat{p_x} \vee_{\text{Euc}} t_y + t_x \vee_{\text{Euc}} p_y \right\} \right) && \text{(used (6))} \\
&= \tau[x \vee y].
\end{aligned}
$$ + +The join is thus equivariant${}^{2}$ to translations and rotations, and is therefore $\operatorname{Spin}(n,0,1)$ equivariant. + +Similar to the Euclidean case, we obtain full $\operatorname{Pin}(n,0,1)$ equivariance via multiplication with a pseudoscalar. We thus also use the EquiJoin from Eq. (5) in the projective case. + +3) Expressivity: As also noted in Ref. [9], in the projective algebra the geometric product itself is unable to compute many quantities. It is thus insufficient to build expressive networks. This follows from the fact that the geometric product preserves norms. + +Lemma 7. For the algebra $\mathbb{G}_{n,0,r}$ and multivectors $x, y$, we have $\|xy\| = \|x\| \|y\|$. + +Proof: $\|xy\|^2 = xy\widetilde{xy} = xy\widetilde{y}\widetilde{x} = x\|y\|^2\widetilde{x} = x\widetilde{x}\|y\|^2 = \|x\|^2\|y\|^2$. + +Hence, any null element of the algebra can never be mapped to a non-null element, including scalars. The projective algebra can have substantial information encoded in null blades, such as the positions of points. This information can never influence scalars or other non-null elements. For example, there is no way to compute the distance (a scalar) between points using only the geometric product of the projective algebra. In the GATr architecture, the inputs to the MLPs that operate on the scalars, or the attention weights, thus could not be affected by this null information, had we only used the geometric product on multivectors. + +To address this limitation, we use the join in addition to the geometric product. The join is able to compute such quantities. For example, given the Euclidean blade $e_{12\ldots n}$, we can map a null blade $x = e_{012\ldots k}$ to a non-null blade $x \vee e_{12\ldots n} \propto e_{12\ldots k}$. + +## B. Architecture + +In this section, we provide some details on the GATr architecture that did not fit into the main paper. + +a) Equivariant join: One of the primitives in GATr is the equivariant join $\operatorname{EquiJoin}(x, y; z)$, which we define in Eq. (5). For $x$ and $y$, we use hidden states of the neural network after the previous layer. The nature of $z$ is different: it is a reference multivector, only necessary to ensure that the function correctly changes sign under mirrorings of the inputs. We find it beneficial to choose this reference multivector $z$ based on the input data rather than the hidden representations, and choose it as the mean of all inputs to the network.
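To illustrate both the right-complement dual and the expressivity point of the previous subsection, here is a self-contained sketch of the join in $\mathbb{G}_{3,0,1}$; the tuple representation and helper names are our own. It maps the null blade $e_{012}$ and the Euclidean blade $e_{123}$ to the non-null blade $e_{12}$, which no chain of geometric products could do.

```python
import numpy as np
from itertools import combinations

# Basis blades of G(3,0,1) as sorted index tuples; e_0 is the degenerate direction.
BLADES = [t for k in range(5) for t in combinations(range(4), k)]
IDX = {t: i for i, t in enumerate(BLADES)}
FULL = (0, 1, 2, 3)

def wedge_blades(a, b):
    """Wedge of basis blades: 0 on shared indices, else a sign and the union."""
    if set(a) & set(b):
        return 0.0, ()
    sign, merged = 1.0, list(a)
    for e in b:
        sign *= (-1.0) ** sum(1 for f in merged if f > e)
        merged.append(e)
        merged.sort()
    return sign, tuple(merged)

def dual(x):
    """Right complement: e_A -> +-e_B such that e_A ^ e_B = e_0123."""
    out = np.zeros(16)
    for i, a in enumerate(BLADES):
        c = tuple(sorted(set(FULL) - set(a)))
        s, _ = wedge_blades(a, c)
        out[IDX[c]] = s * x[i]
    return out

def undual(x):
    """Exact inverse of dual (a signed permutation, so s^-1 = s)."""
    out = np.zeros(16)
    for i, a in enumerate(BLADES):
        c = tuple(sorted(set(FULL) - set(a)))
        s, _ = wedge_blades(a, c)
        out[i] = s * x[IDX[c]]
    return out

def wedge(x, y):
    out = np.zeros(16)
    for i, a in enumerate(BLADES):
        for j, b in enumerate(BLADES):
            s, c = wedge_blades(a, b)
            if s:
                out[IDX[c]] += s * x[i] * y[j]
    return out

def join(x, y):
    return undual(wedge(dual(x), dual(y)))

# The null blade e_012 joined with the Euclidean blade e_123 gives e_12,
# a non-null output obtained from a null input.
x, y = np.zeros(16), np.zeros(16)
x[IDX[(0, 1, 2)]] = 1.0
y[IDX[(1, 2, 3)]] = 1.0
print(join(x, y)[IDX[(1, 2)]])  # -> 1.0
```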
b) Auxiliary scalars: In addition to multivector representations, GATr supports auxiliary scalar representations, for instance to describe non-geometric side information such as positional encodings or diffusion time embeddings. In most layers, these scalar variables are processed like in a standard transformer, with two exceptions. In linear layers, we allow the scalar components of the multivectors and the auxiliary scalars to mix freely. In the attention operation, we compute attention weights as + +$$
\operatorname{Softmax}_i\!\left( \frac{\sum_c \langle q_{i'c}^{MV}, k_{ic}^{MV} \rangle + \sum_c q_{i'c}^{s} k_{ic}^{s}}{\sqrt{8 n_{MV} + n_s}} \right), \tag{9}
$$ + +where $q^{MV}$ and $k^{MV}$ are query and key multivector representations, $q^s$ and $k^s$ are query and key scalar representations, $n_{MV}$ is the number of multivector channels, and $n_s$ is the number of scalar channels. + +## C. n-body dynamics prediction + +a) Dataset: We first demonstrate GATr on an $n$-body dynamics prediction problem. Given the masses, initial positions, and velocities of a star and a few planets, the goal is to predict the final positions after the system has evolved under Newtonian gravity for some time. + +To be more precise, we generate data (for $n$ objects) as follows; a simplified code sketch of this procedure is given at the end of this subsection: + +1) The masses of the $n$ objects are sampled from log-uniform distributions. For one object (the star), we use $m_0 \in [1, 10]$; for the remaining objects (the planets), we use $m_i \in [0.01, 0.1]$. (Following common practice in theoretical physics, we use dimensionless quantities such that the gravitational constant is 1.) + +2) The initial positions of all bodies are sampled in a heliocentric reference frame: the star is set to the origin, while the planets are sampled uniformly on a plane within a distance $r_i \in [0.1, 1.0]$ from the star. + +3) The initial velocities are sampled. In the heliocentric reference frame, the star is at rest. The planet velocities are determined by computing the velocity of a stable circular orbit corresponding to the initial positions and masses, and then adding isotropic Gaussian noise (with standard deviation 0.01) to it. + +--- + +${}^{2}$ The authors agree with the reader that there must be an easier way to prove this. + +---
| Parameter | GATr | Transformer | MLP | SEGNN |
| --- | --- | --- | --- | --- |
| Layers | 10 blocks | 10 blocks | 10 layers | n/a |
| Channels | 16 multivectors + 128 scalars | 384 | 384 | n/a |
| Attention heads | 8 | 8 | n/a | n/a |
| Parameters $[10^6]$ | 1.9 | 11.8 | 1.3 | 0.1 |
+ +TABLE III: Hyperparameters used in the $n$-body experiments. + +4) We transform the positions and velocities from the heliocentric reference frame to a global reference frame by applying a random translation and rotation. The translation is sampled from a multivariate Gaussian with standard deviation 20 and zero mean (except for the domain generalization evaluation set, where we use a mean of $(200, 0, 0)^T$). The rotation is sampled from the Haar measure on $SO(3)$. In addition, we apply a random permutation of the bodies. + +5) We compute the final state of the system by evolving it under Newton's equations of motion, using Euler's method and 100 time steps with a time interval of $10^{-4}$ each. + +6) Finally, samples in which any bodies have traveled more than a distance of 2 (the diameter of the solar system) are rejected. (Otherwise, rare gravitational slingshot effects dominate the regression loss and all methods become unreliable.) + +We generate training datasets with $n = 4$ and between 100 and $10^5$ samples; a validation dataset with $n = 4$ and 5000 samples; a regular evaluation set with $n = 4$ and 5000 samples; a number-generalization evaluation set with $n = 6$ and 5000 samples; and an $\mathrm{E}(3)$ generalization set with $n = 4$, an additional translation (see step 4 above), and 5000 samples. + +All models are tasked with predicting the final object positions given the initial positions, initial velocities, and masses. + +b) Models: Our GATr model is explained in Sec. III. We embed object masses as scalars, positions as trivectors, and velocities (like translation vectors) as bivectors. + +GATr is compared to three baselines: the equivariant SEGNN [4], a vanilla transformer, and an MLP. For SEGNN, we use the code published by Brandstetter et al. [4] and the hyperparameters that publication uses for $n$-body experiments. We vary the number of nearest neighbours between 3 and the number of objects in the scene (corresponding to a fully connected graph) and show the best result. For the Transformer baseline, we follow a pre-layer normalization [1, 24] architecture with GELU activations [12] in the MLP block. For the MLP, we use GELU activations as well. + +In Tbl. III we show hyperparameter choices and parameter counts. + +c) Training: All models are trained by minimizing an $L_2$ loss on the final positions of all objects. We train for 50000 steps with the Adam optimizer, using a batch size of 64 and exponentially decaying the learning rate from $3 \cdot 10^{-4}$ to $3 \cdot 10^{-6}$. + +d) Results: In the left panel of Fig. 3 we show the prediction errors as a function of the number of training samples used. The MLP, which has the weakest inductive bias and treats the object positions and velocities as a single, structureless feature vector, performs poorly on this task. The transformer structures the data in terms of objects and is permutation-equivariant, but not aware of the geometry; it achieves a reasonable prediction accuracy when using the full training set. SEGNN, which is E(3)-equivariant, achieves a substantially better performance than the non-geometric baselines. Our GATr architecture outperforms all three, achieving an asymptotic performance on par with SEGNN while being clearly more sample-efficient. It is able to predict final positions with high accuracy even from just 100 training samples.
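The sketch promised above: a simplified NumPy version of generation steps 1)-6). The helper names and some sampling details (e.g., a uniform radius on the plane, displacement instead of path length in the rejection check, and the omitted random E(3) transformation of step 4) are our own simplifications, not the exact dataset code.

```python
import numpy as np

G = 1.0  # dimensionless units, gravitational constant set to 1
rng = np.random.default_rng(0)

def sample_system(n=4):
    """Steps 1)-3): log-uniform masses, planar heliocentric positions,
    near-circular orbit velocities with Gaussian noise (sigma = 0.01)."""
    log_m = np.concatenate([rng.uniform(np.log(1.0), np.log(10.0), 1),
                            rng.uniform(np.log(0.01), np.log(0.1), n - 1)])
    m = np.exp(log_m)
    r = rng.uniform(0.1, 1.0, n - 1)
    phi = rng.uniform(0.0, 2.0 * np.pi, n - 1)
    pos = np.zeros((n, 3))
    pos[1:, 0], pos[1:, 1] = r * np.cos(phi), r * np.sin(phi)
    vel = np.zeros((n, 3))
    v_circ = np.sqrt(G * m[0] / r)  # stable circular orbit speed
    vel[1:, 0], vel[1:, 1] = -v_circ * np.sin(phi), v_circ * np.cos(phi)
    vel[1:] += rng.normal(0.0, 0.01, (n - 1, 3))
    return m, pos, vel

def evolve(m, pos, vel, steps=100, dt=1e-4):
    """Step 5): Euler integration of Newtonian gravity."""
    for _ in range(steps):
        diff = pos[None, :, :] - pos[:, None, :]           # (n, n, 3)
        dist = np.linalg.norm(diff, axis=-1) + np.eye(len(m))
        acc = (G * m[None, :, None] * diff / dist[..., None] ** 3).sum(axis=1)
        pos = pos + dt * vel
        vel = vel + dt * acc
    return pos, vel

# Step 6), reduced to a boolean check on the displacement:
m, pos0, vel0 = sample_system()
pos1, _ = evolve(m, pos0, vel0)
accept = np.all(np.linalg.norm(pos1 - pos0, axis=-1) <= 2.0)
```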
+ +GATr also generalizes robustly out of domain, as we show in the middle and right panels of Fig. 3. When evaluated on a larger number of planets, the mean error becomes larger, as non-trivial gravitational interactions become more frequent, but GATr still outperforms the baselines. In particular, both GATr and the baseline transformer generalize better than SEGNN, providing evidence that a softmax-based attention mechanism is more robust to object-number generalization than the message passing algorithm of SEGNN. Finally, the performance of the $\mathrm{E}(3)$-equivariant GATr and SEGNN does not drop when evaluated on spatially translated data, while the non-equivariant baselines fail in this setting. + +## D. Robotic planning through invariant diffusion + +a) Environment: We use the block stacking environment from Janner et al. [15]. It consists of a Kuka robotic arm interacting with four blocks on a table, simulated with PyBullet [8]. The state consists of seven robotic joint angles as well as the positions and orientations of the four blocks. We consider the task of stacking four blocks on top of each other in any order. The reward is the stacking success probability and is normalized such that 0 means that no blocks are ever successfully stacked, while 100 denotes perfect block stacking. + +b) Dataset and data parameterization: We train models on the offline trajectory dataset published by Janner et al. [15]. It consists of 11000 expert demonstrations. + +To describe the problem in terms of geometric quantities, we re-parameterize the environment state into the positions and orientations of the robotic end-effector as well as the four blocks. The orientations of all objects are given by two direction vectors. In addition, there are attachment variables that characterize whether the end-effector is in contact with any of the four blocks. In this parameterization, the environment state is 49-dimensional. + +![019640fe-24fe-7951-97b3-4c97a1b721dc_10_162_143_1472_552_0.jpg](images/019640fe-24fe-7951-97b3-4c97a1b721dc_10_162_143_1472_552_0.jpg) + +Fig. 3: Results on a synthetic $n$-body dynamics dataset. We show the error in predicting the future positions of planets as a function of the training dataset size, with the mean and standard error over five independent training runs. Left: Evaluating without distribution shift. GATr is more sample-efficient than SEGNN [4] and outperforms the non-geometric baselines. Middle: Evaluating on systems with more planets than trained on. Both GATr and the baseline transformer generalize well to different object counts. Right: Evaluating on translated data. Because GATr is $\mathrm{E}(3)$ equivariant, it generalizes under this domain shift.
| Parameter | GATr-Diffuser | Transformer-Diffuser | Diffuser |
| --- | --- | --- | --- |
| Transformer blocks | $\{ {10},{20},{30}\}$ | $\{ {10},{20},{30}\}$ | n/a |
| Channels | 16 multivectors + 128 scalars | (144, 384) | n/a |
| Attention heads | 8 | 8 | n/a |
| Parameters $\left\lbrack {10}^{6}\right\rbrack$ | $\{ {2.1},{4.0},{5.9}\}$ | $\{ {1.8},\ldots ,{3.5},\ldots ,{35.7}\}$ | 65.1 |
TABLE IV: Hyperparameters used in the robotic planning experiments. For GATr-Diffuser and the Transformer-Diffuser, we experimented with different depths and (for the Transformer-Diffuser) channel counts. For each model, we independently chose the best-performing setting, shown here in bold. The Diffuser model uses a substantially different architecture based on a U-net; we refer the reader to Janner et al. [15] for details.

We train models in this geometric parameterization of the problem. To map back to the original parameterization in terms of joint angles, we use a simple inverse kinematics model that solves for the joint angles consistent with a given endeffector pose.

c) Models: Our GATr model is explained in Sec. III. We use the axial version, alternating between attending over time steps and over objects. We embed object positions as trivectors, object orientations as oriented planes, gripper attachment variables as scalars, and the diffusion time as scalars.

For the Transformer baseline, we follow a pre-layer normalization [1, 24] architecture with GELU activations [12] in the MLP block and rotary positional embeddings [22]. For the Diffuser baseline, we follow the architecture and hyperparameters described by Janner et al. [15].

For all models, we use the diffusion time embedding of Ref. [15]. In Tbl. IV we show hyperparameter choices and parameter counts.

All models are embedded in a diffusion pipeline as described by Ho et al. [14], using the hyperparameter choices of Ref. [15]. In particular, we use univariate Gaussian base densities and 1000 diffusion steps.

d) Training: We train all models by minimizing the simplified diffusion loss proposed by Ho et al. [14]. For our GATr model and the Transformer-Diffuser baseline we use an ${L}_{2}$ loss and train for 200000 steps with the Adam optimizer, exponentially decaying the learning rate from $3 \cdot {10}^{-4}$ to $3 \cdot {10}^{-6}$. This setup did not work well for the Diffuser model, where (following Janner et al. [15]) we use an ${L}_{1}$ loss and a low constant learning rate instead.

e) Evaluation: All models are evaluated by rolling out at least 200 episodes in a block stacking environment and reporting the mean task reward and the standard error. We use the planning algorithm and parameter choices of Janner et al. [15] (we do not optimize these, as our focus in this work is on architectural improvements). Planning consists of sampling trajectories of length 128 from the model, conditional on the current state, and then executing these in the environment using PyBullet's PID controller. Each rollout consists of three such phases. \ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/BbFl6GOleK/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..a55bfe4c88955c9e20bfd913f976fe1c16ee974c --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/BbFl6GOleK/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,188 @@

§ GEOMETRIC ALGEBRA TRANSFORMERS

Author names omitted for anonymous review. Paper ID: 1.

Abstract-Problems involving geometric data arise in a variety of fields, including computer vision, robotics, chemistry, and physics.
Such data can take numerous forms, such as points, direction vectors, planes, or transformations, but to date there is no single architecture that can be applied to such a wide variety of geometric types while respecting their symmetries. In this paper we introduce the Geometric Algebra Transformer (GATr), a general-purpose architecture for geometric data. GATr represents inputs, outputs, and hidden states in the projective geometric algebra, which offers an efficient 16-dimensional vector space representation of common geometric objects as well as operators acting on them. GATr is equivariant with respect to $\mathrm{E}\left( 3\right)$, the symmetry group of 3D Euclidean space. As a transformer, GATr is scalable, expressive, and versatile. In experiments with $n$-body modeling and robotic planning, GATr shows strong improvements over non-geometric baselines.

§ I. INTRODUCTION

From molecular dynamics to astrophysics, from material design to robotics, fields across science and engineering deal with geometric data: points, directions, surfaces, orientations, and so on. The geometric nature of data provides a rich structure: a notion of common operations between geometric types (computing distances between points, applying rotations to orientations, etc.), a well-defined behaviour of data under transformations of a system, and the independence of certain properties from the choice of coordinate system.

When learning relations from geometric data, incorporating this rich structure into the architecture has the potential to improve performance, especially in the low-data regime. To implement such an inductive bias, it is useful to first categorize inputs, outputs, and internal data into certain object types, for instance group representations. Next, certain regularity constraints are imposed on the functions mapping between these types, for instance based on equivariance [6].

In this spirit, we introduce the Geometric Algebra Transformer (GATr), a general-purpose network architecture for geometric data. GATr brings together three key ideas.

Geometric algebra: To naturally describe both geometric objects as well as their transformations in three-dimensional space, GATr represents data as multivectors of the projective geometric algebra ${\mathbb{G}}_{3,0,1}$. Geometric algebra is an elegant, versatile and practical mathematical framework for geometrical computations. The particular algebra ${\mathbb{G}}_{3,0,1}$ extends the vector space ${\mathbb{R}}^{3}$ to 16-dimensional multivectors, which can natively represent various geometric types and $\mathrm{E}\left( 3\right)$ poses. In this framework, common interactions between geometric data types can be computed with few operations, in particular the geometric product.

Equivariance: To behave consistently under transformations, GATr is equivariant with respect to $\mathrm{E}\left( 3\right)$, the symmetry group of three-dimensional space. To this end, we develop several new $\mathrm{E}\left( 3\right)$-equivariant primitives mapping between multivectors, including equivariant linear maps, an attention mechanism, nonlinearities, and normalization layers.

Transformer: Due to its favorable scaling properties, expressiveness, trainability, and versatility, the transformer architecture [23] has become the de-facto standard for a wide range of problems. GATr is based on the transformer architecture, and hence inherits these benefits.
GATr hence combines two lines of research: the representation of geometric objects with geometric algebra [9, 10, 18], popular in computer graphics and physics and recently gaining traction in deep learning $\left\lbrack {3,{19},{21}}\right\rbrack$, and the encoding of symmetries through equivariant deep learning [7]. The result, to the best of our knowledge the first $\mathrm{E}\left( 3\right)$-equivariant architecture with internal geometric algebra representations, is a versatile network for problems involving geometric data. We demonstrate GATr in a robotic planning problem, where it significantly outperforms non-geometric baselines.

§ II. GEOMETRIC ALGEBRA IN A NUTSHELL

We begin with the briefest of introductions to geometric algebra. For an in-depth introduction, we point the interested reader to Refs. [9, 10, 18, 19].

Whereas a plain vector space like ${\mathbb{R}}^{3}$ allows us to take linear combinations of elements $x$ and $y$ (vectors), a geometric algebra additionally has a bilinear associative operation: the geometric product, denoted simply by ${xy}$. By multiplying vectors, one obtains so-called multivectors, which can represent both geometrical objects and operators. Multivectors can be expanded on a multivector basis, characterized by their dimensionality or grade, such as scalars (grade 0), vectors ${e}_{i}$ (grade 1), bivectors ${e}_{i}{e}_{j}$ (grade 2), all the way up to the pseudoscalar ${e}_{1}\cdots {e}_{d}$ (grade $d$). The symmetric and antisymmetric parts of the geometric product are called the interior and exterior (wedge) product. Finally, we will require the dualization operator $x \mapsto {x}^{ * }$. It acts on basis elements by swapping "empty" and "full" dimensions, e.g. sending ${e}_{1} \mapsto {e}_{23}$.

In order to represent three-dimensional objects as well as arbitrary rotations and translations acting on them, we work with the projective geometric algebra ${\mathbb{G}}_{3,0,1}\left\lbrack {9,{18},{19}}\right\rbrack$. Here one adds a fourth homogeneous coordinate ${x}_{0}{e}_{0}$ to the $3\mathrm{D}$ vector space, yielding a ${2}^{4} = {16}$-dimensional geometric algebra. The metric of ${\mathbb{G}}_{3,0,1}$ is such that ${e}_{0}^{2} = 0$ and ${e}_{i}^{2} = 1$ for $i = 1,2,3$.

We can use ${\mathbb{G}}_{3,0,1}$ to represent transformations: a vector $u$ represents the reflection of other elements in the hyperplane orthogonal to $u$. Since any orthogonal transformation is equal to a sequence of reflections, this allows us to express any such transformation as a geometric product of (unit) vectors, $u = {u}_{1}\cdots {u}_{k}$. These form the Pin group, which turns out to be the double cover of $\mathrm{E}\left( 3\right)$. In order to apply elements of the Pin group to an arbitrary multivector $x$, one uses the sandwich product:

$$
{\rho }_{u}\left( x\right) = \left\{ \begin{array}{ll} {ux}{u}^{-1} & \text{ if }u\text{ is even } \\ u\widehat{x}{u}^{-1} & \text{ if }u\text{ is odd } \end{array}\right. \tag{1}
$$
| Object / operator | $1$ | ${e}_{0}$ | ${e}_{i}$ | ${e}_{0i}$ | ${e}_{ij}$ | ${e}_{0ij}$ | ${e}_{123}$ | ${e}_{0123}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Scalar $\lambda \in \mathbb{R}$ | $\lambda$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Plane w/ normal $n \in {\mathbb{R}}^{3}$, origin shift $d \in \mathbb{R}$ | 0 | $d$ | $n$ | 0 | 0 | 0 | 0 | 0 |
| Line w/ direction $n \in {\mathbb{R}}^{3}$, orthogonal shift $s \in {\mathbb{R}}^{3}$ | 0 | 0 | 0 | $s$ | $n$ | 0 | 0 | 0 |
| Point $p \in {\mathbb{R}}^{3}$ | 0 | 0 | 0 | 0 | 0 | $p$ | 1 | 0 |
| Pseudoscalar $\mu \in \mathbb{R}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | $\mu$ |
| Reflection through plane w/ normal $n \in {\mathbb{R}}^{3}$, origin shift $d \in \mathbb{R}$ | 0 | $d$ | $n$ | 0 | 0 | 0 | 0 | 0 |
| Translation $t \in {\mathbb{R}}^{3}$ | 1 | 0 | 0 | $\frac{1}{2}t$ | 0 | 0 | 0 | 0 |
| Rotation expressed as quaternion $q \in {\mathbb{R}}^{4}$ | ${q}_{0}$ | 0 | 0 | 0 | ${q}_{i}$ | 0 | 0 | 0 |
| Point reflection through $p \in {\mathbb{R}}^{3}$ | 0 | 0 | 0 | 0 | 0 | $p$ | 1 | 0 |

TABLE I: Embeddings of common geometric objects and transformations into the projective geometric algebra ${\mathbb{G}}_{3,0,1}$. The columns show the different multivector components with their basis elements, with $i,j \in \{ 1,2,3\}$, $j \neq i$, i.e. ${ij} \in \{ {12},{13},{23}\}$. For simplicity, we fix gauge ambiguities (the weight of the multivectors) and leave out signs (which depend on the ordering of indices in the basis elements).

Here $\widehat{x}$ is the grade involution, which flips the sign of odd-grade elements such as vectors and trivectors, while leaving even-grade elements unchanged.

Following Refs. $\left\lbrack {9,{18},{19}}\right\rbrack$, we represent planes with vectors, and require that the intersection of two geometric objects is given by the wedge product of their representations. Lines (the intersection of two planes) are thus represented as bivectors, and points (the intersection of three planes) as trivectors. This leads to a duality between objects and operators, where objects are represented like the transformations that leave them invariant. Table I provides a dictionary of these embeddings. It is easy to check that this representation is consistent with using the sandwich product for transformations.

We construct network layers that are equivariant with respect to $\mathrm{E}\left( 3\right)$, or equivalently its double cover $\operatorname{Pin}\left( {3,0,1}\right)$. A function $f : {\mathbb{G}}_{3,0,1} \rightarrow {\mathbb{G}}_{3,0,1}$ is $\operatorname{Pin}\left( {3,0,1}\right)$-equivariant with respect to the representation $\rho$ (or $\operatorname{Pin}\left( {3,0,1}\right)$-equivariant for short) if $f\left( {{\rho }_{u}\left( x\right) }\right) = {\rho }_{u}\left( {f\left( x\right) }\right)$ for any $u \in \operatorname{Pin}\left( {3,0,1}\right)$ and $x \in {\mathbb{G}}_{3,0,1}$.

§ III. THE GEOMETRIC ALGEBRA TRANSFORMER

§ A. ARCHITECTURE OVERVIEW

The Geometric Algebra Transformer (GATr) is designed based on the three principles outlined in the introduction: a strong inductive bias for geometric data through a representation based on geometric algebra, symmetry awareness through E(3) equivariance, and scalability and versatility through a transformer architecture.

We sketch GATr in Fig. 1. In the top row, we show the overall workflow. If necessary, raw inputs are first preprocessed into geometric types. The geometric objects are then embedded into multivectors of the geometric algebra ${\mathbb{G}}_{3,0,1}$, following the recipe described in Tbl. I.
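The embedding recipe of Tbl. I amounts to scattering a few coordinates into a 16-dimensional array. The following sketch shows three of the rows, assuming the basis ordering $[1, e_0, e_1, e_2, e_3, e_{01}, e_{02}, e_{03}, e_{12}, e_{13}, e_{23}, e_{012}, e_{013}, e_{023}, e_{123}, e_{0123}]$ and the sign conventions fixed as in the table:

```python
import numpy as np

def embed_point(p):
    """Point p in R^3 as a trivector (Tbl. I): e_{0ij} components plus e_{123} = 1."""
    x = np.zeros(16)
    x[11:14] = p
    x[14] = 1.0
    return x

def embed_translation(t):
    """Translation by t in R^3: scalar 1 plus e_{0i} bivector components t/2."""
    x = np.zeros(16)
    x[0] = 1.0
    x[5:8] = 0.5 * np.asarray(t)
    return x

def embed_plane(n, d):
    """Plane with normal n in R^3 and origin shift d, embedded as a vector."""
    x = np.zeros(16)
    x[1] = d          # e_0 component
    x[2:5] = n        # e_1, e_2, e_3 components
    return x
```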
The multivector-valued data are processed with a GATr network. We show this architecture in more detail in the bottom row of Fig. 1. GATr consists of $N$ transformer blocks, each consisting of an equivariant multivector LayerNorm, an equivariant multivector self-attention mechanism, a residual connection, another equivariant LayerNorm, an equivariant multivector MLP with geometric bilinear interactions, and another residual connection. The architecture is thus similar to a typical transformer [23] with pre-layer normalization [1, 24], but adapted to correctly handle multivector data and be $\mathrm{E}\left( 3\right)$ equivariant. We describe the individual layers below.

Finally, from the outputs of the GATr network we extract the target variables, again following the mapping given in Tbl. I.

§ B. GATR PRIMITIVES

a) Linear layers: We begin with linear layers between multivectors. In Appendix A, we show that the equivariance condition severely constrains them:

Proposition 1. Any linear map $\phi : {\mathbb{G}}_{d,0,1} \rightarrow {\mathbb{G}}_{d,0,1}$ that is equivariant to $\operatorname{Pin}\left( {d,0,1}\right)$ is of the form

$$
\phi \left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{{d + 1}}{w}_{k}\langle x{\rangle }_{k} + \mathop{\sum }\limits_{{k = 0}}^{d}{v}_{k}{e}_{0}\langle x{\rangle }_{k} \tag{2}
$$

for parameters $w \in {\mathbb{R}}^{d + 2},v \in {\mathbb{R}}^{d + 1}$. Here $\langle x{\rangle }_{k}$ is the blade projection of a multivector, which sets all non-grade-$k$ elements to zero.

Thus, $\mathrm{E}\left( 3\right)$-equivariant linear maps between ${\mathbb{G}}_{3,0,1}$ multivectors can be parameterized with nine coefficients, five of which weight the grade projections while the other four include a multiplication with the homogeneous basis vector ${e}_{0}$. We thus parameterize affine layers between multivector-valued arrays with Eq. (2), with learnable coefficients ${w}_{k}$ and ${v}_{k}$ for each combination of input channel and output channel. In addition, there is a learnable bias term for the scalar components of the outputs (biases for the other components are not equivariant).

Fig. 1: Overview of the GATr architecture. Boxes with solid lines are learnable components, those with dashed lines are fixed.

b) Geometric bilinears: Equivariant linear maps are not sufficient to build expressive networks, because these operations allow for only very limited grade mixing. For the network to be able to construct new geometric features from existing ones, such as the translation vector between two points, two additional primitives are essential.

The first is the geometric product $x,y \mapsto {xy}$, the fundamental bilinear operation of geometric algebra. It allows for substantial mixing between grades: for instance, the geometric product of vectors consists of scalar and bivector components. The geometric product is equivariant (Appendix A).

The second geometric primitive we use is derived from the so-called join${}^{1}$ $x,y \mapsto {\left( {x}^{ * } \land {y}^{ * }\right) }^{ * }$. This map may appear complicated, but it plays a simple role in our architecture: it is an equivariant map that involves the dual $x \mapsto {x}^{ * }$. Including the dual in an architecture is essential for expressivity: in ${\mathbb{G}}_{3,0,1}$, without any dualization it is impossible to represent even simple functions such as the Euclidean distance between two points [9]; we show this in Appendix A.
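Proposition 1 translates directly into code. A minimal numpy sketch for a single input and output channel, using the basis ordering from the embedding sketch above (with $e_0$ leftmost, all surviving signs are $+1$):

```python
import numpy as np

# Grade of each basis blade in the ordering [1, e0, e1, ..., e0123].
GRADES = np.array([0, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4])
# Left-multiplication by e0: blades without e0 gain it, blades containing e0
# vanish (since e0^2 = 0). Maps source index -> target index.
E0_MAP = {0: 1, 2: 5, 3: 6, 4: 7, 8: 11, 9: 12, 10: 13, 14: 15}

def equi_linear(x, w, v):
    """Equivariant linear map of Eq. (2): x is a (16,) multivector,
    w holds 5 grade-projection weights, v holds 4 weights for e0 <x>_k."""
    out = w[GRADES] * x                          # sum_k w_k <x>_k
    for src, dst in E0_MAP.items():
        out[dst] += v[GRADES[src]] * x[src]      # sum_k v_k e0 <x>_k
    return out
```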
While the dual itself is not $\operatorname{Pin}\left( {3,0,1}\right)$-equivariant (w.r.t. $\rho$), the join operation is equivariant to even (non-mirror) transformations. To make the join equivariant to mirrorings as well, we multiply its output with a pseudoscalar derived from the network inputs: $x,y,z \mapsto \operatorname{EquiJoin}\left( {x,y,z}\right) = {z}_{0123}{\left( {x}^{ * } \land {y}^{ * }\right) }^{ * }$, where ${z}_{0123} \in \mathbb{R}$ is the pseudoscalar component of a reference multivector $z$.

We define a geometric bilinear layer that combines the geometric product and the join of the two inputs as $\operatorname{Geometric}\left( {x,y;z}\right) = {\operatorname{Concatenate}}_{\text{channels}}\left( {{xy},\operatorname{EquiJoin}\left( {x,y;z}\right) }\right)$. In GATr, this layer is included in the MLP.

c) Nonlinearities and normalization: We use scalar-gated GELU nonlinearities [12], $\operatorname{GatedGELU}\left( x\right) = \operatorname{GELU}\left( {x}_{1}\right) x$, where ${x}_{1}$ is the scalar component of the multivector $x$. Moreover, we define an E(3)-equivariant LayerNorm operation for multivectors as $\operatorname{LayerNorm}\left( x\right) = x/\sqrt{{\mathbb{E}}_{c}\langle x,x\rangle }$, where the expectation goes over channels and we use the invariant inner product $\langle \cdot , \cdot \rangle$ of ${\mathbb{G}}_{3,0,1}$.

d) Attention: Given multivector-valued query, key, and value tensors, each consisting of ${n}_{i}$ items (or tokens) and ${n}_{c}$ channels (the key length), we define the $\mathrm{E}\left( 3\right)$-equivariant multivector attention as

$$
\operatorname{Attention}{\left( q,k,v\right) }_{{i}^{\prime }{c}^{\prime }} = \mathop{\sum }\limits_{i}{\operatorname{Softmax}}_{i}\left( \frac{\mathop{\sum }\limits_{c}\left\langle {{q}_{{i}^{\prime }c},{k}_{ic}}\right\rangle }{\sqrt{8{n}_{c}}}\right) {v}_{i{c}^{\prime }}. \tag{3}
$$

Here the indices $i,{i}^{\prime }$ label items, $c,{c}^{\prime }$ label channels, and $\langle \cdot , \cdot \rangle$ is the invariant inner product of the geometric algebra. Just as in the original transformer [23], we thus compute scalar attention weights with a scaled dot product; the difference is that we use the inner product of ${\mathbb{G}}_{3,0,1}$. We extend this attention mechanism to multi-head self-attention in the usual way.

§ C. EXTENSIONS

a) Auxiliary scalar representations: While multivectors are well-suited to model geometric data, many problems contain non-geometric information as well. Such scalar information may be high-dimensional, for instance in sinusoidal positional encoding schemes. Rather than embedding it into the scalar components of the multivectors, we add an auxiliary scalar representation to the hidden states of GATr. Each layer thus has both scalar and multivector inputs and outputs. They have the same batch dimension and item dimension, but may have a different number of channels.

This additional scalar information interacts with the multivector data in two ways. In linear layers, we allow the auxiliary scalars to mix with the scalar component of the multivectors. In the attention layer, we compute attention weights both from the multivectors, as given in Eq. (3), and from the auxiliary scalars, using a regular scaled dot-product attention. The two attention maps are summed before computing the softmax, and the normalizing factor is adapted. In all other layers, the scalar information is processed separately from the multivector information, using the unrestricted form of the corresponding multivector map. For instance, nonlinearities transform multivectors with equivariant gated GELUs and auxiliary scalars with regular GELU functions.
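The primitives of this section are compact enough to sketch in numpy. In ${\mathbb{G}}_{3,0,1}$, the invariant inner product only receives contributions from blades without $e_0$; with the basis ordering used above, all surviving signs are $+1$. The attention below is single-head and unbatched for clarity:

```python
import numpy as np

INNER_MASK = np.zeros(16)
INNER_MASK[[0, 2, 3, 4, 8, 9, 10, 14]] = 1.0   # blades not containing e0

def inner(x, y):
    """Invariant inner product of G(3,0,1): degenerate e0 directions drop out."""
    return np.sum(INNER_MASK * x * y, axis=-1)

def gated_gelu(x):
    """GatedGELU(x) = GELU(x_1) * x, gating on the scalar component (tanh approx.)."""
    s = x[..., 0]
    gelu = 0.5 * s * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (s + 0.044715 * s ** 3)))
    return gelu[..., None] * x

def mv_layernorm(x, eps=1e-6):
    """LayerNorm(x) = x / sqrt(E_c <x, x>); x has shape (items, channels, 16)."""
    norm = np.sqrt(np.mean(inner(x, x), axis=-1, keepdims=True) + eps)
    return x / norm[..., None]

def mv_attention(q, k, v):
    """Eq. (3), single head: q, k, v have shape (items, channels, 16)."""
    n_c = q.shape[1]
    logits = np.einsum("icx,jcx,x->ij", q, k, INNER_MASK) / np.sqrt(8 * n_c)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return np.einsum("ij,jcx->icx", weights, v)
```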
${}^{1}$ Technically, the join has an anti-dual, not the dual, in the output. We leave this detail out for notational simplicity.

| Method | Reward |
| --- | --- |
| GATr-Diffuser (ours) | ${74.8} \pm {1.7}$ |
| Transformer-Diffuser | ${69.8} \pm {1.9}$ |
| Diffuser [15] (reproduced) | ${57.7} \pm {1.8}$ |
| Diffuser [15] | ${58.7} \pm {2.5}$ |
| EDGI [5] | ${62.0} \pm {2.1}$ |
| CQL [17] | 24.4 |
| BCQ [11] | 0.0 |

TABLE II: Diffusion-based robotic planning. We show the normalized cumulative rewards achieved on a robotic block stacking task [15], where 100 is optimal and means that each block stacking task is completed successfully, while 0 corresponds to a failure to stack any blocks. We show the mean and standard error over at least 100 evaluation episodes. The top three results were computed in the GATr code base, the bottom four are taken from the literature [5, 15].

b) Rotary positional embeddings: GATr assumes the data can be described as a set of items (or tokens). If these items are distinguishable and form a sequence, we encode their position using rotary position embeddings [22] in the auxiliary scalar variables.

c) Axial attention over objects and time: The architecture is flexible about the structure of the data. In some use cases, there is a single dimension along which objects are organized, for instance when describing a static scene or the time evolution of a single object. But GATr also supports the organization of a problem along multiple axes, for example with one dimension describing objects and another describing time steps. In this case, we follow an axial transformer layout [13], alternating between transformer blocks that attend over different dimensions. (The not-attended dimensions in each block are treated like a batch dimension.)

§ IV. ROBOTIC PLANNING THROUGH INVARIANT DIFFUSION

In Appendix C, we demonstrate GATr on a synthetic $n$-body regression problem. We find that it outperforms non-geometric baselines and the $\mathrm{E}\left( 3\right)$-equivariant SEGNN in terms of sample efficiency and generalization.

In this section of the main paper, we restrict ourselves to a robotics experiment. We show how GATr defines an $\mathrm{E}\left( 3\right)$-invariant diffusion model, that it can be used for model-based reinforcement learning and planning, and that this combination is well-suited to solve robotics problems.

We follow Janner et al. [15], who propose to treat learning a world model and planning within that model as a unified generative modeling problem. After training a diffusion model [20] on offline trajectories, one can use it in a planning loop, sampling from it conditional on the current state, desired future states, or to maximize a given reward, as needed.

We embed a GATr model in this algorithm and call the combination GATr-Diffuser. GATr is equivariant with respect to $\mathrm{E}\left( 3\right)$ and the object permutation group ${\mathrm{S}}_{n}$. When used together with a base density that is $\mathrm{E}\left( 3\right) \times {\mathrm{S}}_{n}$-invariant, the diffusion model is also $\mathrm{E}\left( 3\right) \times {\mathrm{S}}_{n}$-invariant [2, 16]. Often, a particular task requires breaking this symmetry: imagine, for instance, that a particular object needs to be moved to a particular location. The Diffuser approach is an excellent match for such situations, as conditioning on the current state, future state, or a reward model as proposed by Janner et al. [15] can softly break the symmetry group as desired [5].
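The diffusion backbones here are trained with the simplified denoising loss of Ho et al. (cf. Appendix D). A generic PyTorch sketch, in which `model`, the noise schedule, and all names are our assumptions rather than the actual GATr-Diffuser code:

```python
import torch

def simplified_diffusion_loss(model, x0, alphas_bar):
    """Simplified objective of Ho et al.: predict the injected noise.

    model(x_t, t) is a noise predictor (e.g., a GATr network);
    x0: clean trajectories, shape (B, ...); alphas_bar: (T,) cumulative schedule.
    """
    B = x0.shape[0]
    t = torch.randint(0, alphas_bar.shape[0], (B,), device=x0.device)
    ab = alphas_bar[t].view(B, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)                      # univariate Gaussian base density
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps  # forward diffusion sample
    return ((model(x_t, t) - eps) ** 2).mean()      # L2 variant used for GATr-Diffuser
```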
Fig. 2: Diffusion-based robotic planning. We show normalized rewards (higher is better) as in Tbl. II as a function of training dataset size. GATr is more successful at block stacking and more sample-efficient than the baselines, including the original Diffuser model [15] and our Transformer-based modification of it. In grey, we show results reported in the literature [5, 15].

GATr-Diffuser is demonstrated on the problem of a Kuka robotic gripper stacking blocks, using the "unconditional" environment introduced by Janner et al. [15]. We train a GATr-Diffuser model on the offline trajectory dataset published with that paper. To facilitate a geometric interpretation, we parameterize the data in terms of geometric quantities like object positions and orientations. In particular, we use the position and pose of the robotic endeffector as features and map to joint angles with an inverse kinematics model. We then test GATr-Diffuser on its ability to stack four blocks on top of each other. We compare our GATr-Diffuser model to a reproduction of the original Diffuser model (based on the published code, but using our data parameterization) and a new transformer backbone for the Diffuser model. In addition, we show the published results of Diffuser [15], the equivariant EDGI [5], and the offline RL algorithms CQL [17] and BCQ [11] as published in Ref. [15]. The problem and hyperparameters are described in detail in Appendix D.

As shown in Tbl. II and Fig. 2, GATr-Diffuser solves the block-stacking problem better than all baselines. It is also clearly more sample-efficient, matching the performance of a Diffuser model trained on the full dataset even when training only on $1\%$ of the trajectories. The fact that GATr-Diffuser also outperforms the E(3)-equivariant EDGI model [5] is evidence that equivariance alone is not the key to its success, hinting that the geometric algebra provides a useful inductive bias. \ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/N00uQFLlvHC/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..bf2aab0d55401abc13c4ffb0e48ca2ce421dfde1 --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/N00uQFLlvHC/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,203 @@

# Spatial Generalization of Visual Imitation Learning with Position-Invariant Regularization

Anonymous Authors

Abstract-How visual imitation learning models can generalize to novel, unseen visual observations is a highly challenging problem. Such a generalization ability is crucial for their real-world applications. Since this generalization problem has many different aspects, we focus on one case called spatial generalization, which refers to generalization to unseen setups of object (entity) locations in a task, such as a novel setup of object locations in a robotic manipulation problem.
In this case, previous works observe that visual imitation learning models overfit to absolute information (e.g., coordinates) rather than the relational information between objects, which is more important for decision making. As a result, the models perform poorly in novel object location setups. Nevertheless, it has so far remained unclear how to solve this problem effectively. Our insight into this problem is to explicitly remove the absolute information from the features learned by imitation learning models so that the models can use robust, relational information to make decisions. To this end, we propose a novel, position-invariant regularizer called POINT. The proposed regularizer penalizes the imitation learning model when its features contain absolute positional information about objects. Various experiments demonstrate the effectiveness of our method.

## I. INTRODUCTION

Imitation learning is a class of algorithms that enable robots to acquire behaviors from human demonstrations [8]. The recent advance in deep learning has boosted the development of visual imitation learning and supported applications like autonomous driving, robotic manipulation, and human-robot interaction [8].

In spite of this success, visual imitation learning methods still face many practical challenges. One major challenge is their ability to generalize to novel, unseen visual observations, a situation that is very common when trained models are deployed [15, 11]. In the literature, this generalization problem is also known as the robustness problem. The problem covers many different aspects. For example, we can identify two basic generalization capabilities: observational generalization and spatial generalization (Figure 1). Observational generalization refers to generalization to novel visual textures; changes in background color, object texture, or ambient light in a robotic manipulation task are examples. Such visual changes do not affect the underlying task structure (e.g., the positions of objects and targets) and only require the robot to reason correctly about semantic meanings. In contrast, spatial generalization refers to generalization to unseen setups of object (entity) locations in a task, which instead requires physical common sense about space and objects. Consider the task of letting a warehouse robot move a box to some target region. If we set the initial position of the box to a place that is not covered by the demonstration dataset, then the imitation learning method must be able to perform spatial generalization in order to succeed. In reality, the generalization challenge usually emerges as a combination of different generalization capabilities. In this paper, we focus on the study of spatial generalization.

![019640f8-41cd-718c-9e34-a418c0ceba1c_0_919_463_731_199_0.jpg](images/019640f8-41cd-718c-9e34-a418c0ceba1c_0_919_463_731_199_0.jpg)

Fig. 1: Left and Middle: Two kinds of visual generalization. The examples are based on the MAGICAL benchmark provided by [15], in which a robot is required to relocate a box to a target region. The left figure shows an example of observational generalization, in which the only change during the testing phase is the visual texture of objects. The middle figure shows an example of spatial generalization: the object setup in the testing phase is unseen.
Right: To achieve spatial generalization, we suggest that absolute information should be removed from the feature while relational information should be kept. We propose a novel, position-invariant regularizer for this purpose.

For better spatial generalization, visual imitation learning models should be able to obtain knowledge about objects and their spatial relations through proper inductive biases. Prior work finds that vanilla deep visual imitation learning models strongly overfit to the absolute positions of objects [15], which suggests that they do not extract relational information about objects to make decisions the way humans do [4]. Based on this observation, our main insight into this problem is to explicitly remove the absolute, positional information from the features learned by visual imitation learning models. Note that this does not mean that the decision-making process is independent of absolute information. Rather, we expect the model to extract relational information (e.g., distance, direction) from the absolute information in order to make robust decisions. To this end, we propose a novel position-invariant regularizer called POINT. This regularizer penalizes the imitation learning model when it finds that the learned feature is highly correlated with absolute, positional information. As a result, the imitation learning model has to discover more robust relational features, and it generalizes better in unseen scenarios.

## II. Preliminaries

a) Notations: We model the sequential decision making problem as a Markov Decision Process $\mathcal{M} = \left( {\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T}}\right)$. $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{R}$ is the reward function, and $\mathcal{T}$ is the transition dynamics. The agent's state at timestep $t$ is ${s}_{t} \in \mathcal{S}$. The agent takes action ${a}_{t}$ and receives reward ${r}_{t} = \mathcal{R}\left( {{s}_{t},{a}_{t}}\right)$; its state at timestep $t + 1$ is then ${s}_{t + 1} \sim \mathcal{T}\left( {{s}_{t},{a}_{t}}\right)$. The objective of the agent is to maximize the return $\mathop{\sum }\limits_{{t = 0}}^{T}{\gamma }^{t}{r}_{t}$, where $\gamma \in (0,1\rbrack$ is a discount factor. In the imitation learning problem studied here, the agent has no access to $\mathcal{R}$ and $\mathcal{T}$, but it is provided with a fixed expert demonstration dataset $\mathcal{D} = \left\{ {\tau }_{i}\right\}$, where each ${\tau }_{i} = \left( {{s}_{0}^{E},{a}_{0}^{E},{s}_{1}^{E},{a}_{1}^{E},\ldots ,{s}_{T}^{E},{a}_{T}^{E}}\right)$ is an expert trajectory that achieves high performance (return) in $\mathcal{M}$. The agent should therefore learn the behavior by leveraging the given demonstration dataset.

b) Behavioral Cloning: One classical imitation learning algorithm is Behavioral Cloning (BC). BC turns the imitation learning problem into a supervised learning problem: it fits the expert's action ${a}_{i}$ given the observation ${s}_{i}$. For the visual imitation learning problem, the BC model can be divided into two consecutive parts: a vision encoder ${f}_{\theta }$ (usually a convolutional neural network) and a policy head $\pi$. The encoder ${f}_{\theta }$ first encodes ${s}_{i}$ into the feature ${f}_{i} = {f}_{\theta }\left( {s}_{i}\right)$, and $\pi$ then uses it to predict the expert's action. The BC algorithm minimizes the following negative log-likelihood objective:

$$
{\mathcal{L}}_{BC} = {\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \sim \mathcal{D}}\left\lbrack {-\log \pi \left( {{a}_{i} \mid {f}_{\theta }\left( {s}_{i}\right) }\right) }\right\rbrack . \tag{1}
$$

Due to its simplicity, BC is widely used in visual imitation learning. Therefore, we study the spatial generalization of BC in this paper.
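Equation (1) corresponds to a standard supervised training step. A minimal PyTorch sketch, assuming a discrete action space (the encoder, policy head, and names are placeholders):

```python
import torch.nn.functional as F

def bc_loss(encoder, policy_head, states, expert_actions):
    """Negative log-likelihood BC objective of Eq. (1)."""
    feats = encoder(states)              # f_i = f_theta(s_i)
    logits = policy_head(feats)          # unnormalized log pi(a | f_i)
    return F.cross_entropy(logits, expert_actions)  # = E[-log pi(a_i | f_i)]
```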
## III. METHOD

## A. Formulation and Challenges

For tasks that involve spatial generalization, there usually exist multiple objects in the observed states, such as the agent, the target object, and the goal. For the state ${s}_{i}$, we denote each of these objects as ${o}_{i}^{j}$ and their positions as $\left( {{x}_{i}^{j},{y}_{i}^{j}}\right)$. Our idea can then be formulated as the minimization of each $I\left( {\left( {{\mathbf{x}}^{j},{\mathbf{y}}^{j}}\right) ,\mathbf{f}}\right)$, where $I$ is the mutual information. Note that we use the notation ${\mathbf{x}}^{j},{\mathbf{y}}^{j},\mathbf{f}$ for the random variables corresponding to ${x}_{i}^{j},{y}_{i}^{j},{f}_{i}$. However, this formulation leads to many practical challenges. First, since each $\left( {{x}_{i}^{j},{y}_{i}^{j}}\right)$ is not provided directly by ${s}_{i}$ and must be inferred, we have to either train object key-point detectors to detect the underlying objects in the training set, or annotate the objects ourselves; both of these approaches can be difficult and tedious in practice. Second, even with ideal key-point detectors, we have to deal with a hard optimization problem in the summation form $\mathop{\sum }\limits_{j}I\left( {\left( {{\mathbf{x}}^{j},{\mathbf{y}}^{j}}\right) ,\mathbf{f}}\right)$, which can be intractable when there are many objects in the observed state.

Fortunately, we find that previous work on the interpretation of deep learning models, such as GradCAM, provides useful tools to handle these challenges and reduces the problem to a much simpler form. We discuss our observations as follows.

![019640f8-41cd-718c-9e34-a418c0ceba1c_1_918_159_735_361_0.jpg](images/019640f8-41cd-718c-9e34-a418c0ceba1c_1_918_159_735_361_0.jpg)

Fig. 2: Overview of our method. The blue branch above is the common imitation learning (BC) pipeline. Our proposed regularizer is shown in the light pink box at the bottom. The regularizer first uses the GradCAM++ algorithm to find the important areas on which the latest BC model bases its decisions. It then samples coordinates from the discovered important areas and trains a discriminator network $D$ to decide whether these sampled coordinates are paired with the feature ${f}_{i}$. The BC model (encoder ${f}_{\theta }$) is then trained to fool the discriminator $D$. When the encoder ${f}_{\theta }$ is able to fool $D$, the absolute positional information has been removed from the feature as desired.

## B. Problem Reduction with GradCAM

GradCAM [13] is an interpretation method that can tell which part of the image is crucial in the decision process of a deep learning model. Given a BC model $\left( {{f}_{\theta },\pi }\right)$ and input $s$, GradCAM outputs an importance heatmap of the same resolution as the input $s$. The heatmap indicates the importance of each pixel when we use this BC model for prediction. One nice property of this generated heatmap is that it is smooth and usually coincides with the meaningful objects in the input $s$.
Therefore, we can consider GradCAM as a rough object detector here.

We propose to sample ${p}_{i} = \left( {{x}_{i},{y}_{i}}\right)$ from the generated heatmap, and then minimize $I\left( {\mathbf{p},\mathbf{f}}\right)$. We find that this new objective can act as a proxy for the original objective in practice. Concretely, if ${p}_{i}$ is always far from a specific object such as ${o}^{k}$, then ${o}^{k}$ is irrelevant to the decision process of the current model; in this case, we conjecture that $I\left( {\left( {{\mathbf{x}}^{k},{\mathbf{y}}^{k}}\right) ,\mathbf{f}}\right)$ is already low enough to meet our requirement. On the contrary, if ${p}_{i}$ always coincides with a certain object such as ${o}^{l}$, then we effectively minimize $I\left( {\mathbf{p},\mathbf{f}}\right) \approx I\left( {\left( {{\mathbf{x}}^{l},{\mathbf{y}}^{l}}\right) ,\mathbf{f}}\right)$, as intended.

## C. Loss Functions

What remains is to reduce the mutual information $I\left( {\mathbf{p},\mathbf{f}}\right)$. However, we find that jointly estimating and minimizing the mutual information in our vision-based tasks is hard in practice. Since our ultimate goal is to remove the information about $\mathbf{p}$ from $\mathbf{f}$, we instead propose an adversarial training framework.

Specifically, we introduce a discriminator network $D$ that plays the following two-player min-max game with the BC model:

$$
\mathop{\min }\limits_{{f}_{\theta }}\mathop{\max }\limits_{D}{\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \sim \mathcal{D},\left( {{s}_{j},{a}_{j}}\right) \sim \mathcal{D}}\left\lbrack {\log D\left( {{p}_{i},{f}_{i}}\right) + \log \left( {1 - D\left( {{p}_{j},{f}_{i}}\right) }\right) }\right\rbrack . \tag{2}
$$

In this min-max game, the discriminator $D$ tries to tell the joint distribution of $\mathbf{p}$ and $\mathbf{f}$, denoted ${\mathbb{P}}_{\mathbf{p},\mathbf{f}}$, from the product of their marginal distributions ${\mathbb{P}}_{\mathbf{p} \otimes \mathbf{f}}$. Meanwhile, the BC model tries to fool the discriminator by removing the information about $\mathbf{p}$ from $\mathbf{f}$. Applying the convergence theory of the generative adversarial network (GAN) [6], we know that when ${f}_{\theta }$ is a global minimizer of Equation 2, ${\mathbb{P}}_{\mathbf{p},\mathbf{f}} = {\mathbb{P}}_{\mathbf{p} \otimes \mathbf{f}}$, which implies $I\left( {\mathbf{p},\mathbf{f}}\right) = 0$. This min-max game therefore fulfills our requirement.

In practice, we train $D$ to minimize the following binary classification loss function:

$$
{\mathcal{L}}_{D} = - {\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \sim \mathcal{D},\left( {{s}_{j},{a}_{j}}\right) \sim \mathcal{D}}\left\lbrack {\log D\left( {{p}_{i},{f}_{i}}\right) + \log \left( {1 - D\left( {{p}_{j},{f}_{i}}\right) }\right) }\right\rbrack . \tag{4}
$$

For the encoder ${f}_{\theta }$, however, we find that using $- {\mathcal{L}}_{D}$ as the training loss results in instabilities. We assume this is because the ${f}_{i}$ term is present in both terms of Equation 2, which differs from the original GAN objective. Therefore, we propose to use the following loss function, which we find works well empirically:

$$
{\mathcal{L}}_{\text{reg}} = {\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \sim \mathcal{D}}\left\lbrack {\log D\left( {{p}_{i},{f}_{i}}\right) }\right\rbrack . \tag{6}
$$

Combining it with the BC loss, the loss function to train ${f}_{\theta }$ and $\pi$ is then

$$
\mathcal{L} = {\mathcal{L}}_{BC} + \lambda {\mathcal{L}}_{\text{reg}} = {\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \sim \mathcal{D}}\left\lbrack {-\log \pi \left( {{a}_{i} \mid {f}_{\theta }\left( {s}_{i}\right) }\right) + \lambda \log D\left( {{p}_{i},{f}_{i}}\right) }\right\rbrack . \tag{7}
$$
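Putting Eqs. (4)-(7) together, one training step alternates a discriminator update with an update of the BC model. The sketch below is a simplified rendition of this procedure: the GradCAM++-based coordinate sampler is replaced by a uniform placeholder, the discriminator is assumed to output probabilities, and a discrete action space is assumed.

```python
import torch
import torch.nn.functional as F

def sample_coords(states):
    """Placeholder for the GradCAM++ sampler of Sec. III-B: uniform image
    coordinates, so that the sketch runs end to end."""
    return torch.rand(states.shape[0], 2, device=states.device)

def point_training_step(encoder, policy, disc, opt_model, opt_disc,
                        states, actions, lam=0.1, eps=1e-8):
    """One POINT update; opt_model holds the encoder and policy parameters."""
    feats = encoder(states)
    p = sample_coords(states)                      # p_i, paired with f_i (joint)
    p_shuffled = p[torch.randperm(p.shape[0])]     # p_j, unpaired (marginals)

    # Discriminator: joint vs. product of marginals, Eq. (4).
    d_loss = -(torch.log(disc(p, feats.detach()) + eps)
               + torch.log(1.0 - disc(p_shuffled, feats.detach()) + eps)).mean()
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # BC model: BC loss plus the regularizer of Eq. (6), combined as in Eq. (7).
    bc = F.cross_entropy(policy(feats), actions)
    reg = torch.log(disc(p, feats) + eps).mean()   # fool the discriminator
    loss = bc + lam * reg
    opt_model.zero_grad(); loss.backward(); opt_model.step()
    return loss.detach()
```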
## IV. EXPERIMENTS

In the experiments, we first test the performance of our method on the MAGICAL benchmark. We study generalization according to the IID protocol [9]: the training and testing task distributions are the same, though the test instances are unseen. Then, we provide an analysis of our algorithm through both qualitative and quantitative studies. Finally, we extend our method to a real robot manipulation problem.

## A. Task Setup

a) MAGICAL: The MAGICAL benchmark simulates a $2\mathrm{D}$ robotic manipulation problem in a warehouse room. The tasks provided by MAGICAL involve complex interactions between the agent and multiple objects, which require effective spatial generalization. In the experiments, we use a variant of its MatchRegion task, in which a robot is required to go across a square room to move some objects to a target region specified by a dashed rectangle. We set up several instances of the MatchRegion task: MatchRegion-Target-1, MatchRegion-Target-1-Distract, MatchRegion-Target-2, MatchRegion-Target-2-Distract, and MatchRegion-Target-3. We provide an illustration of these tasks in Figure 3. For each MatchRegion-Target-$X$ task (MR-T$X$), there is no distractor object in the room, so the robot only needs to move all $X$ objects into the target location. For the MatchRegion-Target-$X$-Distract task (MR-T$X$D), however, there is an additional distractor object in the room, which is also randomly placed during testing. The existence of this distractor object not only increases the risk of learning spurious features but also adds to the difficulty of learning secure motions. As we will discuss later, even one distractor object can lead to a significant increase in generalization difficulty. The study of more distractors is carried out in the analysis part.

![019640f8-41cd-718c-9e34-a418c0ceba1c_2_922_154_728_168_0.jpg](images/019640f8-41cd-718c-9e34-a418c0ceba1c_2_922_154_728_168_0.jpg)

Fig. 3: The MAGICAL tasks used in our experiments. The grey robot is required to move the target objects (marked with red dots) to the target region. The red curve shows a possible plan to solve the task (interaction details like releasing the box are omitted). The long-horizon nature of this task brings additional challenges aside from the spatial generalization problem.

For each of the tasks above, we collected the human demonstration dataset ourselves. For each demonstration trajectory, we randomly set the initial positions of the objects, the target region, and the robot. For MR-T1, we collect 50 trajectories; for each of the other tasks, we collect 100 trajectories. Collecting all these trajectories takes 2 hours. We also study the effect of using different numbers of trajectories in the analysis part.

## B. Baselines

For the vanilla BC policy, we train an IMPALA [5] policy, whose encoder is a residual convolutional neural network.
We also tried a vision transformer [3] and a relational network [12], which have relational biases, but we find that they perform worse than IMPALA and do not report their results here. We implement the following baselines for comparison: Dropout [14], Crop [17, 10], Cutout [2], MixReg [16], OREO [11], and CLOP [1].

## C. Results

a) MAGICAL: The results on MAGICAL are shown in Table I. Performance is measured by the success rate of the trained policy, i.e., the number of target objects that are successfully transferred to the target region divided by the total number of target objects. We observe that our method achieves state-of-the-art results and outperforms the baselines by a large margin: concretely, it improves the success rate by about ${30}\%$. Besides, we find that most of the previous regularization methods do increase the success rate over the vanilla version, and their results are similar to each other, which suggests that they address some common issues in the generalization problem. However, their performance gap from our method suggests that we tackle a different issue here, namely overfitting to absolute positions.

TABLE I: Evaluation results on the MAGICAL benchmark. We show the average score over three random seeds. Our method achieves state-of-the-art results compared with the baselines.
| Task | Vanilla | Dropout | Crop | Cutout | MixReg | OREO | CLOP | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MR-T1 | 0.09 ± 0.00 | 0.28 ± 0.04 | 0.42 ± 0.02 | 0.19 ± 0.03 | 0.26 ± 0.02 | 0.21 ± 0.03 | 0.16 ± 0.06 | 0.63 ± 0.05 |
| MR-T1D | 0.19 ± 0.06 | ± 0.11 | ± 0.03 | ± 0.03 | ± 0.10 | ± 0.06 | 0.21 ± 0.02 | 0.60 ± 0.08 |
| MR-T2 | 0.25 ± 0.03 | 0.48 ± 0.03 | 0.46 ± 0.04 | 0.44 ± 0.05 | 0.43 ± 0.05 | 0.37 ± 0.05 | 0.32 ± 0.07 | 0.75 ± 0.07 |
| MR-T2D | 0.27 ± 0.06 | 0.35 ± 0.03 | 0.38 ± 0.04 | 0.32 ± 0.03 | 0.33 ± 0.03 | 0.27 ± 0.02 | 0.23 ± 0.04 | 0.70 ± 0.04 |
| MR-T3 | 0.23 ± 0.02 | 0.51 ± 0.03 | 0.47 ± 0.05 | 0.32 ± 0.04 | 0.48 ± 0.05 | 0.42 ± 0.07 | 0.35 ± 0.03 | 0.66 |
![019640f8-41cd-718c-9e34-a418c0ceba1c_3_141_606_741_384_0.jpg](images/019640f8-41cd-718c-9e34-a418c0ceba1c_3_141_606_741_384_0.jpg)

Fig. 4: The GradCAM++ importance heatmap of the dropout model (left) and our model (right) on the MatchRegion-Target-1-Distract task. The red region indicates the most important region, while dark blue indicates the least important region. The results suggest that the dropout model attends to the red distractor and is not robust. In contrast, our model attends to the correct objects.

## D. Analysis

a) Qualitative Results: To understand whether our method learns more robust features, we use GradCAM++ to visualize the learned model. For simplicity, we show the result on the MatchRegion-Target-1-Distract task. We compare the result of our model to the model trained with dropout (Figure 4). We notice that the dropout model tends to focus on the red distractor object rather than the correct target object. In contrast, our model is able to focus on the correct objects. Even when the distance between the agent and the object is large, it can attend to the agent and the object simultaneously. The visualization results suggest that our regularizer indeed leads to robust relational features even though the IMPALA vision network does not have an explicit relational inductive bias. This accounts for the improvement in generalization.

b) Unseen Number of Distractors: A robust model should base its decisions on robust relational information. As a result, for the MAGICAL tasks, it should be able to ignore the distractor and generalize to an unseen number of distractors. Therefore, we test whether our model trained on MR-T1D (where only one distractor is present) can generalize to MR-T1D with unseen numbers of distractors (e.g., 0, 2, 3). We also compare the results with the previous models. The result is shown in Figure 5. We find that our model is able to generalize to the cases of 0, 2, and 3 distractors, though the performance is lower than in the training scenario with 1 distractor. In contrast, prior models, such as the dropout model, fail totally in these unseen cases. This also echoes our qualitative analysis results.

![019640f8-41cd-718c-9e34-a418c0ceba1c_3_908_604_346_249_0.jpg](images/019640f8-41cd-718c-9e34-a418c0ceba1c_3_908_604_346_249_0.jpg)

Fig. 5: The generalization performance to different numbers of distractors on MR-T1D.

![019640f8-41cd-718c-9e34-a418c0ceba1c_3_1310_604_350_249_0.jpg](images/019640f8-41cd-718c-9e34-a418c0ceba1c_3_1310_604_350_249_0.jpg)

Fig. 6: The variation of performance on MAGICAL using datasets of different sizes.

c) Number of Demonstrations: We also study whether the proposed method works when the amount of expert demonstrations is limited. For this purpose, we test our method on MAGICAL with ${25}\%$, ${50}\%$, and ${75}\%$ of the expert demonstrations. We show the averaged performance in Figure 6. We find that our method achieves consistent improvements, though the performance decreases as the dataset becomes smaller. This result suggests that we still require a certain amount of diverse data to achieve spatial generalization.

## V. CONCLUSION

We studied the spatial generalization problem of imitation learning. We proposed POINT, a novel position-invariant regularizer that removes absolute positional information from the learned features to tackle this problem.
Through experiments on the MAGICAL benchmark as well as a robot manipulation system, we confirmed that previous methods do overfit to the absolute position and showed that our proposed approach can effectively help generalization. + +## REFERENCES + +[1] David Bertoin and Emmanuel Rachelson. Local feature swapping for generalization in reinforcement learning. In International Conference on Learning Representations (ICLR), 2022. + +[2] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017. + +[3] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representation (ICLR), 2021. + +[4] Leonidas AA Doumas, Guillermo Puebla, Andrea E Martin, and John E Hummel. A theory of relation learning and cross-domain generalization. Psychological review, 2022. + +[5] Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In International Conference on Machine Learning (ICML), 2018. + +[6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020. + +[7] Kyle Hsu, Moo Jin Kim, Rafael Rafailov, Jiajun Wu, and Chelsea Finn. Vision-based manipulators need to also see from their hands. In International Conference on Learning Representation (ICLR), 2022. + +[8] Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50 (2):1-35, 2017. + +[9] Robert Kirk, Amy Zhang, Edward Grefenstette, and Tim Rocktäschel. A survey of generalisation in deep reinforcement learning. arXiv preprint arXiv:2111.09794, 2021. + +[10] Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, and Aravind Srinivas. Reinforcement learning with augmented data. In Neural Information Processing Systems (NeurIPS), 2020. + +[11] Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, and Tie-Yan Liu. Object-aware regularization for addressing causal confusion in imitation learning. In Neural Information Processing Systems (NeurIPS), 2021. + +[12] Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In Neural Information Processing Systems (NeurIPS), 2017. + +[13] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In International Conference on Computer Vision (ICCV), pages 618-626, 2017. + +[14] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research (JMLR), 15(1): 1929-1958, 2014. + +[15] Sam Toyer, Rohin Shah, Andrew Critch, and Stuart Russell. The magical benchmark for robust imitation. In Neural Information Processing Systems (NeurIPS), 2020. 
[16] Kaixin Wang, Bingyi Kang, Jie Shao, and Jiashi Feng. Improving generalization in reinforcement learning with mixture regularization. In Neural Information Processing Systems (NeurIPS), 2020.

[17] Denis Yarats, Ilya Kostrikov, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In International Conference on Learning Representations (ICLR), 2020.

## VI. APPENDIX

## A. Real-World Experiments

We also test whether our method scales to a real-world pick-and-place manipulation problem. We extend MR-T1D to a UR10 robot arm with a Robotiq parallel-jaw gripper (Figure 7). As suggested by [7], we use a gripper camera and a workspace camera to provide observations. For the BC model, we use two separate IMPALA encoders to process the two camera images, concatenate their output features along with the $z$-coordinate of the gripper, and feed them into an MLP (see the sketch after Table II). We use the proposed regularizer to regularize the workspace branch. We collect 75 human demonstrations for training. We compare our method to dropout with different numbers of distractor objects. The result is shown in Table II. Our method also achieves a large improvement on this problem. Qualitative results are shown in Appendix Section VI-B.

![019640f8-41cd-718c-9e34-a418c0ceba1c_5_153_790_713_538_0.jpg](images/019640f8-41cd-718c-9e34-a418c0ceba1c_5_153_790_713_538_0.jpg)

Fig. 7: The setup of the real-world robot manipulation experiments.

TABLE II: The success rate of the real-world experiments. Our method is also effective here. Each test consists of 20 trials.
| Method | Dropout | Ours |
| --- | --- | --- |
| 0 Dis. Obj | 35% | 55% |
| 1 Dis. Obj | 35% | 60% |
| 2 Dis. Obj | 20% | 50% |
| 3 Dis. Obj | 10% | 45% |
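The two-branch BC model of Sec. VI-A might look like the following sketch; the encoder factory, feature size, and head architecture are our assumptions:

```python
import torch
import torch.nn as nn

class TwoCameraBCModel(nn.Module):
    """Two IMPALA-style encoders, concatenated with the gripper z-coordinate."""

    def __init__(self, make_encoder, feat_dim, action_dim):
        super().__init__()
        self.gripper_enc = make_encoder()
        self.workspace_enc = make_encoder()   # POINT regularizes this branch
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, gripper_img, workspace_img, gripper_z):
        f = torch.cat([self.gripper_enc(gripper_img),
                       self.workspace_enc(workspace_img),
                       gripper_z.unsqueeze(-1)], dim=-1)
        return self.head(f)
```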
## B. Qualitative Results of the Manipulation Problem

In this section, we provide qualitative results for the real-world manipulation problem. Recall that in this task, the robot is required to move a red cube to a target location specified by a green area. We show the importance heatmaps of the dropout model (Figure 8) and our model (Figure 9). As shown in the figures, the dropout model tends to attend more to the round distractor object than our model does. However,

![019640f8-41cd-718c-9e34-a418c0ceba1c_5_912_149_744_475_0.jpg](images/019640f8-41cd-718c-9e34-a418c0ceba1c_5_912_149_744_475_0.jpg)

Fig. 8: The GradCAM++ importance heatmap of the dropout model in the real-world manipulation problem. The dropout model tends to attend to the round distractor object.

![019640f8-41cd-718c-9e34-a418c0ceba1c_5_914_766_745_475_0.jpg](images/019640f8-41cd-718c-9e34-a418c0ceba1c_5_914_766_745_475_0.jpg)

Fig. 9: The GradCAM++ importance heatmap of our model in the real-world manipulation problem. Our model attends less to the round distractor object. However, due to the visual complexity, we find that our model sometimes attends to shadows in the background.

due to the visual complexity, we find that our model sometimes attends to shadows in the background. \ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/N00uQFLlvHC/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..df5c4def73c74b5d92aae3985b2e98fe3f1b3410 --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/N00uQFLlvHC/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,170 @@

§ SPATIAL GENERALIZATION OF VISUAL IMITATION LEARNING WITH POSITION-INVARIANT REGULARIZATION

Anonymous Authors

Abstract-How visual imitation learning models can generalize to novel, unseen visual observations is a highly challenging problem. Such a generalization ability is crucial for their real-world applications. Since this generalization problem has many different aspects, we focus on one case called spatial generalization, which refers to generalization to unseen setups of object (entity) locations in a task, such as a novel setup of object locations in a robotic manipulation problem. In this case, previous works observe that visual imitation learning models overfit to absolute information (e.g., coordinates) rather than the relational information between objects, which is more important for decision making. As a result, the models perform poorly in novel object location setups. Nevertheless, it has so far remained unclear how to solve this problem effectively. Our insight into this problem is to explicitly remove the absolute information from the features learned by imitation learning models so that the models can use robust, relational information to make decisions. To this end, we propose a novel, position-invariant regularizer called POINT. The proposed regularizer penalizes the imitation learning model when its features contain absolute positional information about objects. Various experiments demonstrate the effectiveness of our method.

§ I. INTRODUCTION

Imitation learning is a class of algorithms that enable robots to acquire behaviors from human demonstrations [8].
The recent advance in deep learning has boosted the development of visual imitation learning and supported its applications like autonomous driving, robotic manipulation, and human-robot interaction [8]. + +In spite of its success, visual imitation learning methods still face many practical challenges. One major challenge is its ability to generalize to novel unseen visual observations, which is very common when we deploy the trained models [15, 11]. In the literature, this generalization problem is also known as the robustness problem. The problem covers many different aspects. For example, here we can identify two basic generalization capabilities: observational generalization and spatial generalization (Figure 1). Observational generalization refers to the generalization to novel visual textures. The changes in background color, object texture, or ambient light in the robotic manipulation task are examples of observational generalization. Such kind of visual change does not affect the underlying task structure (e.g., the position of object and targets) and only requires the robot to reason about semantic meanings correctly. In contrast, spatial generalization refers to the generalization to unseen setup of objects' (entities) locations in one task, which instead requires physical common sense about space and object. Consider the task of letting a warehouse robot move a box to some target region. If we set the initial position of the box to a place that is not covered by the demonstration dataset, then the imitation learning methods must be able to perform spatial generalization so as to succeed. In reality, the generalization challenge usually emerges as a combination of different generalization capabilities. In this paper, we focus on the study of spatial generalization. + + < g r a p h i c s > + +Fig. 1: Left and Middle: Two kinds of visual generalization. The examples are based on the MAGICAL benchmark provided by [15], in which a robot is required to relocate a box to a target region. The left figure shows an example of observational generalization, in which the only change during the testing phase is the visual texture of objects. The middle figure shows an example of spatial generalization. The object setup in the testing phase is unseen. Right: To achieve spatial generalization, we suggest that absolute information should be removed from the feature while the relational information should be kept. We propose a novel, position-invariant regularizer for this purpose. + +For better spatial generalization, the visual imitation learning models should be able to obtain knowledge about objects and their spatial relations with proper inductive biases. Some work finds that vanilla deep visual imitation learning models strongly overfit to the absolute position of objects [15], which suggests that they do not extract relational information of objects to make decisions like humans [4]. Based on this observation, our main insight into this problem is to explicitly remove the absolute, positional information from the learned features in the visual imitation learning models. Note that this does not mean that the decision-making process is not dependent on absolute information. Rather, we expect that the model can extract the relational information (e.g., distance, direction) from the absolute information to make robust decisions. To this end, we propose a novel position-invariant regularizer called POINT. 
This regularizer will penalize the imitation learning model when it finds that the learned feature highly correlates with absolute, positional information. As a result, the imitation learning model has to discover more robust relational features, and can generalize better in unseen scenarios. + +§ II. PRELIMINARIES + +a) Notations: We model the sequential decision making problem as a Markov Decision Process $\mathcal{M} = \left( {\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T}}\right) .\mathcal{S}$ is the state space. $\mathcal{A}$ is the action space. $\mathcal{R}$ is the reward function. $\mathcal{T}$ is the transition dynamics. The agent’s state at timestep $t$ is ${s}_{t} \in \mathcal{S}$ . The agent takes action ${a}_{t}$ and receives reward ${r}_{t} = \mathcal{R}\left( {{s}_{t},{a}_{t}}\right)$ . Its state at timestep $t + 1$ is then ${s}_{t + 1} \sim \mathcal{T}\left( {{s}_{t},{a}_{t}}\right)$ . The objective of the agent is to maximize the return $\mathop{\sum }\limits_{{t = 0}}^{T}{\gamma }^{t}{r}_{t}$ , where $\gamma \in (0,1\rbrack$ is a discount factor. For the imitation learning problem studied here, the agent has no access to $\mathcal{R}$ and $\mathcal{T}$ , but it is provided with a fixed expert demonstration dataset $\mathcal{D} = \left\{ {\tau }_{i}\right\}$ . Here, each ${\tau }_{i} = \left( {{s}_{0}^{E},{a}_{0}^{E},{s}_{1}^{E},{a}_{1}^{E},\ldots {s}_{T}^{E},{a}_{T}^{E}}\right)$ is an expert trajectory that can achieve high performance (return) in $\mathcal{M}$ . Therefore, the agent should learn the behavior by leveraging the given demonstration dataset. + +b) Behavioral Cloning: One classical imitation learning algorithm is the Behavioral Cloning (BC). BC turns the imitation learning problem into a supervised learning problem. It fits the expert’s action ${a}_{i}$ given the observation ${s}_{i}$ . For the visual imitation learning problem, the BC model can be divided into two consecutive parts: a vision encoder ${f}_{\theta }$ (which is usually a convolutional neural network), and a policy head $\pi$ . The ${f}_{\theta }$ first encodes ${s}_{i}$ to the feature ${f}_{i} = {f}_{\theta }\left( {s}_{i}\right)$ , and the $\pi$ then uses it to predict the expert’s action. The BC algorithm minimizes the following negative log-likelihood objective: + +$$ +{\mathcal{L}}_{BC} = {\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \in \mathcal{D}}\left\lbrack {-\log \pi \left( {{a}_{i} \mid {f}_{\theta }\left( {s}_{i}\right) }\right) }\right\rbrack . \tag{1} +$$ + +Due to its simplicity, $\mathrm{{BC}}$ is widely used in visual imitation learning. Therefore, we study the spatial generalization of BC in this paper. + +§ III. METHOD + +§ A. FORMULATION AND CHALLENGES + +For the tasks that involve spatial generalization, there usually exist multiple objects in the observed states, such as the agent, the target object, and the goal. For the state ${s}_{i}$ , we denote each of these objects in ${s}_{i}$ as ${o}_{i}^{j}$ , and their positions as $\left( {{x}_{i}^{j},{y}_{i}^{j}}\right)$ . Then, our idea can be formulated as the minimization problem of each $I\left( {\left( {{\mathbf{x}}^{j},{\mathbf{y}}^{j}}\right) ,\mathbf{f}}\right)$ , where $I$ is the mutual information. Note that we use the notation ${\mathbf{x}}^{j},{\mathbf{y}}^{j},\mathbf{f}$ to indicate the corresponding random variables of ${x}_{i}^{j},{y}_{i}^{j},{f}_{i}$ . However, this formulation leads to many practical challenges. 
First, since each $\left( {{x}_{i}^{j},{y}_{i}^{j}}\right)$ is not provided directly by ${s}_{i}$ and should be inferred, we have to either train some object key-point detectors to detect the underlying objects in the training set, or annotate the objects by ourselves. However, both of these approaches can be difficult and tedious in practice. Second, even if we have ideal key-point detectors, we have to deal with a hard optimization problem in the summation form $\mathop{\sum }\limits_{j}I\left( {\left( {{\mathbf{x}}^{j},{\mathbf{y}}^{j}}\right) ,\mathbf{f}}\right)$ . This can be intractable when there are many objects in the observed state. + +Fortunately, we find that the previous works on the interpretation of deep learning models like GradCAM provide useful tools to handle these challenges. It can reduce the problem to a much simpler form. We discuss our observations as follows. + + < g r a p h i c s > + +Fig. 2: Overview of our method. The blue branch above is the common imitation learning (BC) pipeline. Our proposed regularizer is shown in the light pink box at the bottom. The regularizer first uses the GradCAM++ algorithm to find out the important areas based on which the latest BC model makes decisions. Then it samples the coordinates from the discovered important areas and trains a discriminator network $D$ to calculate whether these sampled coordinates are paired with the feature ${f}_{i}$ . The BC model (encoder ${f}_{\theta }$ ) is then trained to fool the discriminator $D$ . When the encoder ${f}_{\theta }$ is able to fool $D$ , the absolute positional information is removed from the feature as desired. + +§ B. PROBLEM REDUCTION WITH GRADCAM + +GradCAM [13] is an interpretation method that can tell which part of the image is crucial in the decision process of a deep learning model. Given a BC model $\left( {{f}_{\theta },\pi }\right)$ and input $s$ , GradCAM outputs an importance heatmap of the same resolution as the input $s$ . The heatmap indicates the importance of each pixel when we use this BC model for prediction. One nice property of this generated heatmap is that it is smooth and usually coincides with the meaningful objects in the input $s$ . Therefore, we can consider the GradCAM as a rough object detector here. + +We propose to sample ${p}_{i} = \left( {{x}_{i},{y}_{i}}\right)$ from the generated heatmap, and then minimize the $I\left( {\mathbf{p},\mathbf{f}}\right)$ . We find that this new objective can act as a proxy for the original objective in practice. Concretely, if ${p}_{i}$ is always far from a specific object like ${o}^{k}$ , then we know that ${o}^{k}$ is irrelevant to the decision process of the current model. In this case, we conjecture that $I\left( {\left( {{\mathbf{x}}^{k},{\mathbf{y}}^{k}}\right) ,\mathbf{f}}\right)$ should be low enough to meet our requirement. On the contrary, if ${p}_{i}$ always coincides with a certain object like ${o}^{l}$ , then we actually minimize $I\left( {\mathbf{p},\mathbf{f}}\right) \approx I\left( {\left( {{\mathbf{x}}^{l},{\mathbf{y}}^{l}}\right) ,\mathbf{f}}\right)$ as we want. + +§ C. LOSS FUNCTIONS + +Now, our remaining work is to reduce the mutual information $I\left( {\mathbf{p},\mathbf{f}}\right)$ . However, we find that jointly estimating and minimizing the mutual information in our vision-based tasks is hard in practice. Since our ultimate goal is to minimize the information of $\mathbf{p}$ in $\mathbf{f}$ , we instead propose an adversarial training framework to achieve this goal. 
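Before specifying the losses, the coordinate-sampling step from Sec. III-B can be made concrete. The following is a minimal sketch under stated assumptions (heatmaps are non-negative and precomputed per state; shapes and names are illustrative), not the exact implementation:

```python
# A sketch of the Sec. III-B reduction: treat a non-negative GradCAM++
# heatmap as an unnormalized categorical distribution over pixels and
# sample one coordinate p_i = (x_i, y_i) from it. The helper producing
# the heatmaps is assumed to exist elsewhere.
import torch

def sample_coords(heatmaps: torch.Tensor) -> torch.Tensor:
    """heatmaps: (B, H, W) importance maps; returns (B, 2) coords in [0, 1]."""
    b, h, w = heatmaps.shape
    probs = heatmaps.float().clamp(min=0).flatten(1) + 1e-8  # guard all-zero maps
    idx = torch.multinomial(probs, num_samples=1).squeeze(1)
    ys, xs = idx // w, idx % w
    # Normalized coordinates keep the discriminator input scale-free.
    return torch.stack([xs / (w - 1), ys / (h - 1)], dim=1)
```

Sampling, rather than taking an argmax, spreads the sampled coordinates over the whole important region, which matches the smoothness of the heatmaps noted above.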
Specifically, we introduce a discriminator network $D$ to play a two-player min-max game with the BC model as follows:

$$
\mathop{\min }\limits_{{f}_{\theta }}\mathop{\max }\limits_{D}\;{\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \sim \mathcal{D},\left( {{s}_{j},{a}_{j}}\right) \sim \mathcal{D}}\left\lbrack {\log D\left( {{p}_{i},{f}_{i}}\right) + \log \left( {1 - D\left( {{p}_{j},{f}_{i}}\right) }\right) }\right\rbrack . \tag{2}
$$

In this min-max game, the discriminator $D$ tries to tell the joint distribution of $\mathbf{p}$ and $\mathbf{f}$, denoted as ${\mathbb{P}}_{\mathbf{p},\mathbf{f}}$, from the product of their marginal distributions ${\mathbb{P}}_{\mathbf{p} \otimes \mathbf{f}}$. Meanwhile, the BC model tries to fool the discriminator by removing the information of $\mathbf{p}$ from $\mathbf{f}$. Applying the convergence theory of the generative adversarial network (GAN) [6], we know that when ${f}_{\theta }$ is a global minimizer of Equation 2, ${\mathbb{P}}_{\mathbf{p},\mathbf{f}} = {\mathbb{P}}_{\mathbf{p} \otimes \mathbf{f}}$, which implies that $I\left( {\mathbf{p},\mathbf{f}}\right) = 0$. Therefore, this min-max game fulfills our requirement.

In practice, we train $D$ to minimize the following binary classification loss function:

$$
{\mathcal{L}}_{D} = -{\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \sim \mathcal{D},\left( {{s}_{j},{a}_{j}}\right) \sim \mathcal{D}}\left\lbrack {\log D\left( {{p}_{i},{f}_{i}}\right) + \log \left( {1 - D\left( {{p}_{j},{f}_{i}}\right) }\right) }\right\rbrack . \tag{3}
$$

However, for the encoder ${f}_{\theta }$, we find that using $-{\mathcal{L}}_{D}$ as the training loss results in instabilities. We assume this is because the ${f}_{i}$ term is present in both terms of Equation 2, which differs from the original GAN objective. Therefore, we propose to use the following loss function for optimization, which we find works well empirically:

$$
{\mathcal{L}}_{\text{reg}} = {\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \sim \mathcal{D}}\left\lbrack {\log D\left( {{p}_{i},{f}_{i}}\right) }\right\rbrack . \tag{4}
$$

Combining the BC loss, the loss function to train ${f}_{\theta }$ and $\pi$ is then

$$
\mathcal{L} = {\mathcal{L}}_{BC} + \lambda {\mathcal{L}}_{\text{reg}} = {\mathbb{E}}_{\left( {{s}_{i},{a}_{i}}\right) \sim \mathcal{D}}\left\lbrack {-\log \pi \left( {{a}_{i} \mid {f}_{\theta }\left( {s}_{i}\right) }\right) + \lambda \log D\left( {{p}_{i},{f}_{i}}\right) }\right\rbrack . \tag{5}
$$

§ IV. EXPERIMENTS

In the experiments, we first test the performance of our method on the MAGICAL benchmark. We study generalization according to the IID protocol [9]. This means that the training and testing task distributions are the same, though the test instances are unseen. Then, we provide an analysis of our algorithm through both qualitative and quantitative studies. Finally, we extend our method to a real robot manipulation problem.

§ A. TASK SETUP

a) MAGICAL: The MAGICAL benchmark simulates a 2D robotic manipulation problem in a warehouse room. The tasks provided by MAGICAL involve complex interactions between the agent and multiple objects, which require effective spatial generalization. In the experiments, we use a variant of its MatchRegion task.
In this task, a robot is required to go across a square room to move some objects to a target region specified by a dashed rectangle. We set up several task instances of the MatchRegion task: MatchRegion-Target-1, MatchRegion-Target-1-Distract, MatchRegion-Target-2, MatchRegion-Target-2-Distract, MatchRegion-Target-3, and MatchRegion-Target-3-Distract. We provide an illustration of these tasks in Figure 3. For each MatchRegion-Target-$X$ task (MR-TX), there is no distractor object in the room, so the robot only needs to move all the $X$ objects into the target location. However, for the MatchRegion-Target-$X$-Distract task (MR-TXD), there is an additional distractor object in the room. This object is also randomly placed in the room during testing. The existence of this distractor object not only increases the risk of learning spurious features but also adds to the difficulty of learning safe motions. As we will discuss later, even a single distractor object can lead to a significant increase in generalization difficulty. The study of more distractors is carried out in the analysis part.

 < g r a p h i c s >

Fig. 3: The MAGICAL tasks used in our experiments. The grey robot is required to move the target objects (marked with red dots) to the target region. The red curve shows a possible plan to solve the task (interaction details like releasing the box are omitted). The long-horizon nature of this task brings additional challenges aside from the spatial generalization problem.

For each of the tasks above, we collect a human demonstration dataset ourselves. For each demonstration trajectory, we randomly set up the initial positions of the objects, the target region, and the robot. For MR-T1, we collect 50 trajectories. For each of the other tasks, we collect 100 trajectories. The collection of all these trajectories takes 2 hours. We also study the outcome of using different numbers of trajectories in the later analysis part.

§ B. BASELINES

For the vanilla BC policy, we train an IMPALA [5] policy, whose encoder is a residual convolutional neural network. We also try a vision transformer [3] and a relational network [12], which have relational inductive biases, but we find that they perform worse than IMPALA and therefore do not report their results here. Then, we implement the following baselines for comparison: Dropout [14], Crop [17, 10], Cutout [2], MixReg [16], OREO [11], and CLOP [1].

§ C. RESULTS

a) MAGICAL: The results on MAGICAL are shown in Table I. Performance is defined by the success rate of the trained policy, which is the number of target objects successfully transferred to the target region, divided by the total number of target objects. We observe that our method achieves state-of-the-art results and outperforms the baselines by a large margin. Concretely, it improves the success rate by about ${30}\%$. Besides, we find that most of the previous regularization methods do increase the success rate over the vanilla version, and their results are similar to each other. This suggests that they solve some common issues in the generalization problem. However, their performance gap from our method suggests that we tackle a different issue here, namely overfitting to absolute positions.

TABLE I: Evaluation result on the MAGICAL benchmark. We show the average score over three random seeds. Our method can achieve state-of-the-art results compared with the baselines.
| Task | Vanilla | Dropout | Crop | Cutout | MixReg | OREO | CLOP | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MR-T1 | 0.09 ± 0.00 | 0.28 ± 0.04 | 0.42 ± 0.02 | 0.19 ± 0.03 | 0.26 ± 0.02 | 0.21 ± 0.03 | 0.16 ± 0.06 | 0.63 ± 0.05 |
| MR-T1D | 0.19 ± 0.06 | ± 0.11 | ± 0.03 | ± 0.03 | ± 0.10 | ± 0.06 | 0.21 ± 0.02 | 0.60 ± 0.08 |
| MR-T2 | 0.25 ± 0.03 | 0.48 ± 0.03 | 0.46 ± 0.04 | 0.43 ± 0.05 | 0.44 ± 0.05 | 0.37 ± 0.05 | 0.32 ± 0.07 | 0.75 ± 0.07 |
| MR-T2D | 0.27 ± 0.06 | 0.35 ± 0.03 | 0.38 ± 0.04 | 0.32 ± 0.03 | 0.33 ± 0.03 | 0.27 ± 0.02 | 0.23 ± 0.004 | 0.70 ± 0.04 |
| MR-T3 | 0.23 ± 0.02 | 0.51 ± 0.03 | 0.47 ± 0.05 | 0.32 ± 0.04 | 0.48 ± 0.05 | 0.42 | 0.35 ± 0.07 | 0.66 ± 0.03 |

 < g r a p h i c s >

Fig. 4: The GradCAM++ importance heatmap of the dropout model (left) and our model (right) on the MatchRegion-Target-1-Distract task. The red region indicates the most important region, while dark blue indicates the least important region. The results suggest that the dropout model attends to the red distractor and is not robust. In contrast, our model is able to attend to the correct objects.

§ D. ANALYSIS

a) Qualitative Results: To understand whether our method learns more robust features, we use GradCAM++ to visualize the learned model. For simplicity, we show the result on the MatchRegion-Target-1-Distract task. We compare the result of our model to the model trained with dropout (Figure 4). We notice that the dropout model tends to focus on the red distractor object rather than the correct target object. In contrast, our model is able to focus on the correct objects. Even when the distance between the agent and the object is large, it can attend to the agent and the object simultaneously. The visualization results suggest that our regularizer indeed leads to robust relational features even when the IMPALA vision network has no explicit relational inductive bias. This accounts for the improvement in generalization.

b) Unseen Number of Distractors: A robust model should base its decisions on robust relational information. As a result, for the MAGICAL tasks, it should be able to ignore the distractor and generalize to an unseen number of distractors. Therefore, we test whether our model trained on MR-T1D (where only one distractor is present) can generalize to MR-T1D with an unseen number of distractors (e.g., 0, 2, 3). We also compare the results with the previous models. The result is shown in Figure 5. We find that our model is able to generalize to the cases of 0, 2, and 3 distractors, though the performance is lower than in the training scenario with 1 distractor. In contrast, prior models, such as the dropout model, fail entirely in these unseen cases. This also echoes our qualitative analysis results.

 < g r a p h i c s >

Fig. 5: The generalization performance to different numbers of distractors on MR-T1D.

 < g r a p h i c s >

Fig. 6: The variation of performance on MAGICAL using datasets of different sizes.
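For concreteness, the adversarial objective of Sec. III-C can be sketched as a single training update. The sketch below reuses sample_coords from the earlier sketch and assumes discrete actions, torch modules named encoder, policy, and disc (with disc ending in a sigmoid), and a hypothetical gradcam_pp helper returning one importance map per state; it illustrates the technique, not the exact implementation:

```python
# Minimal sketch of one POINT update: a discriminator step on joint vs.
# shuffled (coordinate, feature) pairs, then a BC step with the
# log D(p_i, f_i) regularizer added.
import torch
import torch.nn.functional as F

def point_update(encoder, policy, disc, gradcam_pp, opt_model, opt_disc,
                 s, a, lam=0.1):
    with torch.no_grad():
        p = sample_coords(gradcam_pp(encoder, policy, s))  # p_i, (B, 2)

    # Discriminator: tell joint pairs (p_i, f_i) from shuffled pairs (p_j, f_i).
    f = encoder(s).detach()
    p_shuf = p[torch.randperm(p.size(0))]
    loss_d = -(torch.log(disc(p, f) + 1e-8)
               + torch.log(1 - disc(p_shuf, f) + 1e-8)).mean()
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # BC model: behavioral cloning loss plus the position-invariance
    # regularizer, which the encoder minimizes by fooling disc.
    f = encoder(s)
    loss_bc = F.cross_entropy(policy(f), a)   # -log pi(a_i | f_theta(s_i))
    loss = loss_bc + lam * torch.log(disc(p, f) + 1e-8).mean()
    opt_model.zero_grad(); loss.backward(); opt_model.step()
    return loss.item(), loss_d.item()
```

When the encoder succeeds in fooling the discriminator, the sampled coordinates carry no information about the features, which is exactly the $I\left( {\mathbf{p},\mathbf{f}}\right) = 0$ condition discussed in Sec. III-C.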
c) Number of Demonstrations: We also study whether the proposed method works when the amount of expert demonstration data is limited. For this purpose, we test our method on MAGICAL with ${25}\%$, ${50}\%$, and ${75}\%$ of the expert demonstrations. We show the averaged performance in Figure 6. We find that our method achieves consistent improvement, though the performance decreases as the dataset becomes smaller. This result suggests that a certain amount of diverse data is still required to achieve spatial generalization.

§ V. CONCLUSION

We studied the spatial generalization problem of imitation learning. We proposed POINT, a novel position-invariant regularizer that removes absolute positional information from the learned features to tackle this problem. Through experiments on the MAGICAL benchmark as well as a robot manipulation system, we confirmed that previous methods do overfit to the absolute position and showed that our proposed approach can effectively help generalization.
\ No newline at end of file
diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/N8KlLRpevrT/Initial_manuscript_md/Initial_manuscript.md b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/N8KlLRpevrT/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..167aaaba49436b250d01bfa26f04d9cc15f515b7
--- /dev/null
+++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/N8KlLRpevrT/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,213 @@
# Point-based Correspondence Estimation for Cloth Alignment and Manipulation

Author Names Omitted for Anonymous Review. Paper-ID [add your ID here]

Abstract-Automating cloth folding is a challenging task with practical implications in various domains. Existing methods often struggle with unaligned configurations, limiting their applicability in real-world scenarios. In this research, we present FabricFlowAlignNet (FFAN), a novel approach that learns flow-based correspondences on point clouds between the current observed and goal cloth configurations. We use these learned 3D correspondences for both cloth alignment and manipulation: correspondences are used to align the observed cloth with the goal, and the flow-based correspondences are re-used as action proposals. Our experiments show that FFAN achieves superior performance compared to a state-of-the-art folding approach, particularly in scenarios where the observed cloth is rotated or otherwise unaligned with the goal. This research highlights the multi-faceted effectiveness of learning 3D correspondences for deformable object manipulation.

## I. INTRODUCTION

Cloth manipulation is a challenging task, with difficulties in both perception and control due to the deformability of cloth. Manual cloth manipulation techniques are time-consuming, labor-intensive, and prone to human error. As a result, there is a growing demand to automate cloth manipulation processes in various domains such as folding laundry, handling textiles in manufacturing, and assisting individuals with disabilities in dressing.

A fundamental aspect of successful cloth manipulation is establishing correspondences between the current observation and the goal configuration. These correspondences provide critical spatial associations necessary for planning and executing folding actions.
However, while prior methods have proposed to learn correspondences for cloth [11, 4], they do not explicitly use such correspondences to reason about the alignment between the observed cloth and the desired configuration. Alignment is a crucial step in cloth manipulation, and prior correspondence-based policies either do not handle cases where the cloth and goal are not already aligned [11] or rely on human demonstrations [4].

In this work, we propose FabricFlowAlignNet (FFAN), an approach that combines the use of correspondences and symmetry-handling techniques to learn a goal-conditioned cloth manipulation policy. Our method leverages correspondences to "virtually" align the observation and goal point clouds, enabling the policy to determine the appropriate actions to execute on the observation. By incorporating these correspondences and symmetry handling, our approach aims to acquire an understanding of cloth folding strategies and develop a manipulation policy capable of accurately and efficiently folding clothes. This is particularly beneficial in challenging scenarios where the observed cloth is rotated or unaligned with the desired goal configuration.

![019640f1-620e-7667-9240-573936cca549_0_936_472_698_430_0.jpg](images/019640f1-620e-7667-9240-573936cca549_0_936_472_698_430_0.jpg)

Fig. 1. Performance of FFAN (ours) vs. FFN on unaligned goals.

We evaluate the performance of our method against a state-of-the-art folding approach [11] on a folding task where the goal and observation poses are not aligned. Unlike the baseline approach, our method reasons about symmetries and employs correspondences to deal with unaligned goals. The results demonstrate the effectiveness and robustness of our approach in achieving successful cloth folding when the observation and goal configurations are unaligned.

## II. PRIOR WORK

FabricFlowNet (FFN) [11] performs bimanual cloth folding by estimating flow correspondences between the observed cloth image and the goal cloth image. However, FFN relies on strict alignment between the observation and goal cloth poses in the image. Our approach extends FFN with a method for alignment based on learned 3D correspondences to overcome these limitations. By establishing spatial relationships between points in the observation and goal configurations, we enable precise alignment and achieve better folding performance on unaligned goals than FFN.

Fabric Descriptors [4] is a method for estimating correspondences in fabric manipulation tasks. However, it heavily relies on human demonstrations to learn the necessary correspondences. In contrast, our method can learn and estimate correspondences without any human intervention.

SpeedFolding [1], another cloth manipulation method, is trained exclusively in real-world settings. In comparison, our approach, similar to FFN, is trained in a simulation environment before being transferred to real-world scenarios. While SpeedFolding employs self-supervised learning techniques to handle unaligned observations and goals, our approach leverages learned correspondences for alignment.

Regarding alignment methods, Cloth Funnels [2] is a notable technique that combines cloth folding and alignment. Its alignment procedure utilizes the Procrustes algorithm, primarily designed for aligning rigid objects. However, due to its local alignment nature, it may not be as suitable for deformable objects.
In contrast, our approach adopts the random sample consensus (RANSAC) algorithm, which allows a globally optimal alignment to be found with high probability.

## III. POINT CLOUD CORRESPONDENCE ESTIMATION FOR CLOTH ALIGNMENT AND MANIPULATION

In this section, we describe FabricFlowAlignNet (FFAN), our approach for estimating observation-goal correspondences to align and manipulate cloth.

## A. Learning Correspondences for Point Clouds

We propose a 3D, flow-based correspondence estimator called "3DFlowNet", a component of our overall pipeline. 3DFlowNet takes the observation and goal point clouds ${c}_{o}$ and ${c}_{g}$ as input, and outputs 3D flow $\widehat{f}$. 3DFlowNet is a non-trivial extension of the FlowNet from FabricFlowNet [11], which was limited to image input and 2D flow output. A schematic overview of 3DFlowNet can be found in Fig. 2.

![019640f1-620e-7667-9240-573936cca549_1_163_1115_722_264_0.jpg](images/019640f1-620e-7667-9240-573936cca549_1_163_1115_722_264_0.jpg)

Fig. 2. 3DFlowNet Architecture

We first transform the point clouds into a graph, where nodes represent cloth particles and are connected to their neighboring particles on the cloth mesh. This step requires privileged state information about the cloth mesh edges from the simulator, which would not be available in the real world; estimating these edges is an area of future work and could leverage prior methods like VCD [6]. We embed the input graphs by employing a graph neural network $H$, which outputs embeddings for each node in the graph: ${c}_{o}^{\prime },{c}_{g}^{\prime } \in {\mathbb{R}}^{N \times F}$.

We then use a Transformer network [8], denoted as $T$, to perform cross-attention between observation and goal features. Our approach is inspired by prior Transformer-based per-point networks like DCP [10] and TAX-Pose [7]. $T$ takes ${c}_{o}^{\prime }$ and ${c}_{g}^{\prime }$ as input and outputs transformed embeddings ${c}^{\prime } \in {\mathbb{R}}^{N \times F}$. The resulting transformer embeddings ${c}^{\prime }$ are then summed with the original observation embeddings ${c}_{o}^{\prime }$ to produce ${c}_{o}^{\prime \prime }$.

To estimate correspondences between the two configurations, we pass ${c}_{o}^{\prime \prime }$ through MLP layers to produce estimated correspondences $\widehat{f} \in {\mathbb{R}}^{N \times 3}$. These correspondences represent how each of the $N$ cloth particles must move to achieve the goal configuration.

To train 3DFlowNet, we use a weighted L2 loss between the estimated and ground truth correspondences. The ground truth correspondences are computed as the difference between the point clouds ${c}_{g}$ and ${c}_{o}$. The weighted L2 loss function is defined as:

$$
{\mathcal{L}}_{2}\left( {\widehat{f}, f}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{\left( {f}_{i} - {\widehat{f}}_{i}\right) }^{2} \tag{1}
$$

where $\widehat{f}$ represents the estimated correspondences, $f$ represents the ground truth correspondences, and $N$ is the total number of points in the point cloud. The weights ${w}_{i}$ are higher for ground truth pick points.

## B. Iterative Correspondence Estimation

The trained 3DFlowNet model struggles to generate large correspondences, leading to suboptimal results. To address this limitation, we introduce an iterative approach to enhance the accuracy of correspondence estimation.
Our iterative process involves updating the estimated correspondences and the intermediate point cloud to gradually refine the correspondence estimate.

In each iteration, we use the trained 3DFlowNet model to estimate the correspondence between the intermediate point cloud ${\widehat{c}}_{o}$ and the target configuration ${c}_{g}$. By integrating the estimated correspondence into the observation, we simulate the application of the flow to progressively approach the target configuration. The algorithm for iterative correspondence estimation is summarized in Alg. 1.

Algorithm 1 Iterative Correspondence Estimation

---

1: Input: Trained 3DFlowNet, point clouds ${c}_{o},{c}_{g}$

2: Initialize $\bar{f} \in {\mathbb{R}}^{N \times 3}$ to all zeros

3: ${\widehat{c}}_{o} \mathrel{\text{:=}} {c}_{o}$

4: for $t = 1 \ldots T$ do

5: $\widehat{f} = \mathrm{3DFlowNet}\left( {{\widehat{c}}_{o},{c}_{g}}\right)$

6: $\bar{f} \mathrel{+}= \widehat{f}$

7: ${\widehat{c}}_{o} \mathrel{+}= \widehat{f}$

8: end for

9: return $\bar{f}$

---

## C. RANSAC Alignment for Unaligned Goals

To address the alignment issue between observation and goal, we propose using the estimated correspondences to perform alignment with the RANSAC algorithm [3]. The forward pass through 3DFlowNet provides the estimated correspondences. The steps involved in this process are as follows (a code sketch appears below):

1) Sample two indices $(i, j)$ on the cloth.

2) Compute the transformation matrix $R$ from the correspondences $\left( {{f}_{i},{f}_{j}}\right)$, i.e., from the point pairs $\left( {{p}_{i},{p}_{j}}\right)$ and $\left( {{p}_{i} + {f}_{i}}\right), \left( {{p}_{j} + {f}_{j}}\right)$.

3) Compute inliers by comparing the distance between transformed and target points, $\begin{Vmatrix}{R{p}_{i} - \left( {{p}_{i} + {f}_{i}}\right) }\end{Vmatrix}$, against a threshold $\epsilon$.

4) Choose the transformation matrix $R$ with the maximum number of inliers.

## D. Estimating the Pick Location for an Action

![019640f1-620e-7667-9240-573936cca549_2_194_320_641_417_0.jpg](images/019640f1-620e-7667-9240-573936cca549_2_194_320_641_417_0.jpg)

Fig. 3. 3DPickNet Architecture

To predict the pick points necessary for cloth manipulation, we introduce a neural network called 3DPickNet. This network is specifically designed to estimate the pick points ${p}_{1}$ and ${p}_{2}$. The inputs to 3DPickNet are the current observation ${c}_{o}$ and the estimated correspondences $\widehat{f}$ between ${c}_{o}$ and the goal configuration ${c}_{g}$.

To enable the prediction of the second pick point conditioned on the first pick point, we utilize two separate networks: 3DPickNet1 and 3DPickNet2. In 3DPickNet1, we concatenate ${c}_{o}$ and $\widehat{f}$ and create a graph representation of the point cloud. Each node in the graph is represented as $\left\lbrack {x, y, z,\widehat{f}}\right\rbrack$. 3DPickNet1 generates a probability value for each node to be selected as the first pick point ${p}_{1}$. The node with the highest probability is identified as ${p}_{1}$.

3DPickNet2 is responsible for predicting the second pick point ${p}_{2}$, taking into account the information about ${p}_{1}$. In this network, we introduce an additional channel $\widehat{{p}_{1}}$, which represents a 3D Gaussian distribution centered around ${p}_{1}$. This channel penalizes nodes near ${p}_{1}$ and favors nodes farther away, preventing the selection of ${p}_{2}$ in close proximity to ${p}_{1}$.
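As forward-referenced in the RANSAC steps above, a minimal NumPy sketch of that alignment follows. It assumes the transform is fit in the 2D cloth plane (where two correspondences determine a rigid rotation and translation) and uses illustrative names and an illustrative iteration budget; it is not the exact implementation:

```python
# Sketch of the Sec. III-C RANSAC alignment: repeatedly fit a rigid 2D
# transform from two sampled correspondences and keep the fit with the
# most inliers.
import numpy as np

def fit_rigid_2d(src, dst):
    """Rigid (R, t) mapping two 2D source points onto two target points."""
    v_s, v_d = src[1] - src[0], dst[1] - dst[0]
    ang = np.arctan2(v_d[1], v_d[0]) - np.arctan2(v_s[1], v_s[0])
    c, s = np.cos(ang), np.sin(ang)
    R = np.array([[c, -s], [s, c]])
    t = dst[0] - R @ src[0]
    return R, t

def ransac_align(points, flow, n_iters=500, eps=0.01, rng=None):
    """points: (N, 2) observed cloth points; flow: (N, 2) estimated
    correspondences, so the alignment targets are points + flow."""
    rng = rng or np.random.default_rng(0)
    targets = points + flow
    best_R, best_t, best_inliers = None, None, -1
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)   # step 1
        R, t = fit_rigid_2d(points[[i, j]], targets[[i, j]])    # step 2
        resid = np.linalg.norm(points @ R.T + t - targets, axis=1)
        n_in = int((resid < eps).sum())                         # step 3
        if n_in > best_inliers:                                 # step 4
            best_R, best_t, best_inliers = R, t, n_in
    return best_R, best_t
```

The estimated transform can then be used to virtually re-pose the goal (or observation) cloud before predicting pick points.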
The architecture of 3DPickNet is depicted in Figure 3.

For training 3DPickNet, we use a weighted binary cross-entropy loss. The loss function compares the predicted probabilities of nodes being pick points with the ground truth labels. The binary cross-entropy loss function is defined as:

$$
L\left( {p, y}\right) = \mathop{\sum }\limits_{{i = 1}}^{N} - {w}_{i}\left( {{y}_{i}\log {p}_{i} + \left( {1 - {y}_{i}}\right) \log \left( {1 - {p}_{i}}\right) }\right) \tag{2}
$$

Here, $p$ represents the predicted probabilities, $y$ the ground truth labels, and $N$ the total number of nodes. The weights ${w}_{i}$ are higher for ground truth pick points.

Once the pick points ${p}_{1}$ and ${p}_{2}$ are predicted using the estimated correspondences $\widehat{f}$, the corresponding manipulation actions can be executed to achieve the desired goal configuration.

## E. Implementation Details

1) Dataset: We employ a dataset constructed using SoftGym [5], a deformable object simulator, to train and evaluate our approach. This dataset is the same as the one used in the FabricFlowNet method [11], but we extract point clouds from SoftGym to represent the cloth instead of using depth images.

The dataset is generated by sampling random actions biased towards grasping the corners of a square towel, following the approach of FabricFlowNet. Each instance in the training, validation, and test sets consists of a tuple $\left( {{c}_{o},{c}_{g}, a}\right)$, where ${c}_{o}$ and ${c}_{g}$ are point clouds representing the current observation and the desired goal configuration, respectively. The ground truth action $a$ corresponds to the action that achieves the goal configuration.

In our approach, the action space is defined as $a = \left( {{p}_{1},{p}_{2},{q}_{1},{q}_{2}}\right)$, where $p$ and $q$ represent the pick and place points, respectively, paired according to the subscripts. These pick and place points are selected from a set of indices representing the points in the point cloud.

2) 3DFlowNet: The graph neural network $H$ consists of two Graph Attention layers (GATConv) [9]. The MLP network $M$ consists of two fully-connected layers.

3) 3DPickNet: The 3DPickNet architecture consists of three Graph Attention Network layers and two Multi-Layer Perceptron (MLP) layers for both 3DPickNet1 and 3DPickNet2. At the end of each network, a sigmoid layer computes the probability of each node being a pick point. 3DPickNet1 represents each node with a six-dimensional feature, while 3DPickNet2 utilizes a seven-dimensional feature to accommodate the additional information provided by $\widehat{{p}_{1}}$.

## IV. EXPERIMENTS

Our experiments investigate the following questions: (1) How does FFAN compare with FabricFlowNet (FFN) [11] on aligned goals? (2) How does FFAN compare with FFN on unaligned goals? We evaluate the methods in simulation, using the average L2 distance between cloth points in the achieved vs. desired point clouds as our error metric.

## A. Performance on Aligned Goals

We use the same test set as FFN [11] to evaluate performance on aligned goals. This test set consists of 40 single-step goals, where both the observation and the goal are positioned at the center of the workspace with the same orientation. For this experiment, we do not use alignment estimation with FFAN, in order to directly compare pre-aligned folding performance.

Table I presents the performance comparison between our method and FFN.
The results demonstrate that our method performs comparably to FFN on aligned goals, with only a marginal difference in average particle distance. + +TABLE I + +FOLDING PERFORMANCE ON 40 ALIGNED GOALS + +
| Method | Average Particle Distance (mm) $\downarrow$ |
| --- | --- |
| FFN [11] | 4.26 |
| FFAN (Ours) | 5.54 |
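As a reference for Table I, the following is a small sketch of the reported metric, assuming the achieved and goal clouds share particle indexing (an illustrative helper, not the evaluation code):

```python
# Average particle distance: mean L2 distance between corresponding
# particles of the achieved and goal clouds (here reported in mm).
import numpy as np

def avg_particle_distance(achieved: np.ndarray, goal: np.ndarray) -> float:
    """achieved, goal: (N, 3) arrays of corresponding cloth particles."""
    assert achieved.shape == goal.shape
    return float(np.linalg.norm(achieved - goal, axis=1).mean())
```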
+ +## B. Performance on Unaligned Goals + +We also evaluate the performance on unaligned goals, where the goal cloth configuration is randomly rotated and therefore not aligned with the initial observed configuration. We conducted experiments on three test sets: Easy, Medium, and Hard, where each test set corresponds to a different range of rotations. Easy encompasses angles between -5 and 5 degrees, Medium ranges from -45 to 45 degrees, and Hard covers a complete rotation from 0 to 360 degrees. + +We evaluated the performance of FFAN in two scenarios: utilizing ground truth correspondences for RANSAC alignment and utilizing estimated correspondences for RANSAC alignment. Figure 4 presents a comparison of the two methods against FFN across all four test sets: aligned, easy, medium, and hard. + +![019640f1-620e-7667-9240-573936cca549_3_178_746_669_530_0.jpg](images/019640f1-620e-7667-9240-573936cca549_3_178_746_669_530_0.jpg) + +Fig. 4. Comparison of Folding Performance on Different Test Sets + +From the results, we observe that our method with estimated correspondences outperforms FFN on the Medium and Hard tasks. However, using ground truth correspondences for RANSAC alignment yields even better results across all tasks, surpassing the performance of FFN. This demonstrates the potential for further improvement by improving the correspondence estimation. Qualitative results comparing our method and FFN on the Medium case can be found in Fig. 1. + +## C. Ablations + +1) No Iterative Correspondence: In this section, we ablate our approach by removing iterative correspondence estimation (Sec. III-B). Table II shows that average particle distance error is higher when iterative correspondence estimation is removed. + +TABLE II + +ABLATION OF ITERATIVE CORRESPONDENCE ESTIMATION + +
| Method | Average Particle Distance (mm) $\downarrow$ |
| --- | --- |
| FFAN w/o Iter. Corresp. | 10.591 |
| FFAN w/ Iter. Corresp. | 5.54 |
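Since Table II ablates the iterative estimation of Alg. 1, a minimal sketch of that loop is given below for concreteness, assuming a trained flow_net(c_obs, c_goal) -> (N, 3) callable as an illustrative stand-in for 3DFlowNet:

```python
# Sketch of Alg. 1: accumulate flow over k refinement steps, warping the
# observed cloud toward the goal after each step.
import numpy as np

def iterative_flow(flow_net, c_obs: np.ndarray, c_goal: np.ndarray, k: int = 3):
    f_total = np.zeros_like(c_obs)       # \bar{f} := 0
    c_hat = c_obs.copy()                 # \hat{c}_o := c_o
    for _ in range(k):
        f_hat = flow_net(c_hat, c_goal)  # one 3DFlowNet pass
        f_total += f_hat                 # \bar{f} += \hat{f}
        c_hat = c_hat + f_hat            # warp the intermediate cloud
    return f_total
```

The choice of k = 3 follows the validation study described next.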
2) Number of Iterations for Iterative Correspondence: To determine the number of iterations to run for iterative correspondence estimation, we measured performance while increasing the number of iterations on a validation set. We used flow prediction error, an unweighted version of Eq. 1, as our performance metric. We evaluated numbers of iterations ranging from $k = 1$ (running 3DFlowNet once) to $k = 4$. Note that we did not retrain 3DFlowNet in an iterative manner.

Figure 5 shows the flow prediction error as a function of the number of iterations $k$ in the iterative flow process. As we increase $k$ from 1 to 3, there is a notable decrease in the flow prediction error, indicating improved correspondence accuracy. However, beyond $k = 3$, we observed a slight increase in the error. Based on these observations, we empirically set the number of iterations for iterative flow correspondence estimation to $k = 3$.

![019640f1-620e-7667-9240-573936cca549_3_952_764_662_497_0.jpg](images/019640f1-620e-7667-9240-573936cca549_3_952_764_662_497_0.jpg)

Fig. 5. Correspondence Prediction Error vs. Number of Iterative Correspondence Estimation Steps

## V. CONCLUSION

Key Insights: Our method performs on par with FFN for aligned goals but excels in handling rotations between observation and goal. Robust correspondences and accurate estimation are crucial for achieving successful alignment and folding. Our approach offers valuable advancements in goal-driven manipulation tasks, providing a reliable and effective solution.

Limitations and Challenges: Like FFN, our method relies on predefined sub-goals, which can be restrictive and may not generalize well to unseen fabrics and configurations. Complex and longer-horizon tasks pose challenges beyond our method's current capabilities. Exploring alternative approaches that eliminate explicit sub-goals and address these challenges is important for broader applicability.

## REFERENCES

[1] Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kröger, and Ken Goldberg. Speedfolding: Learning efficient bimanual folding of garments. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1-8. IEEE, 2022.

[2] Alper Canberk, Cheng Chi, Huy Ha, Benjamin Burchfiel, Eric Cousineau, Siyuan Feng, and Shuran Song. Cloth funnels: Canonicalized-alignment for multi-purpose garment manipulation. arXiv preprint arXiv:2210.09347, 2022.

[3] Martin A Fischler and Robert C Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, 1981.

[4] Aditya Ganapathi, Priya Sundaresan, Brijen Thananjeyan, Ashwin Balakrishna, Daniel Seita, Jennifer Grannen, Minho Hwang, Ryan Hoque, Joseph E Gonzalez, Nawid Jamali, et al. Learning dense visual correspondences in simulation to smooth and fold real fabrics. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 11515-11522. IEEE, 2021.

[5] Xingyu Lin, Yufei Wang, Jake Olkin, and David Held. Softgym: Benchmarking deep reinforcement learning for deformable object manipulation. In Conference on Robot Learning, pages 432-448. PMLR, 2021.

[6] Xingyu Lin, Yufei Wang, Zixuan Huang, and David Held. Learning visible connectivity dynamics for cloth smoothing. In Conference on Robot Learning, pages 256-266. PMLR, 2022.

[7] Chuer Pan, Brian Okorn, Harry Zhang, Ben Eisner, and David Held.
Tax-pose: Task-specific cross-pose estimation for robot manipulation. In Conference on Robot Learning, pages 1783-1792. PMLR, 2023.

[8] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.

[9] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.

[10] Yue Wang and Justin M Solomon. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3523-3532, 2019.

[11] Thomas Weng, Sujay Man Bajracharya, Yufei Wang, Khush Agrawal, and David Held. Fabricflownet: Bimanual cloth manipulation with a flow-based policy. In Conference on Robot Learning, pages 192-202. PMLR, 2022.
However, while prior methods have proposed to learn correspondences for cloth [11, 4], they do not explicitly use such methods for reasoning about the alignment between the observed cloth and the desired configuration. Alignment is a crucial step in cloth manipulation, and prior correspondence-based policies do not handle cases where the cloth and goal are not already aligned [11], or rely on human demonstrations [4]. + +In this work, we propose FabricFlowAlignNet (FFAN), an approach that combines the use of correspondences and symmetry-handling techniques to learn a goal-conditioned cloth manipulation policy. Our method leverages correspondences to "virtually" align the observation and goal point clouds, enabling the policy to determine the appropriate actions to execute on the observation. By incorporating these correspondences and symmetry handling, our approach aims to acquire an understanding of cloth folding strategies and develop a manipulation policy capable of accurately and efficiently folding clothes. This is particularly beneficial in challenging scenarios where the observed cloth is rotated or unaligned with the desired goal configuration. + + < g r a p h i c s > + +Fig. 1. Performance of FFAN (ours) vs FNN on unaligned goals. + +We evaluate the performance of our method against a state-of-the-art folding approach [11] on a folding task, where the goal and observation poses are not aligned. Our method, reasons about symmetries and employs correspondences to deal with unaligned goals, unlike the baseline approach. The results demonstrate the effectiveness and robustness of our approach in achieving successful cloth folding when the observation and goal configurations are unaligned. + +§ II. PRIOR WORK + +FabricFlowNet (FFN) [11] performed bimanual cloth folding by estimating flow correspondences between the observed cloth image and goal cloth image. However, FFN relies on strict alignment between observation and goal cloth poses in the image. Our approach extends FFN by proposing an approach for aligning learned 3D correspondences to overcome these limitations. By establishing spatial relationships between points in observation and goal configurations, we enable precise alignment and achieve better folding performance for unaligned goals than FFN. + +Fabric Descriptors [4] is a method for estimating correspondences in fabric manipulation tasks. However, it heavily relies on human demonstrations to learn the necessary correspondences. In contrast, our method can learn and estimate correspondences without any human intervention. + +SpeedFolding [1], another cloth manipulation method, is trained exclusively in real-world settings. In comparison, our approach, similar to FFN, undergoes training in a simulation environment before being transferred to real-world scenarios. While SpeedFolding employs self-supervised learning techniques to handle unaligned observations and goals, our approach leverages learned correspondences for alignment purposes. + +Regarding alignment methods, Cloth Funnels [2] is a notable technique that combines cloth folding and alignment. Their alignment procedure utilizes the Procrustes' algorithm, primarily designed for aligning rigid objects. However, due to its local alignment nature, it may not be as suitable for deformable objects. In contrast, our approach adopts the random sample consensus (RANSAC) algorithm, allowing for probabilistic achievement of a globally optimal alignment. + +§ III. 
POINT CLOUD CORRESPONDENCE ESTIMATION FOR CLOTH ALIGNMENT AND MANIPULATION + +In this section, we describe FabricFlowAlignNet (FFAN), our approach for estimating observation-goal correspondences to align and manipulate cloth. + +§ A. LEARNING CORRESPONDENCES FOR POINT CLOUDS + +We propose a 3D, flow-based correspondence estimator called "3DFlowNet", a component of our overall pipeline. 3DFlowNet takes the observation and goal point clouds ${c}_{o}$ and ${c}_{g}$ as input, and outputs 3D flow $\widehat{f}$ . 3DFlowNet is a non-trivial extension of the FlowNet from FabricFlowNet [11], which was limited to image input and 2D flow output. A schematic overview of 3DFlowNet can be found in Fig. 2. + + < g r a p h i c s > + +Fig. 2. 3DFlowNet Architecture + +We first transform the point clouds into a graph, where nodes represent cloth particles and are connected to their neighboring particles on the cloth mesh. This step requires privileged state information from the simulator of the cloth mesh edges, which would not be available in the real world; estimating these edges is an area of future work and could leverage prior methods like VCD [6]. We embed the input graphs by employing a graph neural network $H$ , which outputs embeddings for each node in the graph: ${c}_{o}^{\prime },{c}_{g}^{\prime } \in {\mathbb{R}}^{N \times F}$ . + +We then use a Transformer network [8] denoted as $T$ to perform cross-attention between observation and goal features. Our approach is inspired by prior Transformer-based per-point networks like DCP [10] and TAX-Pose [7]. $T$ takes ${c}_{o}^{\prime }$ and ${c}_{o}^{\prime }$ as input and outputs transformed embeddings ${c}^{\prime } \in {\mathbb{R}}^{N \times F}.{c}^{\prime }$ The resulting transformer embeddings, ${c}^{\prime }$ , are then summed with the original observation embeddings ${c}_{o}^{\prime }$ to produce ${c}_{o}^{\prime \prime }$ . + +To estimate correspondences between the two configurations, we pass ${c}_{o}^{\prime \prime }$ through MLP layers to produce estimated correspondences $\widehat{f} \in {\mathbb{R}}^{Nx3}$ . These correspondences represent how each cloth particle in $N$ transports to achieve the goal configuration. + +To train 3DFlowNet, we use a weighted L2 loss between the estimated and ground truth correspondences. The ground truth correspondences are computed as the difference between the point clouds ${c}_{g}$ and ${c}_{o}$ . The weighted L2 loss function is defined as: + +$$ +{\mathcal{L}}_{2}\left( {\widehat{f},f}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{\left( {f}_{i} - {\widehat{f}}_{i}\right) }^{2} \tag{1} +$$ + +where $\widehat{f}$ represents the estimated correspondences, $f$ represents the ground truth correspondences, and $N$ is the total number of points in the point cloud. The weights ${w}_{i}$ are higher for ground truth pick points. + +§ B. ITERATIVE CORRESPONDENCE ESTIMATION + +The trained 3DFlowNet model struggles to generate large correspondences, leading to suboptimal results. To address this limitation, we introduce an iterative approach to enhance the accuracy of correspondence estimation. Our iterative process involves updating the estimated correspondences and the intermediate point cloud to gradually refine the correspondence estimation. + +In each iteration, we utilize the trained 3DFlowNet model to estimate the correspondence between the intermediate point cloud ${\widehat{c}}_{o}$ and the target configuration ${c}_{g}$ . 
By integrating the estimated correspondence into the observation, we simulate the application of the flow to progressively approach the target configuration. The algorithm for iterative correspondence estimation is summarized in Alg. 1. + +Algorithm 1 Iterative Correspondence Estimation + +1: Input: Trained 3DFlowNet, Point Clouds ${c}_{o},{c}_{g}$ + + Initialize all zeros $\bar{f} \in {\mathbb{R}}^{N \times 3}$ + + ${\widehat{c}}_{o} \mathrel{\text{ := }} {c}_{o}$ + + for $\mathrm{t} = 1\ldots \mathrm{T}$ do + + $\widehat{f} = 3\mathrm{{DFlowNet}}\left( {{\widehat{c}}_{o},{c}_{g}}\right)$ + + $\bar{f} + = \widehat{f}$ + + ${\widehat{c}}_{o} + = \widehat{f}$ + + end for + + return $\bar{f}$ + +§ C. RANSAC ALIGNMENT FOR UNALIGNED GOALS + +To address the alignment issue between observation and goal, we propose utilizing estimated correspondences to perform alignment using the RANSAC algorithm [3]. The forward pass through 3DFlowNet provides the estimated correspondences. The steps involved in this process are as follows: + +1) Sample two indices(i, j)on the cloth. + +2) Compute the transformation matrix $R$ using the correspondences $\left( {{f}_{i},{f}_{j}}\right)$ , i.e. $\left( {{p}_{i},{p}_{j}}\right)$ and $\left( {{p}_{i} + {f}_{i}}\right) ,\left( {{p}_{j} + {f}_{j}}\right)$ . + +3) Compute inliers by comparing the distance between transformed points $\begin{Vmatrix}{R{p}_{i} - \left( {{p}_{i} + {f}_{i}}\right) }\end{Vmatrix}$ and a threshold epsilon $\epsilon$ . + +4) Choose the transformation matrix $R$ with the maximum number of inliers. + +D. Estimating the Pick Location for an Action + + < g r a p h i c s > + +Fig. 3. 3DPickNet Architecture + +To predict the pick points necessary for cloth manipulation, we introduce a neural network called 3DPickNet. This network is specifically designed to estimate the pick points ${p}_{1}$ and ${p}_{2}$ . The inputs to 3DPickNet are the current observation ${c}_{o}$ and the estimated correspondences $\widehat{f}$ between ${c}_{o}$ and the goal configuration ${c}_{g}$ . + +To enable the prediction of the second pick point conditioned on the first pick point, we utilize two separate networks: 3DPickNet1 and 3DPickNet2. In 3DPickNet1, we concatenate ${c}_{o}$ and $\widehat{f}$ and create a graph representation of the point cloud. Each node in the graph is represented as $\left\lbrack {x,y,z,\widehat{f}}\right\rbrack$ . 3DPickNet1 generates a probability value for each node to be selected as the first pick point ${p}_{1}$ . The node with the highest probability is identified as ${p}_{1}$ . + +3DPickNet2 is responsible for predicting the second pick point ${p}_{2}$ , taking into account the information about ${p}_{1}$ . In this network, we introduce an additional channel called $\widehat{{p}_{1}}$ , which represents a 3D Gaussian distribution centered around ${p}_{1}$ . This channel assigns lower values to nodes near ${p}_{1}$ and higher values to nodes farther away, preventing the selection of ${p}_{2}$ in close proximity to ${p}_{1}$ . + +The architecture of 3DPickNet is depicted in Figure 3. + +For training 3DPickNet, we use a weighted binary cross-entropy loss. The loss function compares the predicted probabilities of nodes being pick points with the ground truth labels. 
§ D. ESTIMATING THE PICK LOCATION FOR AN ACTION

Fig. 3. 3DPickNet Architecture

To predict the pick points necessary for cloth manipulation, we introduce a neural network called 3DPickNet, designed to estimate the pick points ${p}_{1}$ and ${p}_{2}$ . The inputs to 3DPickNet are the current observation ${c}_{o}$ and the estimated correspondences $\widehat{f}$ between ${c}_{o}$ and the goal configuration ${c}_{g}$ .

To enable the prediction of the second pick point conditioned on the first, we use two separate networks: 3DPickNet1 and 3DPickNet2. In 3DPickNet1, we concatenate ${c}_{o}$ and $\widehat{f}$ and create a graph representation of the point cloud, with each node represented as $\left\lbrack {x,y,z,\widehat{f}}\right\rbrack$ . 3DPickNet1 generates a probability for each node of being selected as the first pick point; the node with the highest probability is identified as ${p}_{1}$ .

3DPickNet2 predicts the second pick point ${p}_{2}$ , taking the information about ${p}_{1}$ into account. In this network, we introduce an additional channel $\widehat{{p}_{1}}$ , derived from a 3D Gaussian centered at ${p}_{1}$ : the channel assigns lower values to nodes near ${p}_{1}$ and higher values to nodes farther away, discouraging the selection of ${p}_{2}$ in close proximity to ${p}_{1}$ .

The architecture of 3DPickNet is depicted in Figure 3.

For training 3DPickNet, we use a weighted binary cross-entropy loss that compares the predicted probabilities of nodes being pick points with the ground truth labels. The loss is defined as:

$$
L\left( {p,y}\right) = \mathop{\sum }\limits_{{i = 1}}^{N} - {w}_{i}\left( {{y}_{i}\log {p}_{i} + \left( {1 - {y}_{i}}\right) \log \left( {1 - {p}_{i}}\right) }\right) \tag{2}
$$

Here, $p$ denotes the predicted probabilities, $y$ the ground truth labels, and $N$ the total number of nodes. The weights ${w}_{i}$ are higher for ground truth pick points.

Once the pick points ${p}_{1}$ and ${p}_{2}$ are predicted using the estimated correspondences $\widehat{f}$ , the corresponding manipulation actions can be executed to achieve the desired goal configuration.

§ E. IMPLEMENTATION DETAILS

1) Dataset: We employ a dataset constructed using SoftGym [5], a deformable object simulator, to train and evaluate our approach. This dataset is the same as the one used in the FabricFlowNet method [11], but we extract point clouds from SoftGym to represent the cloth instead of using depth images.

The dataset is generated by sampling random actions biased towards grasping the corners of a square towel, following the approach of FabricFlowNet. Each instance in the training, validation, and test sets consists of a tuple $\left( {{c}_{o},{c}_{g},a}\right)$ , where ${c}_{o}$ and ${c}_{g}$ are point clouds representing the current observation and the desired goal configuration, respectively. The ground truth action $a$ is the action that achieves the goal configuration.

In our approach, the action space is defined as $a = \left( {{p}_{1},{p}_{2},{q}_{1},{q}_{2}}\right)$ , where $p$ and $q$ represent the pick and place points, respectively, paired according to the subscripts. These pick and place points are selected from the set of indices of the points in the point cloud.

2) 3DFlowNet: The graph neural network $H$ consists of two Graph Attention layers (GATConv) [9]. The MLP network $M$ consists of two fully-connected layers.

3) 3DPickNet: The 3DPickNet architecture consists of three Graph Attention Network layers and two Multi-Layer Perceptron (MLP) layers for both 3DPickNet1 and 3DPickNet2. At the end of each network, a sigmoid layer computes the probability of each node being a pick point. 3DPickNet1 represents each node with a six-dimensional feature, while 3DPickNet2 uses a seven-dimensional feature to accommodate the additional information provided by $\widehat{{p}_{1}}$ .

§ IV. EXPERIMENTS

Our experiments investigate the following questions: (1) How does FFAN compare with FabricFlowNet (FFN) [11] on aligned goals? (2) How does FFAN compare with FFN on unaligned goals? We evaluate the methods in simulation, using the average L2 distance between cloth points in the achieved vs. desired point clouds as our error metric.

§ A. PERFORMANCE ON ALIGNED GOALS

We use the same test set as FFN [11] to evaluate performance on aligned goals. This test set consists of 40 single-step goals, where both the observation and the goal are positioned at the center of the workspace with the same orientation. For this experiment, we do not use alignment estimation with FFAN, in order to directly compare pre-aligned folding performance.

Table I presents the performance comparison between our method and FFN. The results demonstrate that our method performs comparably to FFN on aligned goals, with only a marginal difference in average particle distance.
TABLE I. Folding performance on 40 aligned goals.

| Method | Average Particle Distance (mm) $\downarrow$ |
| --- | --- |
| FFN [11] | 4.26 |
| FFAN (Ours) | 5.54 |

§ B. PERFORMANCE ON UNALIGNED GOALS

We also evaluate performance on unaligned goals, where the goal cloth configuration is randomly rotated and therefore not aligned with the initial observed configuration. We conducted experiments on three test sets, Easy, Medium, and Hard, each corresponding to a different range of rotations: Easy encompasses angles between -5 and 5 degrees, Medium ranges from -45 to 45 degrees, and Hard covers a complete rotation from 0 to 360 degrees.

We evaluated the performance of FFAN in two scenarios: using ground truth correspondences for RANSAC alignment and using estimated correspondences for RANSAC alignment. Figure 4 compares the two variants against FFN across all four test sets: aligned, easy, medium, and hard.

Fig. 4. Comparison of Folding Performance on Different Test Sets

From the results, we observe that our method with estimated correspondences outperforms FFN on the Medium and Hard tasks. Using ground truth correspondences for RANSAC alignment yields even better results across all tasks, surpassing the performance of FFN. This indicates that further gains are available from better correspondence estimation. Qualitative results comparing our method and FFN on the Medium case can be found in Fig. 1.

§ C. ABLATIONS

1) No Iterative Correspondence: In this section, we ablate our approach by removing iterative correspondence estimation (Sec. III-B). Table II shows that the average particle distance error is higher when iterative correspondence estimation is removed.

TABLE II. Ablation of iterative correspondence estimation.

| Method | Average Particle Distance (mm) $\downarrow$ |
| --- | --- |
| FFAN w/o Iter. Corresp. | 10.591 |
| FFAN w/ Iter. Corresp. | 5.54 |

2) Number of Iterations for Iterative Correspondence: To determine the number of iterations to run for iterative correspondence estimation, we measured performance on a validation set while increasing the number of iterations. We used flow prediction error, an unweighted version of Eq. 1, as our performance metric, and evaluated iteration counts ranging from $k = 1$ (running 3DFlowNet once) to $k = 4$ . Note that we did not retrain 3DFlowNet in an iterative manner.

Figure 5 shows the flow prediction error as a function of the number of iterations $k$ in the iterative flow process. As we increase $k$ from 1 to 3, there is a notable decrease in the flow prediction error, indicating more accurate correspondences. Beyond $k = 3$ , however, we observed a slight increase in the error. Based on these observations, we set the number of iterations for iterative flow correspondence estimation to $k = 3$ .

Fig. 5. Correspondence Prediction Error vs. Number of Iterative Correspondence Estimation Steps

§ V. CONCLUSION

Key Insights: Our method performs on par with FFN for aligned goals but excels in handling rotations between observation and goal. Robust correspondences and accurate estimation are crucial for achieving successful alignment and folding. Our approach thus offers a reliable and effective solution for goal-driven cloth manipulation.
Limitations and Challenges: Like FFN, our method relies on predefined sub-goals, which can be restrictive and may not generalize well to unseen fabrics and configurations. Complex and longer-horizon tasks pose challenges beyond our method's capabilities. Exploring alternative approaches that eliminate explicit sub-goals and address these challenges is important for broader applicability.

\ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/OFoo4631KAo/Initial_manuscript_md/Initial_manuscript.md b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/OFoo4631KAo/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..2156ecf37c5b7071e0f3afe764c50a9c703c8f4f --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/OFoo4631KAo/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,297 @@

# Edge Grasp Network: A Graph-Based SE(3)-invariant Approach to Grasp Detection

Haojie Huang Dian Wang Xupeng Zhu Robin Walters Robert Platt

Khoury College of Computer Science, Northeastern University

\{huang.haoj; wang.dian; zhu.xup; r.walters; r.platt\} @northeastern.edu

Abstract-Given point cloud input, the problem of 6-DoF grasp pose detection is to identify a set of hand poses in $\mathrm{{SE}}\left( 3\right)$ from which an object can be successfully grasped. This important problem has many practical applications. Here we propose a novel method and neural network model that enable better grasp success rates relative to what is available in the literature. The method takes standard point cloud data as input and works well with single-view point clouds observed from arbitrary viewing directions. Videos and code are available at https://haojhuang.github.io/edge_grasp_page/.

## I. INTRODUCTION

Grasp detection [6, 25, 18] is a critical robotic skill. The robot first observes a scene containing objects in the form of images, voxels, or point clouds, and detects a set of viable grasp poses from which an object may be grasped stably. There are two general approaches: $\mathrm{{SE}}\left( 2\right)$ methods, where the model reasons in terms of a top-down image of the scene (e.g. [13, 15, 17, 12, 30]), and $\mathrm{{SE}}\left( 3\right)$ methods, where the model reasons in terms of a point cloud or voxel grid (e.g. [6, 18, 8, 3]). $\mathrm{{SE}}\left( 3\right)$ methods have a distinct advantage over $\mathrm{{SE}}\left( 2\right)$ methods because they have more flexibility and are easier to apply in general robotics settings. Unfortunately, $\mathrm{{SE}}\left( 3\right)$ methods are generally much more complex, so SE(2) models are often preferred.

This paper tackles the problem of $\mathrm{{SE}}\left( 3\right)$ grasping with a novel grasp detection model that we call the Edge Grasp Network. The model is based on a novel representation of a 6-DoF grasp that uses a pair of vertices in a graph. Given a single approach point (a position the hand will approach), we define a KNN graph that contains all the points in the point cloud that are within a fixed radius of the approach point. Each point in this KNN graph corresponds to an orientation of the gripper and, when paired with the approach point, defines a distinct 6-DoF grasp pose. We infer the quality of all such grasps simultaneously using a graph neural network.
This approach is novel relative to the literature in three ways: 1) our method of defining unique grasp candidates in terms of a pair of vertices in a graph is new; 2) our inference model, a graph neural network defined with respect to a single approach point, is novel; 3) our model is the first $\mathrm{{SE}}\left( 3\right)$ grasp method that incorporates $\mathrm{{SO}}\left( 3\right)$ equivariance.

## II. Problem Statement

The grasp detection problem is to locate a set of grasp poses in $\mathrm{{SE}}\left( 3\right)$ for a parallel-jaw gripper given input about the scene in the form of a point cloud. Denote the point cloud observation as $P = {\left\{ {p}_{i} \in {\mathbb{R}}^{3}\right\} }_{i = 1}^{n}$ , where $n$ is the number of points. For each point $p \in P$ , we assume that an estimate of the object surface normal ${n}_{p} \in {S}^{2}$ can be calculated. Although it is not required, we generally assume that this point cloud is generated by a single depth camera. A grasp pose of the gripper is parameterized as $\alpha = \left( {C, R}\right) \in \mathrm{{SE}}\left( 3\right)$ , where $C \in {\mathbb{R}}^{3}$ is the location of the center of the gripper and $R \in \mathrm{{SO}}\left( 3\right)$ represents its orientation. The grasp detection problem is to find a function $S : P \mapsto {\left\{ {\alpha }_{i} \in \mathrm{{SE}}\left( 3\right) \right\} }_{i = 1}^{m}$ that maps $P$ onto $m$ grasp poses detected in the scene. The grasp evaluation problem is to find a function $\Phi : \left( {P,\alpha }\right) \mapsto \left\lbrack {0,1}\right\rbrack$ that denotes the quality of grasp $\alpha$ . Notice that $\Phi$ is invariant to translation and rotation in the sense that $\Phi \left( {g \cdot P, g \cdot \alpha }\right) = \Phi \left( {P,\alpha }\right)$ for an arbitrary $g \in \mathrm{{SE}}\left( 3\right)$ . In other words, the predicted quality of a grasp attempt should be invariant to transformation of the object to be grasped and the grasp pose by the same rotation and translation.

## III. METHOD

## A. Grasp Pose Representation

We represent a grasp as a pair of points in the cloud, $\left( {{p}_{a},{p}_{c}}\right) \in {P}^{2}$ , where ${p}_{a}$ is the approach point and ${p}_{c}$ is the contact point. Assuming that we can estimate the object surface normal ${n}_{c}$ at point ${p}_{c}$ , $\left( {{p}_{a},{p}_{c}}\right)$ defines a grasp orientation $R$ where the gripper fingers move parallel to the vector ${n}_{c}$ and the gripper approaches the object along the vector ${a}_{ac} = {n}_{c} \times \left( {{n}_{c} \times \left( {{p}_{a} - {p}_{c}}\right) }\right)$ . This is illustrated in Figure 1. The gripper center $C$ is positioned such that ${p}_{a}$ is directly between the fingers and ${p}_{c}$ is at a desired point of contact on the finger, $C = {p}_{a} - \delta {a}_{ac}$ . Here, $\delta = {G}_{d} + {\left( {p}_{a} - {p}_{c}\right) }^{T}{a}_{ac}$ denotes the distance between the center of the gripper and ${p}_{a}$ , and ${G}_{d}$ denotes the gripper depth. We will sometimes refer to a grasp defined this way as an edge grasp.

![019640ff-2ac2-78f1-8644-b928b4fbd18a_0_1276_1320_381_310_0.jpg](images/019640ff-2ac2-78f1-8644-b928b4fbd18a_0_1276_1320_381_310_0.jpg)

Fig. 1. Grasp pose defined by the edge grasp $\left( {{p}_{a},{p}_{c}}\right)$ . The reference frame of the gripper is illustrated by the RGB coordinate system. ${G}_{w}$ and ${G}_{d}$ are the gripper width and gripper depth.
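The pose construction above reduces to a few vector operations. Below is a minimal NumPy sketch; the normalization of ${a}_{ac}$ and the column ordering of $R$ are plausible conventions we have filled in (the text fixes the closing and approach directions but not the axis ordering), and the function name is ours.

```python
import numpy as np

def edge_grasp_pose(p_a, p_c, n_c, G_d):
    """Grasp pose (C, R) from an edge grasp (p_a, p_c) with contact normal n_c."""
    n_c = n_c / np.linalg.norm(n_c)
    # Approach direction: a_ac = n_c x (n_c x (p_a - p_c)), normalized.
    a_ac = np.cross(n_c, np.cross(n_c, p_a - p_c))
    a_ac = a_ac / np.linalg.norm(a_ac)
    # Gripper center: delta = G_d + (p_a - p_c)^T a_ac, C = p_a - delta * a_ac.
    delta = G_d + (p_a - p_c) @ a_ac
    C = p_a - delta * a_ac
    # Orientation: fingers close along n_c, hand approaches along a_ac; a_ac is
    # perpendicular to n_c by construction, so the cross product completes a
    # right-handed orthonormal frame.
    b = np.cross(n_c, a_ac)
    R = np.stack([b, n_c, a_ac], axis=1)   # columns: binormal, closing, approach
    return C, R
```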
To sample edge grasps, we generally sample the approach point ${p}_{a}$ first and then, for each approach point, sample multiple contact points ${p}_{c}$ from the neighbors of ${p}_{a}$ within a distance of $\frac{{G}_{w}}{2}$ , where ${G}_{w}$ denotes the aperture of the gripper, i.e. the distance between the fingers when the gripper is open. One key advantage of this representation is that we can easily provide the approximate position of a desired grasp as an input to the model. If we want to grasp a tool by its handle, for example, this is easily achieved by only considering contact locations on the handle.

![019640ff-2ac2-78f1-8644-b928b4fbd18a_1_213_146_1379_296_0.jpg](images/019640ff-2ac2-78f1-8644-b928b4fbd18a_1_213_146_1379_296_0.jpg)

Fig. 2. Encoding process of edge grasps. The rightmost part shows the grasp represented by one edge feature.

## B. Model Architecture

Our model, which we call the Edge Grasp Network, evaluates the grasp quality for a set of edge grasps that have a single approach point ${p}_{a} \in P$ in common. We evaluate multiple approach points by cropping the cloud around each of them separately and then placing the crops in a batch. There are four steps, as illustrated in Figure 2.

Step 1: Crop Point Cloud. Given a point cloud $P$ and an approach point ${p}_{a}$ , only a set of neighboring points of ${p}_{a}$ affects the edge grasp. We crop the point cloud to a ball around ${p}_{a}$ :

$$
{S}_{a} = \left\{ {p \in P : {\begin{Vmatrix}p - {p}_{a}\end{Vmatrix}}_{2} \leq {G}_{w}/2}\right\} .
$$

Step 2: PointNetConv $\left( \psi \right)$ . We compute a feature at each point using a stack of PointNetConv layers [21], denoted $\psi$ . Each layer calculates a new feature ${f}_{i}^{\left( \ell + 1\right) }$ at each point ${p}_{i} \in {S}_{a}$ using

$$
{f}_{i}^{\left( \ell + 1\right) } = \mathop{\max }\limits_{{j \in \mathcal{N}\left( i\right) }}\operatorname{MLP}\left( {{f}_{j}^{\left( \ell \right) },{p}_{j} - {p}_{i}}\right) , \tag{1}
$$

where $\mathcal{N}\left( i\right)$ denotes the $k$ -nearest neighbors of ${p}_{i}$ . Here, ${f}_{j}^{\left( \ell \right) }$ denotes the feature at point ${p}_{j}$ prior to the layer, and max denotes max-pooling taken over features (as in PointNet [20]). MLP is a 2-layer multi-layer perceptron that takes both arguments as input. The input features at the first layer are the positions and surface normals of the points. Let ${F}_{{S}_{a}}$ denote the set of features for the points in ${S}_{a}$ at the output of Step 2.

Step 3: Compute Global Feature $\left( \omega \right)$ . $\omega$ takes ${F}_{{S}_{a}}$ as input and generates a single global feature ${g}_{a}$ that describes ${S}_{a}$ . First, ${F}_{{S}_{a}}$ is passed to an MLP followed by a max-pooling layer (over features) to generate a first-level global feature. This is concatenated with each feature $f \in {F}_{{S}_{a}}$ and passed to a second MLP and max-pooling layer to output ${g}_{a}$ . Finally, for each edge grasp $\left( {{p}_{a},{p}_{c}}\right) \in {P}^{2}$ associated with ${p}_{a}$ , we calculate an edge feature ${f}_{ac} \in {F}_{ac}$ by concatenating ${g}_{a}$ with the point feature ${f}_{c} \in {F}_{{S}_{a}}$ corresponding to ${p}_{c}$ . This edge feature will represent the edge grasp to the classifier.
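A minimal PyTorch-style sketch of the Eq. (1) layer used in Step 2 follows; in practice a library implementation such as PyTorch Geometric's PointNetConv can be used, and the class name, dimensions, and neighbor-index format here are our illustrative choices.

```python
import torch
import torch.nn as nn

class PointNetConvLayer(nn.Module):
    """One layer of Eq. (1): f_i <- max_{j in N(i)} MLP(f_j, p_j - p_i)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        # 2-layer MLP applied to each (neighbor feature, relative offset) pair.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim + 3, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, feats, pts, knn_idx):
        # feats: (N, F) point features; pts: (N, 3) positions;
        # knn_idx: (N, k) indices of the k-nearest neighbors of each point.
        rel = pts[knn_idx] - pts[:, None, :]             # (N, k, 3): p_j - p_i
        msg = torch.cat([feats[knn_idx], rel], dim=-1)   # (N, k, F + 3)
        return self.mlp(msg).max(dim=1).values           # max-pool over neighbors
```

At the first layer, `feats` would hold each point's position and surface normal (so `in_dim = 6`).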
Step 4: Grasp Classification. After calculating the edge features ${F}_{ac}$ , we predict grasp success using a four-layer MLP with a sigmoid output, which takes an edge feature ${f}_{ac}$ as input and infers whether the corresponding edge grasp will succeed.

## C. $\mathrm{{SO}}\left( 3\right)$ Invariance of Edge Grasp Network

In Section II, we noted that the grasp quality function $\Phi \left( {P,\alpha }\right)$ is invariant to translation and rotation, i.e. $\Phi \left( {g \cdot P, g \cdot \alpha }\right) = \Phi \left( {P,\alpha }\right)$ for arbitrary $g \in \mathrm{{SE}}\left( 3\right)$ . As presented above, the Edge Grasp Network is invariant to translation because each ${S}_{a}$ is centered at the approach point ${p}_{a}$ (we translate ${p}_{a}$ to the origin of the world frame). However, additional methodology is required to create invariance to rotations. Rotational invariance allows the model to generalize grasp knowledge from one orientation to another. We enable rotational invariance with two different approaches. The first approach is to apply data augmentation on ${S}_{a}$ to learn $\mathrm{{SO}}\left( 3\right)$ invariance during training. Our second approach is to use an $\mathrm{{SO}}\left( 3\right)$ -equivariant model, Vector Neurons [5]. Vector Neurons can be applied to nearly any neural model architecture by encoding the ${\mathbb{R}}^{3}$ along which $\mathrm{{SO}}\left( 3\right)$ acts as a separate tensor axis. As we show in Section IV-C, leveraging SO(3) symmetries is beneficial when learning a grasp function.

## IV. SIMULATIONS

We benchmarked our method in simulation against three strong baselines: PointNetGPD [14], VGN [2], and GIGA [8]. To make the comparison as fair as possible, we used the same simulator developed by Breyer et al. [2] and used by Jiang et al. [8]. There are two types of simulated grasp environments, PACKED and PILED. In PACKED, objects are placed randomly in an upright configuration in close proximity, e.g. as shown in Figure 3 (left). In PILED, objects are dumped randomly from a box into a pile.

## A. Experimental Protocol:

We evaluate our model over several rounds of testing. During each round, a pile or packed scene with 5 test objects is generated inside of a ${30} \times {30} \times {30}{\mathrm{\;{cm}}}^{3}$ workspace and the system begins grasping one object at a time. Prior to each grasp, we take a depth image of the scene from a direction above the table to extract the point cloud or TSDF, and pass it to the model. After receiving grasp scores from the model, we execute the grasp with the highest quality score. A round of testing ends when either all objects are cleared or two consecutive grasp failures occur. Performance is measured over 100 simulation rounds with 5 different random seeds in terms of: 1) Grasp Success Rate, $\mathrm{{GSR}} = \frac{\#\text{ successful grasps }}{\#\text{ total grasps }}$ , and 2) Declutter Rate, $\mathrm{{DR}} = \frac{\#\text{ grasped objects }}{\#\text{ total objects }}$ . The results are reported in Table I. Detailed descriptions of the baselines and training can be found in Appendices VIII-E and VIII-D.

![019640ff-2ac2-78f1-8644-b928b4fbd18a_2_204_148_617_243_0.jpg](images/019640ff-2ac2-78f1-8644-b928b4fbd18a_2_204_148_617_243_0.jpg)

Fig. 3. Left: the packed scenario; Right: the pile scenario.

TABLE I. Quantitative results of clutter removal. Edge-Sample randomly samples edges that do not collide with the table. EdgeGraspNet is the version of our method trained with data augmentation. VN-EdgeGraspNet is the version with Vector Neurons. GIGA-High queries at a higher resolution of ${60} \times {60} \times {60}$ .
| Method | Packed GSR (%) | Packed DR (%) | Pile GSR (%) | Pile DR (%) |
| --- | --- | --- | --- | --- |
| PointNetGPD | ${79.3} \pm {1.8}$ | ${82.5} \pm {2.9}$ | ${75.6} \pm {2.3}$ | ${77.0} \pm {2.8}$ |
| VGN | ${80.2} \pm {1.6}$ | ${86.2} \pm {2.0}$ | ${64.9} \pm {2.2}$ | ${69.1} \pm {3.2}$ |
| GIGA | ${85.3} \pm {1.9}$ | ${91.2} \pm {1.7}$ | ${69.9} \pm {1.8}$ | ${75.2} \pm {2.2}$ |
| GIGA-High | ${88.5} \pm {2.0}$ | ${93.9} \pm {1.4}$ | ${74.1} \pm {1.5}$ | ${80.1} \pm {0.5}$ |
| Edge-Sample | ${44.0} \pm {4.0}$ | ${39.7} \pm {4.5}$ | ${40.2} \pm {2.5}$ | ${30.9} \pm {3.2}$ |
| EdgeGraspNet | ${92.0} \pm {1.4}$ | ${94.8} \pm {0.8}$ | ${89.9} \pm {1.8}$ | ${92.8} \pm {1.6}$ |
| VN-EdgeGraspNet | ${92.3} \pm {1.2}$ | ${95.2} \pm {0.6}$ | ${92.3} \pm {1.5}$ | ${93.5} \pm {1.8}$ |
| Method | PointNetGPD | VGN | GIGA | GIGA-High | EdgeGraspNet | VN-EdgeGraspNet |
| --- | --- | --- | --- | --- | --- | --- |
| # of Parameters | 1.6 M | 0.3 M | 0.6 M | 0.6 M | 3.0 M | 1.7 M |
| Inference time | 382 ms | 10 ms | 21 ms | 50 ms | 28 ms | 89 ms |

TABLE II. Number of parameters and inference time for the proposed methods and baselines. Evaluated on one NVIDIA GeForce RTX 3090.

## B. Results Analysis:

We draw several conclusions from Table I. First, our sample strategy unadorned with grasp quality inference (Edge-Sample) already achieves a grasp success rate of between ${40}\%$ and ${44}\%$ . This suggests our edge grasp representation and sampling strategy provide a helpful bias. Second, both EdgeGraspNet and VN-EdgeGraspNet outperform all the baselines in all performance categories by a significant margin, particularly in the PILE category. Third, the performance gap between the packed and piled scenarios is smaller for our method than for the baselines, which suggests that our model adapts better to different object configurations. Finally, one concern with most sample-based methods is inference time, since they need to evaluate each grasp individually. However, our method makes use of the shared global features and achieves real-time inference. Detailed inference time analyses can be found in Appendix VIII-F.

## C. Vector Neurons and Data Augmentation:

To investigate the role of $\mathrm{{SO}}\left( 3\right)$ invariance, we compared our base version of EdgeGraspNet with a variation that omits data augmentation (EdgeGraspNet-NoAug) and with VN-EdgeGraspNet.

As shown in Figure 4, the Vector Neurons version performs best and learns fastest, and the base EdgeGraspNet converges to approximately the same level. However, without either Vector Neurons or data augmentation, the model overfits. This demonstrates that leveraging $\mathrm{{SO}}\left( 3\right)$ symmetry is beneficial to learning the grasp function.

![019640ff-2ac2-78f1-8644-b928b4fbd18a_2_1300_160_359_343_0.jpg](images/019640ff-2ac2-78f1-8644-b928b4fbd18a_2_1300_160_359_343_0.jpg)

Fig. 4. Test loss functions showing the effect of data augmentation and Vector Neurons.

## D. Ablation study on cropping ${S}_{a}$

![019640ff-2ac2-78f1-8644-b928b4fbd18a_2_912_667_733_367_0.jpg](images/019640ff-2ac2-78f1-8644-b928b4fbd18a_2_912_667_733_367_0.jpg)

Fig. 5. Ablation study on cropping ${S}_{a}$ . Left: test loss vs. epoch; Right: test accuracy vs. epoch. The results show the effect of cropping ${S}_{a}$ .

We compare our EdgeGraspNet with a variation that skips cropping the point cloud around the approach point ${p}_{a}$ . After getting the observed point cloud $P$ , we build a KNN graph on $P$ and feed it to $\psi$ directly to get the point features ${F}_{P}$ . Then, we extract the global feature ${g}_{a}$ corresponding to ${p}_{a}$ from $\left\{ {{f}_{p} \in {F}_{P} \mid p \in {S}_{a}}\right\}$ . Instead of translating ${p}_{a}$ to the origin of the world frame, we center $P$ , the entire observed point cloud, at the origin. Except for these variations, all other operations are the same. We denote this variation EdgeGraspNet-NoBall. Figure 5 shows the results of our model and this variation, and indicates that operating on ${S}_{a}$ is better than operating on $P$ . There are several reasons why ${S}_{a}$ is preferable to $P$ . First, $P$ is a special case of ${S}_{a}$ obtained by setting the radius of the sphere to infinity. Second, ${S}_{a}$ includes all the points that affect the grasp quality, without redundant information. Last but not least, the invariance property on ${S}_{a}$ is more general than that on $P$ .
Given a $g \in \mathrm{{SO}}\left( 3\right)$ , a grasp action $\alpha$ , and a grasp evaluation function $\Psi$ , the invariance of EdgeGraspNet can be written as

$$
\Psi \left( {g \cdot {S}_{a}, g \cdot \alpha }\right) = \Psi \left( {{S}_{a},\alpha }\right) .
$$

In contrast, EdgeGraspNet-NoBall is only invariant to rotations of the entire point cloud, $\Psi \left( {g \cdot P, g \cdot \alpha }\right) = \Psi \left( {P,\alpha }\right)$ , which is less general.

## V. EVALUATION ON A ROBOT

In this paper, we measure physical grasp performance in three different setups with 4 object sets, as shown in Figure 7. Our model, trained entirely in simulation, is deployed directly on a real robot.

![019640ff-2ac2-78f1-8644-b928b4fbd18a_3_187_148_638_257_0.jpg](images/019640ff-2ac2-78f1-8644-b928b4fbd18a_3_187_148_638_257_0.jpg)

Fig. 6. Robot setup. Left: the robot takes a depth image of the scene from a random viewpoint. Right: the robot grasps the red adversarial object from a localized graspable part.

![019640ff-2ac2-78f1-8644-b928b4fbd18a_3_145_487_721_353_0.jpg](images/019640ff-2ac2-78f1-8644-b928b4fbd18a_3_145_487_721_353_0.jpg)

Fig. 7. Object sets and test configurations used for real robot experiments. From left column to right column: packed scene with 10 objects; pile scene with 10 objects; 20 test hard objects [31]; 12 Berkeley adversarial objects [16].

## VI. SETUP

We used a UR5 robot equipped with a Robotiq-85 Gripper, as shown in Figure 6. An Occipital Structure Sensor was mounted on the arm to capture the observation. Prior to each grasp, we move the sensor to a randomly selected viewpoint ${}^{1}$ (pointing toward the objects to be grasped, as shown in Figure 6(a)), take a depth image, and generate a point cloud. We detect and remove the table plane with RANSAC, and we denoise and downsample the point cloud using Open3D [29]. For each observed point cloud, we sample 40 approach points and 2000 grasps in total. After running inference, we filter out grasps with a quality score below 0.9. Following the procedure in [2] and [6], we select the highest (largest $z$-coordinate) above-threshold candidate for execution.

## A. Results

Household Objects in the Packed and Pile Settings: This experiment evaluates our method in the packed and piled settings described in Section IV. In each round, 5 objects are randomly selected from a set of 10. Table III reports grasp success rates and declutter rates from 16 rounds (80 objects total). GSRs vary between ${91.7}\%$ and ${93.0}\%$ , closely matching our simulated results and indicating a small sim-to-real gap for our method.
| Method | Packed GSR (%) | Packed DR (%) | Pile GSR (%) | Pile DR (%) |
| --- | --- | --- | --- | --- |
| EdgeGraspNet | 91.9 (80/87) | 100 (80/80) | 93.0 (80/86) | 100 (80/80) |
| VN-EdgeGraspNet | 91.7 (78/85) | 98.7 (79/80) | 92.9 (79/85) | 98.7 (79/80) |

TABLE III. Results of real-robot experiments for the packed and piled grasp settings.

Comparison with Zhu et al. [31] on test hard objects:

This experiment compares our method against the method of Zhu et al. [31], a strong baseline from the literature. In each round, 10 objects are randomly selected and dumped on the table. Table IV shows the results from 15 runs. VN-EdgeGraspNet outperforms [31] by about four percentage points in terms of both the grasp success rate and the declutter rate, a significant improvement over a strong baseline.
| Method | GSR (%) | DR (%) |
| --- | --- | --- |
| Zhu et al. [31] | 89.0 (138/155) | 94.0 (141/150) |
| EdgeGraspNet | 91.8 (146/159) | 98.0 (147/150) |
| VN-EdgeGraspNet | 93.6 (148/159) | 98.6 (148/150) |

TABLE IV. Comparison with the method of Zhu et al. [31] using exactly the same objects and setup.

Comparison with [3] on the Berkeley Adversarial Pile: We also baselined our method using the 12 Berkeley adversarial objects described in [16], shown in Figure 7. Here, we compare our method to the work of Cai et al. [3], called Volumetric Point Network (VPN). Table V shows the performance comparison. The results indicate that our method outperforms all the baselines. Our final grasp success rate is ${84.4}\%$ , a strong result for the Berkeley adversarial object set.
| Method | GSR (%) | DR (%) |
| --- | --- | --- |
| Gualtieri et al. [6]* | 70.91 (39/55) | 97.5 (39/40) |
| Breyer et al. [2]* | 41.56 (32/77) | 80 (32/40) |
| Cai et al. [3]* | 78.4 (40/51) | 100 (40/40) |
| EdgeGraspNet | 84.4 (38/45) | 95.0 (38/40) |
| VN-EdgeGraspNet | 83.0 (40/48) | 100 (40/40) |

TABLE V. Comparison with VPN [3], GPD [6], and VGN [2] for the Berkeley adversarial objects in a pile setting. We performed five rounds of grasping with piles of eight objects in each. *Results for VPN [3], GPD [6], and VGN [2] are copied directly from [3].

## VII. CONCLUSION

This paper proposes a novel edge representation for the 6-DoF grasp detection problem. By formulating the grasp pose with an approach point, a contact point, and its surface normal, we represent edge grasps by local features of the contacts and global features of the related points. We explore the $\mathrm{{SE}}\left( 3\right)$ symmetry of our representation and propose EdgeGraspNet and VN-EdgeGraspNet to leverage $\mathrm{{SE}}\left( 3\right)$ invariance in two different ways. Finally, we evaluate our models on various simulated and real-world object sets against several strong baselines. Experiments show that our method has a small sim-to-real gap, a high grasp success rate, and the ability to generalize to different object sets. A clear direction for future work is to integrate more on-policy learning, which we believe would further improve performance.

## REFERENCES

[1] Antonio Bicchi. On the closure properties of robotic grasping. The International Journal of Robotics Research, 14(4):319-334, 1995.

---

${}^{1}$ We randomly select a viewpoint and use it repeatedly.

---

[2] Michel Breyer, Jen Jen Chung, Lionel Ott, Roland Siegwart, and Juan Nieto. Volumetric grasping network: Real-time 6 dof grasp detection in clutter. arXiv preprint arXiv:2101.01132, 2021.

[3] Junhao Cai, Jun Cen, Haokun Wang, and Michael Yu Wang. Real-time collision-free grasp pose detection with geometry-aware refinement using high-resolution volume. IEEE Robotics and Automation Letters, 7(2):1888-1895, 2022.

[4] Berk Calli, Arjun Singh, Aaron Walsman, Siddhartha Srinivasa, Pieter Abbeel, and Aaron M Dollar. The ycb object and model set: Towards common benchmarks for manipulation research. In 2015 International Conference on Advanced Robotics (ICAR), pages 510-517. IEEE, 2015.

[5] Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, and Leonidas J Guibas. Vector neurons: A general framework for so(3)-equivariant networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12200-12209, 2021.

[6] Marcus Gualtieri, Andreas Ten Pas, Kate Saenko, and Robert Platt. High precision grasp pose detection in dense clutter. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 598-605. IEEE, 2016.

[7] Haojie Huang, Dian Wang, Robin Walters, and Robert Platt. Equivariant transporter network. arXiv preprint arXiv:2202.09400, 2022.

[8] Zhenyu Jiang, Yifeng Zhu, Maxwell Svetlik, Kuan Fang, and Yuke Zhu. Synergies between affordance and geometry: 6-dof grasp detection via implicit representations. arXiv preprint arXiv:2104.01542, 2021.

[9] Daniel Kappler, Jeannette Bohg, and Stefan Schaal. Leveraging big data for grasp planning. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 4304-4311. IEEE, 2015.

[10] Alexander Kasper, Zhixing Xue, and Rüdiger Dillmann. The kit object models database: An object model database for object recognition, localization and manipulation in service robotics. The International Journal of Robotics Research, 31(8):927-934, 2012.

[11] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] Sulabh Kumra, Shirin Joshi, and Ferat Sahin. Antipodal robotic grasping using generative residual convolutional neural network. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9626-9633. IEEE, 2020.

[13] Ian Lenz, Honglak Lee, and Ashutosh Saxena. Deep learning for detecting robotic grasps. The International Journal of Robotics Research, 34(4-5):705-724, 2015.

[14] Hongzhuo Liang, Xiaojian Ma, Shuang Li, Michael Görner, Song Tang, Bin Fang, Fuchun Sun, and Jianwei Zhang. Pointnetgpd: Detecting grasp configurations from point sets. In 2019 International Conference on Robotics and Automation (ICRA), pages 3629-3635. IEEE, 2019.

[15] Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Aparicio Ojea, and Ken Goldberg. Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312, 2017.

[16] Jeffrey Mahler, Matthew Matl, Vishal Satish, Michael Danielczuk, Bill DeRose, Stephen McKinley, and Ken Goldberg. Learning ambidextrous robot grasping policies. Science Robotics, 4(26):eaau4984, 2019.

[17] Douglas Morrison, Peter Corke, and Jürgen Leitner. Closing the loop for robotic grasping: A real-time, generative grasp synthesis approach. arXiv preprint arXiv:1804.05172, 2018.

[18] Arsalan Mousavian, Clemens Eppner, and Dieter Fox. 6-dof graspnet: Variational grasp generation for object manipulation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2901-2910, 2019.

[19] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010.

[20] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652-660, 2017.

[21] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30, 2017.

[22] Yuzhe Qin, Rui Chen, Hao Zhu, Meng Song, Jing Xu, and Hao Su. S4g: Amodal single-view single-shot se(3) grasp detection in cluttered scenes. In Conference on Robot Learning, pages 53-65. PMLR, 2020.

[23] Anthony Simeonov, Yilun Du, Andrea Tagliasacchi, Joshua B Tenenbaum, Alberto Rodriguez, Pulkit Agrawal, and Vincent Sitzmann. Neural descriptor fields: Se(3)-equivariant object representations for manipulation. In 2022 International Conference on Robotics and Automation (ICRA), pages 6394-6400. IEEE, 2022.

[24] Arjun Singh, James Sha, Karthik S Narayan, Tudor Achim, and Pieter Abbeel. Bigbird: A large-scale 3d database of object instances. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 509-516. IEEE, 2014.

[25] Andreas ten Pas, Marcus Gualtieri, Kate Saenko, and Robert Platt. Grasp pose detection in point clouds. The International Journal of Robotics Research, 36(13-14):1455-1473, 2017.

[26] Dian Wang, Robin Walters, Xupeng Zhu, and Robert Platt. Equivariant $q$ learning in spatial action spaces. In Conference on Robot Learning, pages 1713-1723. PMLR, 2022.

[27] Chaozheng Wu, Jian Chen, Qiaoyu Cao, Jianchi Zhang, Yunxin Tai, Lin Sun, and Kui Jia. Grasp proposal networks: An end-to-end solution for visual learning of robotic grasps.
Advances in Neural Information Processing Systems, 33:13174-13184, 2020. + +[28] Binglei Zhao, Hanbo Zhang, Xuguang Lan, Haoyu Wang, Zhiqiang Tian, and Nanning Zheng. Regnet: Region-based grasp network for end-to-end grasp detection in point clouds. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 13474-13480. IEEE, 2021. + +[29] Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Open3d: A modern library for 3d data processing. arXiv preprint arXiv:1801.09847, 2018. + +[30] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su, Robin Walters, and Robert Platt. Sample efficient grasp learning using equivariant models. Proceedings of Robotics: Science and Systems (RSS), 2022. + +[31] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su, Robin Walters, and Robert Platt. Sample efficient grasp learning using equivariant models. arXiv preprint arXiv:2202.09468, 2022. + +## VIII. APPENDIX + +## A. Grasp Sampling + +Edge Grasp Network enables us to evaluate a large number of edge grasps that share a single approach point with a single forward pass through the model. However, each different approach point necessitates evaluating the model separately. Therefore we adopt the following grasp sample strategy. First, we sample a small number of approach points ${P}_{a} \subset P$ . These approach points can be sampled uniformly at random from the cloud, or they can be focused on parts of the cloud where a grasp is preferred. Then, we evaluate the model once for all approach points by forming a minibatch of $\left| {P}_{a}\right|$ inputs and performing a single forward pass. The output of this is a set of sets of edge grasp features, ${F}_{{\left( ac\right) }_{1}},{F}_{{\left( ac\right) }_{2}},\ldots ,{F}_{{\left( ac\right) }_{\left| {P}_{a}\right| }}$ . One can take the union of these sets, sample $m$ edge grasps uniformly at random or select grasps with preferred gripper approach directions and gripper contact locations, and then run the grasp classifier on these sampled grasps to produce the final output. + +## B. Model + +We implemented the Edge Grasp Network model described in Section III-B. The input to the model is a downsampled point cloud created by voxelizing the input with a $4\mathrm{\;{mm}}$ voxel dimension. The PointNetConv layers in $\psi$ are implemented using a KNN graph with $k = {16}$ , i.e. with 16 nearest neighbors. $\psi$ is implemented as a sequence of three PointNetConv layers with a 2-layer MLP as the message passing function. The grasp classifier is implemented as a 4-layer MLP with ReLUs [19] and a sigmoid layer at the end. We evaluate both conventional and Vector Neuron versions of our model in simulated and real-robot experiments. + +## C. Data Augmentation + +Extensive data augmentation is applied to the conventional version of our model to force it to learn the $\mathrm{{SO}}\left( 3\right)$ invariance from training. Before loading the point cloud $P$ from the training dataset, we randomly sample a $g \in \mathrm{{SO}}\left( 3\right)$ to rotate $P$ . This results in rotations on the 32 cropped point clouds corresponding to each approach point, i.e., $\left\{ {g \cdot {S}_{{a}_{1}}, g \cdot {S}_{{a}_{2}},\ldots , g \cdot {S}_{{a}_{32}}}\right\}$ . Since ${S}_{a}$ is centered at ${p}_{a}$ , we then translate ${p}_{a}$ to the origin. A batch of 32 rotated and translated ${S}_{a}$ is fed to our model as the input during training. 
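A minimal sketch of this augmentation follows, using SciPy to draw a uniformly random rotation; the helper name and array layout are our illustrative choices.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def augment_local_cloud(S_a, normals, p_a, rng=None):
    """SO(3) data augmentation for one cropped cloud S_a (Appendix C).

    Applies a random rotation g to the points and surface normals, then
    centers the cloud so the approach point p_a sits at the origin.
    """
    g = Rotation.random(random_state=rng).as_matrix()  # uniform over SO(3)
    pts = (S_a - p_a) @ g.T    # g(p - p_a): rotate after centering at p_a
    nrm = normals @ g.T        # normals rotate but do not translate
    return pts, nrm
```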
Since the Vector Neurons version of our model obtains $\mathrm{{SO}}\left( 3\right)$ invariance by mathematical constraint, in this case only a translation is applied to each ${S}_{a}$ .

## D. Training

The grasp simulator developed by Breyer et al. [2] includes a Franka-Emika Panda gripper. There are 303 training objects and 40 test objects drawn collectively from YCB [4], BigBird [24] and other sources [10, 9]. We created training data by generating both packed and piled scenes with a random number of objects in simulation, adding Gaussian noise to the depth images captured from random camera views, voxelizing the point cloud, generating up to 2000 edge grasp candidates per scene, and labeling each of those candidates by attempting a grasp in simulation. To generate the 2000 edge grasp candidates, we sample 32 approach points uniformly at random from the voxelized cloud. In total, we generated ${3.36}\mathrm{\;M}$ labeled grasps based on 3,317 scenes, 85% of which were used for training and 15% for testing. We train our model with the Adam [11] optimizer and an initial learning rate of ${10}^{-4}$ . The learning rate is reduced by a factor of 2 when the test loss has stopped improving for 6 epochs. It takes about 0.5 seconds to complete one SGD step with a batch size of 32 on an NVIDIA Tesla V100 SXM2 GPU. We train the model for 150 epochs and balance the positive and negative grasp labels during training. Both VN-EdgeGraspNet and EdgeGraspNet converge in less than 10 hours.

## E. Baselines for Simulation Experiments

We compare our method against three strong baselines in Section IV. PointNetGPD [14] is a sample-based method that represents a candidate grasp pose by the canonicalized points inside the gripper and infers grasp quality using a PointNet [20] model. VGN [2] (Volumetric Grasping Network) takes a TSDF of the workspace as input and outputs the grasp orientation and quality at each voxel. GIGA [8] (Grasp detection via Implicit Geometry and Affordance) uses a structured implicit neural representation from 2D feature grids and generates the grasp orientation and quality for each point, trained with an auxiliary occupancy loss. Both VGN and GIGA receive a ${40} \times {40} \times {40}$ TSDF based on output from a single depth image. We also evaluate a variation of GIGA with a ${60} \times {60} \times {60}$ resolution TSDF, which we refer to as GIGA-High. We use the pretrained models ${}^{2}$ of VGN and GIGA from Jiang et al. [8] and uniformly sample 64 approach points and 4000 grasps for our method and PointNetGPD. As shown in Table II, the pretrained VGN and GIGA models have fewer parameters than our method due to their TSDF input. While our model requires more parameters to operate on point clouds, all compared models are relatively lightweight.

## F. Performance Considerations

Inference Time: Table II shows the time needed by the various models to infer grasp qualities. At 28 ms per 4,000 grasps, our EdgeGraspNet model is slightly slower than both VGN and GIGA but still much faster than PointNetGPD and GIGA-High. The Vector Neurons version of our model is about three times slower than the EdgeGraspNet model.

Performance of different sample sizes: The speed and performance of our model are closely tied to the number of approach points (which determines batch size) and the number of classified grasps. Table VI shows that fewer approach points and grasp samples reduce grasp success somewhat, but not by a huge amount. As shown in Table VII, when we double the number of approach points, the inference time increases by about 1.7 times. As shown in Table VIII, when we fix the number of approach points and increase the number of sampled edge grasps, the inference time is essentially unchanged.
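These timings rest on the batching described in Appendix VIII-A: all approach points are cropped, centered, and scored in one forward pass. The sketch below illustrates that flow; `model`, the helper name, and the list-of-crops interface are illustrative assumptions rather than our exact implementation.

```python
import torch

def score_edge_grasps(model, cloud, normals, approach_idx, G_w):
    """Score the edge grasps of a batch of approach points in a single pass.

    cloud, normals: (N, 3) tensors; approach_idx: indices of sampled p_a.
    Each approach point contributes one neighborhood cropped to radius G_w/2
    and centered at p_a; the model scores all crops as one minibatch.
    """
    crops = []
    for a in approach_idx:
        mask = (cloud - cloud[a]).norm(dim=1) <= G_w / 2   # ball around p_a
        crops.append((cloud[mask] - cloud[a], normals[mask]))
    with torch.no_grad():
        return model(crops)   # one forward pass for |P_a| approach points
```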
---

${}^{2}$ Our trained models for VGN and GIGA on the dataset described above in Section VIII-D did not perform as well as the pretrained models from Jiang et al. [8]. This is probably because they train separate models for the PACKED and PILE scenarios with a larger dataset (4M labeled grasps per scenario). We used their pretrained models for the evaluations.

---

![019640ff-2ac2-78f1-8644-b928b4fbd18a_7_134_142_1536_1012_0.jpg](images/019640ff-2ac2-78f1-8644-b928b4fbd18a_7_134_142_1536_1012_0.jpg)

Fig. 8. Illustrations of grasp candidates found using our algorithm. The first two rows show three examples of a gripper placed at randomly sampled grasp candidate configurations. The last row shows five grasps that share the same contact point.
| Method | Packed GSR (%) | Packed DR (%) | Pile GSR (%) | Pile DR (%) |
| --- | --- | --- | --- | --- |
| EdgeGraspNet (16-1k) | ${88.5} \pm {1.7}$ | ${92.6} \pm {1.4}$ | ${84.8} \pm {2.1}$ | ${86.7} \pm {3.3}$ |
| EdgeGraspNet (32-2k) | ${91.4} \pm {1.5}$ | ${94.0} \pm {2.0}$ | ${89.4} \pm {1.3}$ | ${91.2} \pm {2.5}$ |
| EdgeGraspNet (64-4k) | ${92.0} \pm {1.4}$ | ${94.8} \pm {0.8}$ | ${89.9} \pm {1.8}$ | ${92.8} \pm {1.6}$ |
| VN-EdgeGraspNet (16-1k) | ${89.7} \pm {2.4}$ | ${92.2} \pm {1.6}$ | ${87.1} \pm {0.8}$ | ${88.5} \pm {2.3}$ |
| VN-EdgeGraspNet (32-2k) | ${91.4} \pm {1.3}$ | ${93.8} \pm {2.0}$ | ${89.3} \pm {0.5}$ | ${92.1} \pm {1.8}$ |
| VN-EdgeGraspNet (64-4k) | ${92.3} \pm {1.2}$ | ${95.2} \pm {0.6}$ | ${92.3} \pm {1.5}$ | ${93.5} \pm {1.8}$ |

TABLE VI. Grasp performance for different numbers of approach points (16, 32, and 64) and grasp samples (1000, 2000, and 4000).

TABLE VII. Inference time vs. number of approach points. We sample different numbers of approach points (16, 32, and 64) with the same number (2000) of edge grasps. Evaluated on one NVIDIA GeForce RTX 3090.
| Method | 16-2k | 32-2k | 64-2k |
| --- | --- | --- | --- |
| EdgeGraspNet | 9.6 ms | 15.8 ms | 27.4 ms |
| Method | 32-500 | 32-1k | 32-2k |
| --- | --- | --- | --- |
| EdgeGraspNet | 15.8 ms | 15.7 ms | 15.8 ms |

TABLE VIII. Inference time vs. number of sampled edge grasps. We sample different numbers of edge grasps (500, 1000, and 2000) with the same number (32) of approach points. Evaluated on one NVIDIA GeForce RTX 3090.

## G. Visualization of Grasps

We show grasp candidates found using our algorithm in Figure 8. The first two rows show three examples of randomly sampled grasp poses for each observed object. The diversity of grasp poses demonstrates that our model provides high coverage of possible stable grasps. The last row of Figure 8 shows five grasps that share the same contact point. This indicates that our model is well suited to grasping tasks involving specific contact locations.

## IX. RELATED WORK

## A. 6-DoF grasping methods

There are two main types of 6-DoF grasping methods in recent research. Sample-based methods like GPD [25], PointNetGPD [14], and GraspNet [18] are often comprised of a grasp sampler module and a grasp evaluator module. These methods often require long training and execution times since each grasp is represented and evaluated individually. In contrast, our method uses shared features to represent different grasps and achieves greater computational efficiency. Element-wise prediction methods include point-based methods [3, 22, 27, 28] and volumetric-based methods [2, 8]. They estimate grasp qualities for all points or voxels of interest with a single feed-forward pass. For instance, S4G [22] generates each point feature through PointNet++ [21] and predicts the grasp quality and the grasp pose together. REGNet [28] considers the geometry of a radius sphere around the sampled points and regresses the orientations. However, the grasp distribution is a multi-modal function, and regression methods predict only one grasp pose for a single point, which may cause ambiguity when multiple graspable poses are valid at that position. Classification methods can generate distributions over multiple grasps at a single point, but copious amounts of data are often required. Volumetric-based methods [2, 8] use well-structured voxels instead of an unordered set of points. The memory requirements for voxel grids or SDFs are cubic in the resolution of the grid and therefore severely limit the resolution at which these methods can be applied.

## B. Grasp Pose Representation

Grasp representation matters in evaluating and refining grasp poses. Most sample-based methods have a clear representation of the grasp pose. GPD [25] projects the points around the gripper into canonical planes; PointNetGPD [14] feeds the points inside the gripper to PointNet; GraspNet [18] represents the grasp pose with a set of points on the gripper. On the other hand, element-wise methods [3, 22, 27, 28, 2, 8] often avoid representing the grasp explicitly. Since the relative pose between the gripper and the point/voxel is unclear, they have to regress or classify some elements of the grasp pose. Our method has a clear representation of the grasp pose and satisfies both the multi-modal property of the grasp distribution and the friction constraint [1] at the contact point.

## C. Symmetries in Manipulation

Symmetries and equivariance have been shown to improve learning efficiency and generalization ability in many manipulation tasks [31, 26, 7, 23]. Zhu et al. [31] decouple rotation and translation symmetries to enable the robot to learn a planar grasp policy within 1.5 hours; Huang et al.
[7] achieve better sample efficiency and faster convergence speed in planar pick and place tasks with the use of ${C}_{n} \times {C}_{n}$ equivariance; Simeonov et al. [23] use Vector Neurons to get $\mathrm{{SE}}\left( 3\right)$ -equivariant object representations so that the model can manipulate objects in the same category with a few training demonstrations. Our method also leverages $\mathrm{{SE}}\left( 3\right)$ symmetry to learn faster and generalize better on 6-DoF grasping.
+ +This approach is novel relative to the literature in three ways: 1) First, our method of defining unique grasp candidates in terms of a pair of vertices in a graph is new; 2) Second, our inference model using a graph neural network defined with respect to a single approach point is novel; 3) Third, our model is the first $\mathrm{{SE}}\left( 3\right)$ grasp method that incorporates $\mathrm{{SO}}\left( 3\right)$ equivariance. + +§ II. PROBLEM STATEMENT + +The grasp detection problem is to locate a set of grasp poses in $\mathrm{{SE}}\left( 3\right)$ for a parallel-jaw gripper given input about the scene in the form of a point cloud. Denote the point cloud observation as $P = {\left\{ {p}_{i} \in {\mathbb{R}}^{3}\right\} }_{i = 1}^{n}$ , where $n$ is the number of points. For each point $p \in P$ , we will assume that an estimate of the object surface normal ${n}_{p} \in {S}^{2}$ can be calculated. Although it is not required, we generally assume that this point cloud is generated by a single depth camera. A grasp pose of the gripper is parameterized $\alpha = \left( {C,R}\right) \in \mathrm{{SE}}\left( 3\right)$ , where $C \in {\mathbb{R}}^{3}$ is the location of the center of the gripper and $R \in \mathrm{{SO}}\left( 3\right)$ represents its orientation. The grasp detection problem is to find a function $S : P \mapsto {\left\{ {\alpha }_{i} \in \mathrm{{SE}}\left( 3\right) \right\} }_{i = 1}^{m}$ , that maps $P$ onto $m$ grasp poses detected in the scene. The grasp evaluation problem is to find a function $\Phi : \left( {P,\alpha }\right) \mapsto \left\lbrack {0,1}\right\rbrack$ , that denotes the quality of grasp $\alpha$ . Notice that $\Phi$ is invariant to translation and rotation in the sense that $\Phi \left( {g \cdot P,g \cdot \alpha }\right) = \Phi \left( {P,\alpha }\right)$ for an arbitrary $g \in \mathrm{{SE}}\left( 3\right)$ . In other words, the predicted quality of a grasp attempt should be invariant to transformation of the object to be grasped and the grasp pose by the same rotation and translation. + +§ III. METHOD + +§ A. GRASP POSE REPRESENTATION + +We represent a grasp as a pair of points in the cloud, $\left( {{p}_{a},{p}_{c}}\right) \in {P}^{2}.{p}_{a}$ is considered to be the approach point and ${p}_{c}$ is the contact point. Assuming that we can estimate the object surface normal ${n}_{c}$ at point ${p}_{c}$ , $\left( {{p}_{a},{p}_{c}}\right)$ defines a grasp orientation $R$ where the gripper fingers move parallel to the vector ${n}_{c}$ and the gripper approaches the object along the vector ${a}_{ac} =$ ${n}_{c} \times \left( {{n}_{c} \times \left( {{p}_{a} - {p}_{c}}\right) }\right)$ . This is illustrated in Figure 1. The gripper center $C$ is positioned such that ${p}_{a}$ is directly between the fingers and ${p}_{c}$ is at a desired point of contact on the finger, $C = {p}_{a} - \delta {a}_{ac}$ . Here, $\delta = {G}_{d} + {\left( {p}_{a} - {p}_{c}\right) }^{T}{a}_{ac}$ denotes the distance between the center of the gripper and ${p}_{a}$ and ${G}_{d}$ denotes gripper depth. We will sometimes refer to a grasp defined this way as an edge grasp. + + < g r a p h i c s > + +Fig. 1. Grasp pose defined by the edge grasp $\left( {{p}_{a},{p}_{c}}\right)$ . The reference frame of the gripper is illustrated by the RGB coordinate system. ${G}_{w}$ and ${G}_{d}$ are the gripper width and gripper depth. 
+ +To sample edge grasps, we will generally sample the approach point ${p}_{a}$ first and then for each approach point sample multiple contact points ${p}_{c}$ from the neighbors of ${p}_{a}$ within the distance of $\frac{{G}_{w}}{2}$ , where ${G}_{w}$ denotes the aperture of the gripper, i.e. the distance between the fingers when the gripper is open. One key advantage of this representation is that we can easily provide the approximate position of a desired grasp as an input to the model. If we want to grasp a tool by its handle, for example, this is easily achieved by only considering contact locations on the handle. + + < g r a p h i c s > + +Fig. 2. Encoding process of edge grasps. The rightmost part shows the represented grasp of one edge feature. + +§ B. MODEL ARCHITECTURE + +Our model, which we call the Edge Grasp Network, evaluates the grasp quality for a set of edge grasps that have a single approach point ${p}_{a} \in P$ in common. We evaluate multiple approach points by cropping them separately and then placing them in a batch. There are four steps, as illustrated in Figure 2. + +Step 1: Crop Point Cloud. Given a point cloud $P$ and an approach point ${p}_{a}$ , only a set of neighboring points of ${p}_{a}$ affects the edge grasp. We crop the point cloud to a ball around ${p}_{a} :$ + +$$ +{S}_{a} = \left\{ {p \in P : {\begin{Vmatrix}p - {p}_{a}\end{Vmatrix}}_{2} \leq {G}_{w}/2}\right\} , +$$ + +Step 2: PointNetConv $\left( \psi \right)$ . We compute a feature at each point using a stack of PointNetConv layers [21], denoted $\psi$ . Each layer calculates a new feature ${f}_{i}^{\left( l + 1\right) }$ at each point ${p}_{i} \in {S}_{a}$ + +using + +$$ +{f}_{i}^{\left( \ell + 1\right) } = \mathop{\max }\limits_{{j \in \mathcal{N}\left( i\right) }}\operatorname{MLP}\left( {{f}_{j}^{\left( \ell \right) },{p}_{j} - {p}_{i}}\right) , \tag{1} +$$ + +where $\mathcal{N}\left( i\right)$ denotes the $k$ -nearest neighbors to ${p}_{i}$ . Here, ${f}_{j}^{\left( l\right) }$ denotes the feature at point ${p}_{j}$ prior to the layer, max denotes max-pooling where the max is taken over features (like in PointNet [20]). MLP is a 2-layer multi-layer perceptron that takes both parameters as input. The input features at the first layer are the positions and surface normals of the points. Let ${F}_{{S}_{a}}$ denote the set of features for the points in ${S}_{a}$ at the output of Step 2. + +Step 3: Compute Global Feature $\left( \omega \right) .\omega$ takes ${F}_{{S}_{a}}$ as input and generates a single global feature ${g}_{a}$ that describes ${S}_{a}$ . First, ${F}_{{S}_{a}}$ is passed to an MLP followed by a max-pooling layer (over features) to generate a first-level global feature. This is concatenated with each feature $f \in {F}_{{S}_{a}}$ and passed to a second MLP and max-pooling layer to output ${g}_{a}$ . Finally, for each edge grasp $\left( {{p}_{a},{p}_{c}}\right) \in {P}^{2}$ associated with ${p}_{a}$ , we calculate an edge feature ${f}_{ac} \in {F}_{ac}$ by concatenating ${g}_{a}$ with the point feature ${f}_{c} \in {F}_{{S}_{a}}$ corresponding to ${p}_{c}$ . This edge feature will represent the edge grasp to the classifier. Step 4: Grasp Classification. After calculating the edge features ${F}_{ac}$ , we predict grasp success using a four-layer MLP with a sigmoid function which takes an edge feature ${f}_{ac}$ as input and infers whether the corresponding edge grasp will succeed. + +§ C. 
§ C. $\mathrm{{SO}}\left( 3\right)$ INVARIANCE OF EDGE GRASP NETWORK

In Section II, we noted that the grasp quality function $\Phi \left( {P,\alpha }\right)$ is invariant to translation and rotation, i.e. $\Phi \left( {g \cdot P,g \cdot \alpha }\right) = \Phi \left( {P,\alpha }\right)$ for arbitrary $g \in \mathrm{{SE}}\left( 3\right)$ . As presented above, the Edge Grasp Network is invariant to translation because each ${S}_{a}$ is centered at the approach point ${p}_{a}$ (we translate ${p}_{a}$ to the origin of the world frame). However, additional methodology is required to create invariance to rotations. Rotational invariance allows the model to generalize grasp knowledge from one orientation to another. We enable rotational invariance with two different approaches. The first approach is to apply data augmentation on ${S}_{a}$ to learn $\mathrm{{SO}}\left( 3\right)$ invariance during training. Our second approach is to use an $\mathrm{{SO}}\left( 3\right)$ -equivariant model, Vector Neurons [5]. Vector Neurons can be applied to nearly any neural model architecture by encoding the ${\mathbb{R}}^{3}$ dimension along which $\mathrm{{SO}}\left( 3\right)$ acts as a separate tensor axis. As we show in Section IV-C, leveraging $\mathrm{{SO}}\left( 3\right)$ symmetries is beneficial for learning a grasp function.

§ IV. SIMULATIONS

We benchmarked our method in simulation against three strong baselines, PointNetGPD [14], VGN [2], and GIGA [8]. To make the comparison as fair as possible, we used the same simulator developed by Breyer et al. [2] and used by Jiang et al. [8]. There are two types of simulated grasp environments, PACKED and PILED. In PACKED, objects are placed randomly in an upright configuration in close proximity, e.g. as shown in Figure 3(a). In PILED, objects are dumped randomly from a box into a pile.

§ A. EXPERIMENTAL PROTOCOL

We evaluate our model over several rounds of testing. During each round, a pile or packed scene with 5 test objects is generated inside of a ${30} \times {30} \times {30}{\mathrm{\;{cm}}}^{3}$ workspace and the system begins grasping one object at a time. Prior to each grasp, we take a depth image of the scene from a direction above the table to extract the point cloud or TSDF, and pass it to the model. After receiving grasp scores from the model, we execute the grasp with the highest quality score. A round of testing ends when either all objects are cleared or two consecutive grasp failures occur. Performance is measured over 100 simulation rounds with 5 different random seeds in terms of: 1) Grasp Success Rate, $\mathrm{GSR} = \frac{\#\text{ successful grasps}}{\#\text{ total grasps}}$ ; and 2) Declutter Rate, $\mathrm{DR} = \frac{\#\text{ grasped objects}}{\#\text{ total objects}}$ . The results are reported in Table I. Detailed descriptions of the baselines and training can be found in Appendices VIII-E and VIII-D.

Fig. 3. Left: the packed scenario; Right: the pile scenario.

TABLE I. Quantitative results of clutter removal. Edge-Sample randomly samples edges that do not collide with the table. EdgeGraspNet is the version of our method trained with data augmentation. VN-EdgeGraspNet is the version with Vector Neurons. GIGA-High queries at a higher resolution of ${60} \times {60} \times {60}$ .
| Method | Packed GSR (%) | Packed DR (%) | Pile GSR (%) | Pile DR (%) |
| --- | --- | --- | --- | --- |
| PointNetGPD | ${79.3} \pm {1.8}$ | ${82.5} \pm {2.9}$ | ${75.6} \pm {2.3}$ | ${77.0} \pm {2.8}$ |
| VGN | ${80.2} \pm {1.6}$ | ${86.2} \pm {2.0}$ | ${64.9} \pm {2.2}$ | ${69.1} \pm {3.2}$ |
| GIGA | ${85.3} \pm {1.9}$ | ${91.2} \pm {1.7}$ | ${69.9} \pm {1.8}$ | ${75.2} \pm {2.2}$ |
| GIGA-High | ${88.5} \pm {2.0}$ | ${93.9} \pm {1.4}$ | ${74.1} \pm {1.5}$ | ${80.1} \pm {0.5}$ |
| Edge-Sample | ${44.0} \pm {4.0}$ | ${39.7} \pm {4.5}$ | ${40.2} \pm {2.5}$ | ${30.9} \pm {3.2}$ |
| EdgeGraspNet | ${92.0} \pm {1.4}$ | ${94.8} \pm {0.8}$ | ${89.9} \pm {1.8}$ | ${92.8} \pm {1.6}$ |
| VN-EdgeGraspNet | ${92.3} \pm {1.2}$ | ${95.2} \pm {0.6}$ | ${92.3} \pm {1.5}$ | ${93.5} \pm {1.8}$ |

| Method | PointNetGPD | VGN | GIGA | GIGA-High | EdgeGraspNet | VN-EdgeGraspNet |
| --- | --- | --- | --- | --- | --- | --- |
| # of parameters | 1.6 M | 0.3 M | 0.6 M | 0.6 M | 3.0 M | 1.7 M |
| Inference time | 382 ms | 10 ms | 21 ms | 50 ms | 28 ms | 89 ms |

TABLE II. Number of parameters and inference time for proposed methods and baselines. Evaluated on one NVIDIA GeForce RTX 3090.

§ B. RESULTS ANALYSIS

We draw several conclusions from Table I. First, our sampling strategy unadorned with grasp quality inference (Edge-Sample) already achieves a grasp success rate of between ${40}\%$ and ${44}\%$ . This suggests our edge grasp representation and sampling strategy provide a helpful bias. Second, both EdgeGraspNet and VN-EdgeGraspNet outperform all the baselines in all performance categories by a significant margin, particularly in the PILED category. Third, the performance gap between the packed and piled scenarios is smaller for our method than for the baselines, which suggests that our model adapts better to different object configurations. Finally, one concern with most sampling-based methods is inference time, since they need to evaluate each grasp individually. However, our method makes use of the shared global features and achieves real-time inference. A detailed inference time analysis can be found in Appendix VIII-F.

§ C. VECTOR NEURONS AND DATA AUGMENTATION

To investigate the role of $\mathrm{{SO}}\left( 3\right)$ invariance, we compared our base version of EdgeGraspNet with a variation that omits data augmentation (EdgeGraspNet-NoAug) and with VN-EdgeGraspNet.

As shown in Figure 4, the Vector Neurons version performs best and learns fastest, and the base EdgeGraspNet converges to approximately the same level. However, without either Vector Neurons or data augmentation, the model overfits. This demonstrates that leveraging $\mathrm{{SO}}\left( 3\right)$ symmetry is beneficial for learning the grasp function.

Fig. 4. Test loss curves showing the effect of data augmentation and Vector Neurons.

§ D. ABLATION STUDY ON CROPPING ${S}_{a}$

Fig. 5. Ablation study on cropping ${S}_{a}$ . Left: test loss vs. epoch; Right: test accuracy vs. epoch. The results show the effect of cropping ${S}_{a}$ .

We compare our EdgeGraspNet with a variation that skips cropping the point cloud around the approach point ${p}_{a}$ . After getting the observed point cloud $P$ , we build a KNN graph on $P$ and feed it to $\psi$ directly to get the point features ${F}_{P}$ . Then, we extract the global feature ${g}_{a}$ corresponding to ${p}_{a}$ from $\left\{ {{f}_{p} \in {F}_{P} \mid p \in {S}_{a}}\right\}$ .
Instead of translating ${p}_{a}$ to the origin of the world coordinate system, we center $P$ , the entire observed point cloud, at the origin. All other operations are the same. We denote this variation EdgeGraspNet-NoBall. Figure 5 shows the results of our model and this variation. It indicates that operating on ${S}_{a}$ is better than operating on $P$ . There are several reasons for this. First, $P$ is a special case of ${S}_{a}$ obtained by setting the radius of the sphere to infinity. Second, ${S}_{a}$ includes all the points that affect the grasp quality without redundant information. Last but not least, the invariance property on ${S}_{a}$ is more general than that on $P$ . Given a $g \in \mathrm{{SO}}\left( 3\right)$ , a grasp action $\alpha$ , and a grasp evaluation function $\Psi$ , the invariance of EdgeGraspNet can be written as

$$
\Psi \left( {g \cdot {S}_{a},g \cdot \alpha }\right) = \Psi \left( {{S}_{a},\alpha }\right) .
$$

In contrast, EdgeGraspNet-NoBall is only invariant to rotations of the entire point cloud, $\Psi \left( {g \cdot P,g \cdot \alpha }\right) = \Psi \left( {P,\alpha }\right)$ , which is less general.

§ V. EVALUATION ON A ROBOT

In this paper, we measure physical grasp performance in three different setups with 4 object sets, as shown in Figure 7. Our model trained in simulation is deployed directly on a real robot.

Fig. 6. Robot setup. Left: the robot takes a depth image of the scene from a random viewpoint. Right: the robot grasps the red adversarial object from a localized graspable part.

Fig. 7. Object sets and test configurations used for real robot experiments. From left column to right column: packed scene with 10 objects; pile scene with 10 objects; 20 test hard objects [31]; 12 Berkeley adversarial objects [16].

§ VI. SETUP

We used a UR5 robot equipped with a Robotiq-85 gripper, as shown in Figure 6. An Occipital Structure Sensor was mounted on the arm to capture the observation. Prior to each grasp, we move the sensor to a randomly selected viewpoint ${}^{1}$ (pointing toward the objects to be grasped, as shown in Figure 6(a)), take a depth image, and generate a point cloud. We detect and remove the table plane with RANSAC, and we denoise and downsample the point cloud using Open3D [29]. For each observed point cloud, we sample 40 approach points and 2000 grasps in total. After running inference, we filter out the grasps with a grasp quality score below 0.9. Following the procedure in [2] and [6], we select the highest (largest $z$ -coordinate) above-threshold candidate for execution.

§ A. RESULTS

Household Objects in the Packed and Pile Settings: This experiment evaluates our method in the packed and piled settings described in Section IV. In each round, 5 objects are randomly selected from 10 objects. Table III reports grasp success rates and declutter rates from 16 rounds (80 objects in total). GSRs vary between ${91.7}\%$ and ${93.0}\%$ , a result that closely matches our simulated results, indicating a small sim-to-real gap for our method.

| Method | Packed GSR (%) | Packed DR (%) | Pile GSR (%) | Pile DR (%) |
| --- | --- | --- | --- | --- |
| EdgeGraspNet | 91.9 (80/87) | 100 (80/80) | 93.0 (80/86) | 100 (80/80) |
| VN-EdgeGraspNet | 91.7 (78/85) | 98.7 (79/80) | 92.9 (79/85) | 98.7 (79/80) |

TABLE III. Results of real-robot experiments for packed and piled grasp settings.

Comparison with Zhu et al. [31] on Test Hard Objects:
This experiment compares our method against the method of Zhu et al. [31], a strong baseline from the literature. In each round, 10 objects are randomly selected and dumped on the table. Table IV shows the results from 15 runs. VN-EdgeGraspNet outperforms [31] by about four percentage points both in terms of the grasp success rate and the declutter rate, a significant improvement against a strong baseline.

| Method | GSR (%) | DR (%) |
| --- | --- | --- |
| Zhu et al. [31] | 89.0 (138/155) | 94.0 (141/150) |
| EdgeGraspNet | 91.8 (146/159) | 98.0 (147/150) |
| VN-EdgeGraspNet | 93.6 (148/159) | 98.6 (148/150) |

TABLE IV. Comparison with the method of Zhu et al. [31] using exactly the same objects and setup.

Comparison with [3] on the Berkeley Adversarial Pile: We also baselined our method using the 12 Berkeley adversarial objects described in [16], shown in Figure 7. Here, we compare our method to the work of Cai et al. [3], called Volumetric Point Network (VPN). Table V shows the performance comparison. The results indicate that our method outperforms all the baselines. Our final grasp success rate is ${84.4}\%$ , a strong result for the Berkeley adversarial object set.

| Method | GSR (%) | DR (%) |
| --- | --- | --- |
| Gualtieri et al. [6]* | 70.91 (39/55) | 97.5 (39/40) |
| Breyer et al. [2]* | 41.56 (32/77) | 80 (32/40) |
| Cai et al. [3]* | 78.4 (40/51) | 100 (40/40) |
| EdgeGraspNet | 84.4 (38/45) | 95.0 (38/40) |
| VN-EdgeGraspNet | 83.0 (40/48) | 100 (40/40) |

TABLE V. Comparison with VPN [3], GPD [6], and VGN [2] for the Berkeley adversarial objects in a pile setting. We performed five rounds of grasping with piles of eight objects in each. * Results for VPN [3], GPD [6], and VGN [2] are copied directly from [3].

§ VII. CONCLUSION

This paper proposes a novel edge representation for the 6-DoF grasp detection problem. By formulating the grasp pose with an approach point, a contact point, and its surface normal, we represent edge grasps by local features of the contacts and global features of the related points. We explore the $\mathrm{{SE}}\left( 3\right)$ symmetry of our representation and propose EdgeGraspNet and VN-EdgeGraspNet to leverage $\mathrm{{SE}}\left( 3\right)$ invariance in two different ways. Finally, we evaluate our models on various simulated and real-world object sets against several strong baselines. Experiments show the small sim-to-real gap, the high grasp success rate, and the generalization ability of our method to different object sets. A clear direction for future work is to integrate more on-policy learning, which we believe would enable us to improve our performance.

\ No newline at end of file
diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/YeOtYX-WB1/Initial_manuscript_md/Initial_manuscript.md b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/YeOtYX-WB1/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..76a7a69a03aa7b71ae92a96741aaf78dd650bc1d
--- /dev/null
+++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/YeOtYX-WB1/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,314 @@

# Geometric Regularity with Robot Intrinsic Symmetry in Reinforcement Learning

Author Names Omitted for Anonymous Review.
Paper-ID 3

Abstract: Geometric regularity, which leverages data symmetry, has been successfully incorporated into deep learning architectures such as CNNs, RNNs, GNNs, and Transformers. While this concept has been widely applied in robotics to address the curse of dimensionality when learning from high-dimensional data, the inherent reflectional and rotational symmetry of robot structures has not been adequately explored. Drawing inspiration from cooperative multi-agent reinforcement learning, we introduce novel network structures for deep learning algorithms that explicitly capture this geometric regularity. Moreover, we investigate the relationship between the geometric prior and the concept of Parameter Sharing in multi-agent reinforcement learning. Through experiments conducted on various challenging continuous control tasks, we demonstrate the significant potential of the proposed geometric regularity in enhancing robot learning capabilities.

## I. INTRODUCTION

Robots have the ability to undertake tasks that are dangerous or difficult for humans. With more degrees of freedom, they can perform increasingly complex tasks. For example, humanoid robots and quadrupedal robots can walk over challenging terrain, while robot arms and hands can achieve dexterous manipulation. However, controlling robots with a large number of degrees of freedom becomes increasingly difficult as the observation and action space grows exponentially. Although deep reinforcement learning has been employed to solve various robot control problems [8, 11, 20, 3], learning effective control strategies for these robots remains a challenging task.

Training neural networks on high-dimensional data is known to be challenging due to the curse of dimensionality [4]. To overcome this challenge, researchers have developed network architectures and incorporated various inductive biases that respect the structure and symmetries of the corresponding domains. For example, convolutional neural networks (CNNs) leverage the strong geometric prior of images by incorporating translation equivariance into the design of convolutional layers. This ensures that the extracted features move along with the original image, regardless of the direction it is shifted in. Similarly, graph neural networks (GNNs) take advantage of the geometric prior of permutation invariance in other domains to capture the relationships among objects. Overall, incorporating domain-specific inductive biases and symmetries can greatly improve the ability of neural networks to learn from high-dimensional data.

However, in deep reinforcement learning research, the potential benefits of utilizing the symmetry structures present in environments, such as reflectional and rotational symmetry, have received little attention, and how to incorporate this prior knowledge to effectively improve existing approaches remains worth investigating. To bridge this research gap, we propose to reformulate control problems under the Multi-Agent Reinforcement Learning (MARL) framework to better leverage symmetry structures. We demonstrate the surprising effectiveness of our approach by combining the new architectures with model-free deep reinforcement learning methods. Additionally, we establish a connection between our proposed geometric prior and the important concept of "Parameter Sharing" in multi-agent reinforcement learning, which substantially reduces the optimization space and speeds up the learning process.
We also design a set of challenging robot control tasks (see Fig. 1) and evaluate our method on them. Our experimental results show that our proposed method significantly improves the performance of robot control learning tasks.

![019640fa-35a2-78dc-96ee-b1b12f06bb4b_0_936_463_698_411_0.jpg](images/019640fa-35a2-78dc-96ee-b1b12f06bb4b_0_936_463_698_411_0.jpg)

Fig. 1: We design tasks (except TriFinger [3]) challenging for current deep reinforcement learning baseline algorithms.

## II. BACKGROUND AND RELATED WORK

## A. Multi-Agent Reinforcement Learning (MARL)

MARL is an extended reinforcement learning method for decision-making problems, where multiple agents can interact and learn in one environment. The most popular mathematical framework for MARL problems is Markov games. A Markov game is a tuple $\left\langle {\mathcal{N},\mathcal{S},\mathcal{O},\mathcal{A}, P,{R}_{i},\gamma }\right\rangle$ . $\mathcal{N}$ is the set of all agents and $\mathcal{S}$ is the set of states. ${\mathcal{O}}_{i}$ and ${\mathcal{A}}_{i}$ are the observation space and action space for agent $i$ , while $\mathcal{O} = { \times }_{i \in \mathcal{N}}{\mathcal{O}}_{i}$ and $\mathcal{A} = { \times }_{i \in \mathcal{N}}{\mathcal{A}}_{i}$ represent the joint observation space and joint action space. Let ${\Delta }_{\mathcal{S}}$ and ${\Delta }_{\mathcal{A}}$ be the probability measures on $\mathcal{S}$ and $\mathcal{A}$ respectively. Then $P\left( {{s}^{\prime } \mid s, a}\right) : \mathcal{S} \times \mathcal{A} \rightarrow {\Delta }_{\mathcal{S}}$ is the transition probability. Each agent $i$ maintains a specific reward function ${R}_{i}\left( {s, a}\right) : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ , and future rewards are discounted by the discount factor $\gamma \in \left\lbrack {0,1}\right\rbrack$ . Let ${\Pi }_{i} = \left\{ {{\pi }_{i}\left( {{a}_{i} \mid {o}_{i}}\right) : {\mathcal{O}}_{i} \rightarrow {\Delta }_{{\mathcal{A}}_{i}}}\right\}$ be the policy space for agent $i$ ; the objective for agent $i$ is then $\mathop{\max }\limits_{{\pi }_{i}}{\mathbb{E}}_{\pi , P}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{{+\infty }}{\gamma }^{t}{R}_{i}\left( {{s}_{t},{a}_{t}}\right) }\right\rbrack$ . In practice, the state space and the observation space can be identical if the observation already fully describes the system. Our paper follows this assumption and hence uses observations alone.

Multi-Agent Mujoco [13] is a popular benchmark for MARL algorithms which divides a single robot into several distinct parts with separate action spaces. However, state-of-the-art MARL algorithms still cannot match the performance of single-agent algorithms on this benchmark. Different from their work, in which robots are divided into parts arbitrarily and the geometric structures of the robots are ignored, we leverage ideas from geometric regularity during MARL training, and our results show that MARL can outperform single-agent algorithms by a substantial margin.

## B. Symmetry in Robot Learning

In the robot learning domain, two groups of symmetric structures have been used to improve performance and learning efficiency. 1) Extrinsic Symmetry: By extrinsic symmetry we refer to the symmetries present in the exteroceptive sensors of the robot, such as camera input. Several works [18, 24, 17, 19] integrate these symmetries into system identification via neural networks, especially CNN-structured networks.
These methods can largely improve performance on manipulation tasks, but they mostly target manipulation tasks with image input and grippers without roll-pitch movement. Van der Pol et al. [16] introduce MDP homomorphic networks to numerically construct equivariant network layers. However, the proposed network only considers a pole-balancing task with discrete actions. Moreover, additional calculation is required to design the network even if the domain-specific transformation is given. Mondal et al. [12] propose to learn symmetry directly from data in the latent space, but their approach is still limited to representation learning from images. 2) Intrinsic Symmetry: Different from extrinsic symmetries, intrinsic symmetries mostly arise naturally from the physical constraints of the control system. For example, a humanoid robot control task exhibits reflectional symmetry. A symmetric control policy on such a robot is usually more natural and effective. Mavalankar [10] proposes a data-augmentation method to improve reinforcement learning for rotation-invariant locomotion. Abdolhosseini et al. [2] investigate four different methods to encourage symmetric motion of bipedal simulated robots, implemented via a specific policy network, data augmentation, or an auxiliary loss function. Even though the robots' motions become more natural-looking, they do not show a major improvement on different tasks. The policy network method in [2] is similar to ours in this work, but instead of a specific network merely for locomotion tasks with reflectional symmetry, we propose a generic equivariant policy network for both reflectional and rotational symmetries, which are the predominant symmetry features in robotic systems and animal biology. Moreover, we approach the control task from the perspective of multi-agent systems. Finally, we obtain substantial performance improvements in experiments by reducing the policy search space.

![019640fa-35a2-78dc-96ee-b1b12f06bb4b_1_908_152_756_285_0.jpg](images/019640fa-35a2-78dc-96ee-b1b12f06bb4b_1_908_152_756_285_0.jpg)

Fig. 2: Agent partitioning considering symmetry structures: Humanoid and Cheetah robots split into left and right parts by reflectional symmetry; TriFinger and Ant robots split into 3 and 4 parts by rotational symmetry, where each part is controlled individually by a dedicated agent. The central part (grey) is controlled by all agents.

## III. SINGLE ROBOT CONTROL AS MARL

Instead of learning a single-agent policy to control the whole robot, which leads to a large observation-action space that is difficult to optimize, we introduce multiple agents that are responsible for individual components of the robot, inspired by MARL. We further propose a framework driven by the presence of symmetry structures in many robots and exploit such inductive biases to facilitate training by applying parameter sharing techniques.

The overall structure of our method is to (1) identify the geometric structures of different robots and divide single robots into multiple parts accordingly; (2) reformulate the control problem in a MARL framework; (3) optimize policies with the parameter sharing technique.

## A. Dividing Single Robots into Multiple Parts

Previous research [13] also divides a single robot into multiple parts to evaluate the performance of MARL methods. However, its irregular partitioning makes it hard for multi-agent methods to compete with single-agent methods.
In this paper, we reconsider partitioning in a more principled way, taking into account the symmetry structures of robots when dividing them into multiple agents.

As shown in Fig. 2a, robots with reflectional symmetry can be partitioned into left (blue), right (green) and central (grey) parts. The robots with rotational symmetry in Fig. 2b are partitioned into as many parts as there are symmetric limbs (coloured), plus a central part (grey). For a robot with either of these symmetric structures, we split the whole robot’s original observation-action space $\mathcal{O} \times \mathcal{A}$ by $\mathcal{O} = {\mathcal{O}}_{\mathrm{c}} \times \mathop{\prod }\limits_{{i \in \mathcal{N}}}{\mathcal{O}}_{\mathrm{s}, i}$ and $\mathcal{A} = {\mathcal{A}}_{\mathrm{c}} \times \mathop{\prod }\limits_{{i \in \mathcal{N}}}{\mathcal{A}}_{\mathrm{s}, i}$ . ${\mathcal{O}}_{\mathrm{c}} \times {\mathcal{A}}_{\mathrm{c}}$ represents the central observation-action pair, which consists of measurements and actuators that do not have symmetric counterparts, such as the position, orientation, velocity and joints of the torso, the target direction, or the states of manipulated objects. Raw sensor data such as images and point clouds also belong to the central observation. ${\mathcal{O}}_{\mathrm{s}, i} \times {\mathcal{A}}_{\mathrm{s}, i}$ corresponds to the symmetric observation-action spaces, whose measurements may include joint positions and velocities from the limbs, contact sensor measurements of the feet or fingers, and so on. The symmetric observation-action spaces are exactly the same for every $i \in \mathcal{N}$ due to the robots' symmetric structure.

![019640fa-35a2-78dc-96ee-b1b12f06bb4b_2_140_145_1527_369_0.jpg](images/019640fa-35a2-78dc-96ee-b1b12f06bb4b_2_140_145_1527_369_0.jpg)

Fig. 3: a) TriFinger robot moves a sphere towards a target position. From left to right: the original state, rotated by ${120}^{ \circ }$ , and rotated by ${240}^{ \circ }$ . Note that the actions of the different body parts should be equivariant with regard to the transformation. The red arrow represents the desired moving direction of the manipulated object. b) Equivariant policy network with parameters $\Phi$ . $\mathbf{c}$ and $\mathbf{s}$ stand for central and symmetric actions. c) Invariant value network with parameters $\Psi ,\Theta$ .

## B. Multi-Agent Reinforcement Learning Formulation

Let the original observation and action of the whole robot be $o \in \mathcal{O}$ and $a \in \mathcal{A}$ respectively, and let the number of agents $\left| \mathcal{N}\right|$ equal the number of symmetric parts of the robot. For each agent $i \in \mathcal{N}$ , there is a unique transformation function ${T}_{i}$ that yields its own observation ${o}_{i} = {T}_{i}\left( o\right)$ . A detailed explanation of ${T}_{i}$ can be found in Appendix A1. Each agent generates a local action ${a}_{i}$ , consisting of ${a}_{\mathrm{c}, i} \in {\mathcal{A}}_{\mathrm{c}}$ and ${a}_{\mathrm{s}, i} \in {\mathcal{A}}_{\mathrm{s}, i}$ for central and symmetric actions, with its own policy network. Finally, the whole robot’s action $a$ is recovered by gathering all symmetric actions ${a}_{\mathrm{s}, i}$ and merging all central actions ${a}_{\mathrm{c}, i}$ into ${a}_{\mathrm{c}}$ .

Regarding the reward function, our formulation follows the cooperative MARL setup, where ${R}_{i}$ for all $i \in \mathcal{N}$ are identical at every time step.
This shared reward is calculated by a task-related reward function $R\left( {o, a}\right)$ which depends on the whole robot's observation and action. To optimize the policies ${\pi }_{i}$ , we adopt the multi-agent version of Proximal Policy Optimization (PPO) [14]. PPO is a popular model-free actor-critic reinforcement learning algorithm in different domains [22, 3, 11] owing to its stability, good performance and ease of implementation. Its multi-agent version also achieves competitive performance on different MARL benchmarks [23, 6].

## C. Geometric Regularization

Parameter Sharing has been recognized as a crucial element of efficient training in MARL [7]. By enabling agents to share parameters in their policy networks, parameter sharing not only facilitates scalability to a large number of agents but also enables agents to leverage shared learned representations, leading to reduced training time and improved overall performance. However, it is shown by Christianos et al. [5] that indiscriminately applying parameter sharing can hurt the learning process. Successful utilization of parameter sharing relies on the presence of homogeneous agents as a vital requirement. In other words, agents should execute the same action once they are given the same observation. This assumption ensures the transformation equivariance of the overall policy with regard to the symmetry structures.

Take the simplified TriFinger Move task as an example, where the TriFinger robot has to move a sphere towards a target position. As shown in Fig. 3a, if the whole system is rotated by ${120}^{ \circ }$ or ${240}^{ \circ }$ around the $z$ axis of the robot base, the actions should also shift circularly among the three fingers for the optimal policy. Given the whole robot’s observation $o$ , this relationship can be denoted by:

$$
{A}_{\mathrm{s}, j}\left( {{T}_{i}\left( o\right) }\right) = {A}_{\mathrm{s}, i}\left( {{T}_{j}\left( o\right) }\right) ,\;{A}_{\mathrm{c}}\left( {{T}_{i}\left( o\right) }\right) = {T}_{i}\left( {{A}_{\mathrm{c}}\left( o\right) }\right) \tag{1}
$$

where ${A}_{\mathrm{s}, j}$ is the symmetric action of the $j$ th agent, ${A}_{\mathrm{c}}$ is the central action, and ${T}_{i}$ is the symmetry transformation between agents $i$ and 0 (see the definition in Appendix A1). The transformations for observations and actions are so similar that, for simplicity, we do not distinguish between them in this work. Note that the correspondence between agents and robot parts can be defined arbitrarily; it does not influence the equivariance/invariance.

Based on the equivariance represented by Eq. 1, we design the multi-agent actor-critic network structure shown in Fig. 3b and 3c. Agent $i$ receives the transformed observation ${T}_{i}\left( o\right)$ as the input of the policy network, and the output action consists of ${a}_{\mathrm{c}, i}$ and ${a}_{\mathrm{s}, i}$ . The central joints are controlled by the mean action over all agents’ outputs ${a}_{\mathrm{c}, i}$ , while ${a}_{\mathrm{s}, i}$ is used as the action for robot part $i$ . The policy network parameters are shared among agents. The value network receives the observations of all agents as input. The observations first go through shared feature learning layers in the value network. The latent features are then merged by a set operator (the mean in this work), and the value is finally calculated from the merged feature.
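A schematic PyTorch sketch of this shared actor-critic is given below, assuming a user-supplied function `transform(x, i)` that implements ${T}_{i}$ ; the layer sizes and the toy block-swap transform are placeholders rather than the configuration used in the paper.

```python
import torch

class SharedActorCritic(torch.nn.Module):
    """Shared policy (Phi) and value (Psi, Theta) networks over |N| agents.

    `transform(x, i)` must implement T_i (identity for i = 0) and is assumed
    to be supplied per robot morphology.  As in the paper, the same transform
    is reused for observations and actions.
    """

    def __init__(self, obs_dim, act_c_dim, act_s_dim, n_agents, transform):
        super().__init__()
        self.n, self.T, self.act_c_dim = n_agents, transform, act_c_dim
        self.phi = torch.nn.Sequential(
            torch.nn.Linear(obs_dim, 256), torch.nn.ELU(),
            torch.nn.Linear(256, act_c_dim + act_s_dim))
        self.psi = torch.nn.Sequential(torch.nn.Linear(obs_dim, 256), torch.nn.ELU())
        self.theta = torch.nn.Linear(256, 1)

    def act(self, o):
        outs = [self.phi(self.T(o, i)) for i in range(self.n)]
        # Central action: back-transform each agent's central output
        # (T_{|N|-1-i}, following the Appendix A2 proof) and average.
        a_c = torch.stack([self.T(out[: self.act_c_dim], (self.n - 1 - i) % self.n)
                           for i, out in enumerate(outs)]).mean(dim=0)
        a_s = [out[self.act_c_dim:] for out in outs]   # one symmetric action per part
        return a_c, a_s

    def value(self, o):
        feats = torch.stack([self.psi(self.T(o, i)) for i in range(self.n)])
        return self.theta(feats.mean(dim=0))           # permutation-invariant merge

# Toy usage: a 2-agent reflectional robot whose T_1 swaps the two halves
# (a real T_1 would also negate selected central components).
def T(x, i):
    if i % 2 == 0:
        return x
    h = x.shape[-1] // 2
    return torch.cat([x[..., h:], x[..., :h]], dim=-1)

net = SharedActorCritic(obs_dim=8, act_c_dim=2, act_s_dim=3, n_agents=2, transform=T)
a_c, a_s = net.act(torch.rand(8))
v = net.value(torch.rand(8))
```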
The proposed policy network is equivariant with respect to the symmetric transformations we consider in this work, while the value network is an invariant function (see the proof in Appendix A2). By sharing the same policy network among all agents, we are able to incorporate the geometric regularization and reduce the dimension of the observation-action space.

![019640fa-35a2-78dc-96ee-b1b12f06bb4b_3_134_154_1518_261_0.jpg](images/019640fa-35a2-78dc-96ee-b1b12f06bb4b_3_134_154_1518_261_0.jpg)

Fig. 4: Learning curves on robot control tasks. The x-axis is environment time steps and the y-axis is episodic return during training. All graphs are plotted with median and 25%-75% percentile shading across 5 random seeds.

## IV. EXPERIMENTS AND DISCUSSION

## A. Experimental Setup

1) Challenging Tasks: Previous robotic control benchmarks [15] evaluate algorithms on fundamental tasks, e.g. controlling agents to walk. The movements in these tasks are limited and it is relatively easy to learn an optimal policy. In this work, we design several more challenging robotic control tasks on which current state-of-the-art methods fail to achieve good performance. The tasks are shown in Fig. 1: Humanoid Tightrope, Humanoid Dribbling, A1 Beam, Trifinger Move and Ant Acrobatic. A detailed introduction of the tasks can be found in Appendix B2. All experiments are carried out in the NVIDIA Isaac Gym [9] robotics simulator.

2) Baselines: For each task, we compare our method, named Multi-agent with Symmetry Augmentation (MASA), with a set of baselines including:

- Single-agent (SA): We first compare against the single-agent reinforcement learning algorithm, which views all parts of the robot as a whole and optimizes them jointly. This baseline provides an intuitive comparison of our proposed framework to classic reinforcement learning approaches. The state space is kept the same as the multi-agent one for a fair comparison.

- Single-agent with Symmetry Augmentation (SASA): This baseline follows the SA setup and is augmented with a symmetry loss [2]. Specifically, for any received observation $o$ , we calculate its symmetric representation ${o}_{sym}$ based on the robots' structures. We regularize the policy function $\pi$ and the value function $V$ in PPO with extra symmetry losses by minimizing ${\begin{Vmatrix}{T}_{i}\left( A\left( o\right) \right) - A\left( {T}_{i}\left( o\right) \right) \end{Vmatrix}}_{2}$ and $\left| {V\left( o\right) - V\left( {{T}_{i}\left( o\right) }\right) }\right|$ , where $A$ and $V$ are the gathered action and critic value of the robot.

- Multi-agent without Symmetry Augmentation (MA): This baseline uses the same architecture as MASA. However, it does not involve the transformations in Fig. 3b and 3c. Thus the geometric regularity of symmetry is ignored, following previous research [13]. We concatenate a one-hot id encoding to each agent's observation, a common practice for non-homogeneous agents.

We summarize the hyperparameters in Appendix B1.

## B. Main Results

Figure 4 presents the average return of all methods on the different tasks during training. The proposed method MASA significantly outperforms the other baselines across all 5 tasks. Furthermore, the advantage over the other baselines grows with the increasing difficulty of the task, as indicated by the increased number of joints, the extended state dimension and the enlarged state space of the task. Humanoid Tightrope and Humanoid Dribbling (football) control the same robot.
But in the tightrope task, the robot only needs to walk forward, while the dribbling task involves random turns and an external object, so that the other baselines can hardly learn meaningful behaviours in it. Besides, we find a correlation between the performance gain of MASA and the reduction in action dimension. In tasks such as Humanoid Tightrope and A1 Beam, the action dimensions are reduced by 9 and 6 respectively, resulting in varying performance gains. Notably, tasks like Trifinger Move and Ant Acrobatic, which retain only $\frac{1}{3}$ and $\frac{1}{4}$ of the original dimensions respectively under our method, demonstrate substantial performance improvements, emphasizing the advantages of symmetry-based optimization space reduction.

By comparing the results of MASA, MA and SASA, we observe that both factors of MASA, the multi-agent framework and the symmetry structure, play an important role. Utilizing the symmetric data structure alone (SASA) can gradually learn to solve a few tasks, but with apparently lower data efficiency, because the optimization space is not reduced and is thus larger than that of MASA. The multi-agent structure by itself (MA) cannot guarantee meaningful results at all, in line with the criticism of naively sharing parameters among non-homogeneous agents [5].

## C. Discussion

Our proposed multi-agent method exhibits impressive performance in challenging control tasks. The network structures we introduce are not limited to on-policy reinforcement learning algorithms and can be adapted for off-policy learning, imitation learning, and model-based learning methods. While our approach is straightforward to implement with observation transformations, it still requires domain knowledge. We believe our method can enhance robot learning in more demanding tasks, serving as a guide for designing robots with increased degrees of freedom while keeping the growth of the observation-action space linear. Future research directions include exploring additional symmetric structures and automating the process of identifying robots' intrinsic symmetries.

## References

[1] Unitree. A1: More dexterity, more possibility. https://www.unitree.com/a1/, January 2018.

[2] Farzad Abdolhosseini, Hung Yu Ling, Zhaoming Xie, Xue Bin Peng, and Michiel Van De Panne. On learning symmetric locomotion. In Motion, Interaction and Games, pages 1-10, Newcastle upon Tyne, United Kingdom, October 2019. ACM.

[3] Arthur Allshire, Mayank Mittal, Varun Lodaya, Viktor Makoviychuk, Denys Makoviichuk, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Ankur Handa, and Animesh Garg. Transferring dexterous manipulation from gpu simulation to a remote real-world trifinger. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 11802-11809. IEEE, 2022.

[4] Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021.

[5] Filippos Christianos, Georgios Papoudakis, Muhammad A Rahman, and Stefano V Albrecht. Scaling multi-agent reinforcement learning with selective parameter sharing. In International Conference on Machine Learning, pages 1989-1998. PMLR, 2021.

[6] Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip HS Torr, Mingfei Sun, and Shimon Whiteson. Is independent learning all you need in the starcraft multi-agent challenge? arXiv preprint arXiv:2011.09533, 2020.
[7] Jayesh K Gupta, Maxim Egorov, and Mykel Kochenderfer. Cooperative multi-agent control using deep reinforcement learning. In Autonomous Agents and Multiagent Systems: AAMAS 2017 Workshops, Best Papers, São Paulo, Brazil, May 8-12, 2017, Revised Selected Papers 16, pages 66-83. Springer, 2017.

[8] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

[9] Viktor Makoviychuk, Lukasz Wawrzyniak, Yunrong Guo, Michelle Lu, Kier Storey, Miles Macklin, David Hoeller, Nikita Rudin, Arthur Allshire, Ankur Handa, and Gavriel State. Isaac gym: High performance GPU based physics simulation for robot learning. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks, 2021.

[10] Aditi Mavalankar. Goal-conditioned batch reinforcement learning for rotation invariant locomotion. arXiv preprint arXiv:2004.08356, 2020.

[11] Takahiro Miki, Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, and Marco Hutter. Learning robust perceptive locomotion for quadrupedal robots in the wild. Science Robotics, 7(62):eabk2822, 2022.

[12] Arnab Kumar Mondal, Vineet Jain, Kaleem Siddiqi, and Siamak Ravanbakhsh. Eqr: Equivariant representations for data-efficient reinforcement learning. In International Conference on Machine Learning, pages 15908-15926. PMLR, 2022.

[13] Bei Peng, Tabish Rashid, Christian Schroeder de Witt, Pierre-Alexandre Kamienny, Philip Torr, Wendelin Böhmer, and Shimon Whiteson. Facmac: Factored multi-agent centralised policy gradients. Advances in Neural Information Processing Systems, 34:12208-12221, 2021.

[14] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

[15] Saran Tunyasuvunakool, Alistair Muldal, Yotam Doron, Siqi Liu, Steven Bohez, Josh Merel, Tom Erez, Timothy Lillicrap, Nicolas Heess, and Yuval Tassa. dm_control: Software and tasks for continuous control. Software Impacts, 6:100022, 2020.

[16] Elise Van der Pol, Daniel Worrall, Herke van Hoof, Frans Oliehoek, and Max Welling. Mdp homomorphic networks: Group symmetries in reinforcement learning. Advances in Neural Information Processing Systems, 33:4199-4210, 2020.

[17] Dian Wang and Robin Walters. So(2) equivariant reinforcement learning. In International Conference on Learning Representations, 2022.

[18] Dian Wang, Mingxi Jia, Xupeng Zhu, Robin Walters, and Robert Platt. On-robot learning with equivariant models. In Conference on Robot Learning, 2022.

[19] Dian Wang, Robin Walters, Xupeng Zhu, and Robert Platt. Equivariant $q$ learning in spatial action spaces. In Conference on Robot Learning, pages 1713-1723. PMLR, 2022.

[20] Philipp Wu, Alejandro Escontrela, Danijar Hafner, Pieter Abbeel, and Ken Goldberg. Daydreamer: World models for physical robot learning. In Conference on Robot Learning, pages 2226-2240. PMLR, 2023.

[21] Manuel Wüthrich, Felix Widmaier, Felix Grimminger, Joel Akpo, Shruti Joshi, Vaibhav Agrawal, Bilal Hammoud, Majid Khadiv, Miroslav Bogdanovic, Vincent Berenz, Julian Viereck, Maximilien Naveau, Ludovic Righetti, Bernhard Schölkopf, and Stefan Bauer. Trifinger: An open-source robot for learning dexterity, January 2021.

[22] Shengchao Yan, Tim Welschehold, Daniel Büscher, and Wolfram Burgard.
Courteous behavior of automated vehicles at unsignalized intersections via reinforcement learning. IEEE Robotics and Automation Letters, 7(1):191-198, 2021.

[23] Chao Yu, Akash Velu, Eugene Vinitsky, Jiaxuan Gao, Yu Wang, Alexandre Bayen, and Yi Wu. The surprising effectiveness of ppo in cooperative multi-agent games. Advances in Neural Information Processing Systems, 35:24611-24624, 2022.

[24] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su, Robin Walters, and Robert Platt. Sample efficient grasp learning using equivariant models. Proceedings of Robotics: Science and Systems (RSS), 2022.

## Appendix

## A. Extra Method Details

1) Transformation Functions: As mentioned in Sec. III-C, ${T}_{0}$ of the base agent is the identity transformation. In this section we describe in detail the transformation functions of the other agents. For convenience, we only explain the observation transformations in detail; the transformations are the same for actions, for which we only need to replace the observation with the corresponding action components. By default, we assume the original observation $o = \left\lbrack {{o}_{\mathrm{c}},{o}_{\mathrm{s},0},{o}_{\mathrm{s},1},\ldots ,{o}_{\mathrm{s},\left| \mathcal{N}\right| - 1}}\right\rbrack$ is expressed in the local coordinate system of the robot base.

a) Reflectional Symmetry: For robots with reflectional symmetry, the two robot parts in Fig. 2a are controlled by agents $\{ 0,1\}$ . We define ${T}_{1}\left( o\right) = \left\lbrack {{T}_{\mathrm{c},1}\left( {o}_{\mathrm{c}}\right) ,{o}_{\mathrm{s},1},{o}_{\mathrm{s},0}}\right\rbrack$ , where ${T}_{\mathrm{c},1}\left( {o}_{\mathrm{c}}\right)$ is a reflection function, which reflects the central observation through the plane of symmetry. As a result of ${T}_{1}\left( o\right)$ , the different observation components are transformed as follows:

- symmetric observations directly switch their values;

- some of the central observation values are negated:

  - humanoid robot: ${y}_{\text{torso }},{v}_{\text{torso }, y},{\omega }_{\text{torso }, x},{\omega }_{\text{torso }, z},{\alpha }_{\text{torso }}$ , ${\gamma }_{\text{torso }},\;{\theta }_{\text{lower waist }, x},\;{\theta }_{\text{pelvis }, x},\;{\omega }_{\text{lower waist }, x},\;{\omega }_{\text{pelvis }, x},$ ${a}_{\text{lower waist,}x},{a}_{\text{pelvis,}x}$ ;

  - A1 robot: ${y}_{\text{torso }},{v}_{\text{torso }, y},{\omega }_{\text{torso }, x},{\omega }_{\text{torso }, z},{\alpha }_{\text{torso }},{\gamma }_{\text{torso }}$ ;

  - external objects: ${y}_{\text{ball }},{v}_{\text{ball }, y}$ ;

- the other central observation values stay the same.

b) Rotational Symmetry: For robots with rotational symmetry, the robot parts in Fig. 2b are controlled by agents $\{ 0,1,\ldots ,\left| \mathcal{N}\right| - 1\}$ . We define ${T}_{i}\left( o\right) = \left\lbrack {{T}_{\mathrm{c}, i}\left( {o}_{\mathrm{c}}\right) ,{o}_{\mathrm{s}, i},{o}_{\mathrm{s}, i + 1},\ldots ,{o}_{\mathrm{s},\left| \mathcal{N}\right| - 1},{o}_{\mathrm{s},0},{o}_{\mathrm{s},1},\ldots ,{o}_{\mathrm{s}, i - 1}}\right\rbrack$ , where ${T}_{\mathrm{c}, i}\left( {o}_{\mathrm{c}}\right)$ is a rotational transformation of the central observations around the axis of symmetry. The degree of rotation is the angular distance between the robot limbs of agents $i$ and 0. As a result of ${T}_{i}\left( o\right)$ , the different observation components are transformed as follows:

- symmetric observations circularly shift their values;

- the central observation components are rotated.
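For illustration only, the NumPy sketch below shows how such transformation functions could be assembled; the negation index set and the central rotation `rot_c` are placeholders standing in for the per-robot component lists above.

```python
import numpy as np

def make_reflection_T(neg_idx, c_dim, s_dim):
    """T_1 for reflectional symmetry: reflect o_c, swap the two o_s blocks.

    `neg_idx` is a placeholder for the per-robot list of negated central
    components given above (e.g. y_torso, v_torso_y, ...).
    """
    def T1(o):
        o_c = o[:c_dim].copy()
        o_c[neg_idx] *= -1.0                      # negate the reflected central values
        o_s0, o_s1 = o[c_dim:c_dim + s_dim], o[c_dim + s_dim:]
        return np.concatenate([o_c, o_s1, o_s0])  # switch the symmetric blocks
    return T1

def make_rotation_T(i, n_agents, rot_c, c_dim, s_dim):
    """T_i for rotational symmetry: rotate o_c, circularly shift the o_s blocks.

    `rot_c` is a placeholder for the rotation of the central observation by the
    angular distance between the limbs of agents i and 0.
    """
    def Ti(o):
        blocks = o[c_dim:].reshape(n_agents, s_dim)
        return np.concatenate([rot_c(o[:c_dim]),
                               np.roll(blocks, -i, axis=0).ravel()])
    return Ti

# Quick check of the cyclic property T_i = T_{i + |N|} with an identity rot_c.
T1 = make_rotation_T(1, n_agents=3, rot_c=lambda x: x, c_dim=2, s_dim=4)
T4 = make_rotation_T(4, n_agents=3, rot_c=lambda x: x, c_dim=2, s_dim=4)
o = np.arange(14.0)
assert np.allclose(T1(o), T4(o))
```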
2) Proof of Transformation Equivariance/Invariance: We first summarize the properties of the symmetry transformations in this work. They are:

- commutative: ${T}_{j}\left( {{T}_{i}\left( o\right) }\right) = {T}_{i + j}\left( o\right) = {T}_{i}\left( {{T}_{j}\left( o\right) }\right)$

- distributive: ${T}_{j}\left( {{T}_{i}\left( o\right) + {T}_{k}\left( o\right) }\right) = {T}_{j}\left( {{T}_{i}\left( o\right) }\right) + {T}_{j}\left( {{T}_{k}\left( o\right) }\right)$

- cyclic: ${T}_{i}\left( o\right) = {T}_{i + \left| \mathcal{N}\right| }\left( o\right)$

The equivariance of the policy for symmetric actions in Eq. 1 follows directly from commutativity:

$$
{A}_{\mathrm{s}, j}\left( {{T}_{i}\left( o\right) }\right) = {\Phi }_{\mathrm{s}}\left( {{T}_{j}\left( {{T}_{i}\left( o\right) }\right) }\right) = {\Phi }_{\mathrm{s}}\left( {{T}_{i}\left( {{T}_{j}\left( o\right) }\right) }\right) = {A}_{\mathrm{s}, i}\left( {{T}_{j}\left( o\right) }\right)
$$

The equivariance for the central action is proved by substituting $k = i + j$ and using the commutative, distributive and cyclic properties:

$$
{A}_{\mathrm{c}}\left( {{T}_{i}\left( o\right) }\right) = \frac{1}{\left| \mathcal{N}\right| }\mathop{\sum }\limits_{{j = 0}}^{{\left| \mathcal{N}\right| - 1}}{T}_{\left| \mathcal{N}\right| - 1 - j}\left( {{\Phi }_{\mathrm{c}}\left( {{T}_{j}\left( {{T}_{i}\left( o\right) }\right) }\right) }\right)
$$

$$
= \frac{1}{\left| \mathcal{N}\right| }\mathop{\sum }\limits_{{j = 0}}^{{\left| \mathcal{N}\right| - 1}}{T}_{\left| \mathcal{N}\right| - 1 - j}\left( {{\Phi }_{\mathrm{c}}\left( {{T}_{i + j}\left( o\right) }\right) }\right)
$$

$$
= \frac{1}{\left| \mathcal{N}\right| }\mathop{\sum }\limits_{{k = i}}^{{\left| \mathcal{N}\right| + i - 1}}{T}_{\left| \mathcal{N}\right| + i - 1 - k}\left( {{\Phi }_{\mathrm{c}}\left( {{T}_{k}\left( o\right) }\right) }\right)
$$

$$
= \frac{1}{\left| \mathcal{N}\right| }\mathop{\sum }\limits_{{k = 0}}^{{\left| \mathcal{N}\right| - 1}}{T}_{i}\left( {{T}_{\left| \mathcal{N}\right| - 1 - k}\left( {{\Phi }_{\mathrm{c}}\left( {{T}_{k}\left( o\right) }\right) }\right) }\right)
$$

$$
= {T}_{i}\left( {\frac{1}{\left| \mathcal{N}\right| }\mathop{\sum }\limits_{{k = 0}}^{{\left| \mathcal{N}\right| - 1}}{T}_{\left| \mathcal{N}\right| - 1 - k}\left( {{\Phi }_{\mathrm{c}}\left( {{T}_{k}\left( o\right) }\right) }\right) }\right)
$$

$$
= {T}_{i}\left( {{A}_{\mathrm{c}}\left( o\right) }\right)
$$

The invariance of the value network follows because the transformations merely permute the summands:

$$
V\left( {{T}_{i}\left( o\right) }\right) = \Theta \left( {\frac{1}{\left| \mathcal{N}\right| }\mathop{\sum }\limits_{{j = 0}}^{{\left| \mathcal{N}\right| - 1}}\Psi \left( {{T}_{j}\left( {{T}_{i}\left( o\right) }\right) }\right) }\right)
$$

$$
= \Theta \left( {\frac{1}{\left| \mathcal{N}\right| }\mathop{\sum }\limits_{{j = 0}}^{{\left| \mathcal{N}\right| - 1}}\Psi \left( {{T}_{i + j}\left( o\right) }\right) }\right)
$$

$$
= \Theta \left( {\frac{1}{\left| \mathcal{N}\right| }\mathop{\sum }\limits_{{k = 0}}^{{\left| \mathcal{N}\right| - 1}}\Psi \left( {{T}_{k}\left( o\right) }\right) }\right)
$$

$$
= V\left( o\right)
$$

## B. Extra Experimental Setups

1) Hyperparameters: Each baseline is run with 5 random seeds. All experiments are carried out on NVIDIA A100 and RTX 3080 GPUs. The hyperparameters of all baselines are kept consistent for a fair comparison. The detailed values are listed in Table I.
2) Tasks Details:

a) Humanoid Tightrope: In this task, the agent learns to control a humanoid robot to walk on a tightrope. The humanoid robot has 21 controllable motors. The tightrope is extremely narrow, with a diameter of only ${10}\mathrm{\;{cm}}$ , which challenges the efficiency of learning algorithms. The agent is rewarded for forward speed on the tightrope and a proper posture. At each non-terminating step, the reward is $r = {w}_{v} \times {r}_{v} + {w}_{\text{alive }} \times {r}_{\text{alive }} + {w}_{\text{up }} \times {r}_{\text{up }} + {w}_{\text{heading }} \times {r}_{\text{heading }} + {w}_{\text{action }} \times {r}_{\text{action }} + {w}_{\text{energy }} \times {r}_{\text{energy }} + {w}_{\text{lateral }} \times {r}_{\text{lateral }}$ , where

- ${r}_{v}$ is the robot’s forward velocity, ${w}_{v} = {1.0}$ ;

- ${r}_{\text{alive }} = 1$ , ${w}_{\text{alive }} = {2.0}$ ;

- ${r}_{\text{up }} = 1$ if ${e}_{\text{up }, z} > {0.93}$ and $0$ otherwise, where ${e}_{\text{up }}$ is the basis vector of the torso’s $z$ axis in the global coordinate system; ${w}_{\text{up }} = {0.1}$ ;

- ${r}_{\text{heading }} = {e}_{\text{forward }, x}$ , where ${e}_{\text{forward }}$ is the basis vector of the torso’s $x$ axis in the global coordinate system, ${w}_{\text{heading }} = {0.1}$ ;

- ${r}_{\text{action }} = \parallel a{\parallel }_{2}^{2}$ , where $a$ is the joint action, ${w}_{\text{action }} = - {0.01}$ ;

- ${r}_{\text{energy }}$ is the joint power consumption, ${w}_{\text{energy }} = - {0.05}$ ;

TABLE I: Hyperparameters of all experiments.
| Hyperparameter | Humanoid Tightrope | Humanoid Football | Trifinger Move | A1 Beam | Ant Acrobatic |
| --- | --- | --- | --- | --- | --- |
| Batch size | ${4096} \times {32}$ | ${4096} \times {32}$ | ${16384} \times {16}$ | ${4096} \times {24}$ | ${4096} \times {16}$ |
| Mixed precision | TRUE | TRUE | FALSE | TRUE | TRUE |
| Normalize input | TRUE | TRUE | TRUE | TRUE | TRUE |
| Normalize value | TRUE | TRUE | TRUE | TRUE | TRUE |
| Value bootstrap | TRUE | TRUE | TRUE | TRUE | TRUE |
| Num actors | 4096 | 4096 | 16384 | 4096 | 4096 |
| Normalize advantage | TRUE | TRUE | TRUE | TRUE | TRUE |
| Gamma | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| GAE lambda | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 |
| E-clip | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |
| Entropy coefficient | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Learning rate | 5.E-4 | 5.E-4 | 3.E-4 | 3.E-4 | 3.E-4 |
| KL threshold | 0.0008 | 0.0008 | 0.0008 | 0.0008 | 0.0008 |
| Truncated grad norm | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| Horizon length | 32 | 32 | 16 | 24 | 16 |
| Minibatch size | 32768 | 32768 | 16384 | 32768 | 32768 |
| Mini epochs | 5 | 5 | 4 | 5 | 4 |
| Critic coefficient | 4.0 | 4.0 | 4.0 | 2.0 | 2.0 |
| Max epoch | 10K | 10K | 10K | 10K | 5K |
| Policy network | $\left\lbrack {{400},{200},{100}}\right\rbrack$ | $\left\lbrack {{400},{200},{100}}\right\rbrack$ | $\left\lbrack {{256},{256},{128},{128}}\right\rbrack$ | $\left\lbrack {{256},{128},{64}}\right\rbrack$ | $\left\lbrack {{256},{128},{64}}\right\rbrack$ |
| Critic network | $\left\lbrack {{400},{200},{100}}\right\rbrack$ | $\left\lbrack {{400},{200},{100}}\right\rbrack$ | $\left\lbrack {{256},{256},{128},{128}}\right\rbrack$ | $\left\lbrack {{256},{128},{64}}\right\rbrack$ | $\left\lbrack {{256},{128},{64}}\right\rbrack$ |
| Activation function | ELU | ELU | ELU | ELU | ELU |
- ${r}_{\text{lateral }} = {v}_{\text{torso,}y}$ is the penalty for lateral velocity, ${w}_{\text{lateral }} = - {1.0}$ .

The reward is $-1$ at the termination step. The action is the force applied to all joints. (A small sketch of this reward computation appears after Table II.)

b) Humanoid Dribbling: In this task, the robot learns to dribble along routes with random turns. Compared with the tightrope task, the observation space is augmented with features of the ball. For the observation calculation, the global coordinate system changes with the new target route at each turning position. At each non-terminating step, the reward is $r = {w}_{v} \times {r}_{v} + {w}_{\text{alive }} \times {r}_{\text{alive }} + {w}_{\text{dist }} \times {r}_{\text{dist }} + {w}_{\text{heading }} \times {r}_{\text{heading }} + {w}_{\text{action }} \times {r}_{\text{action }} + {w}_{\text{energy }} \times {r}_{\text{energy }} + {w}_{\text{lateral }} \times {r}_{\text{lateral }}$ , where

- ${r}_{v}$ is the ball’s forward velocity, ${w}_{v} = {2.0}$ ;

- ${r}_{\text{alive }} = 1$ , ${w}_{\text{alive }} = {0.2}$ ;

- ${r}_{\text{dist }} = {e}^{-d}$ , where $d$ is the 2D distance from the torso to the ball, ${w}_{\text{dist }} = {0.2}$ ;

- ${r}_{\text{heading }} = {e}_{\text{forward }, x}$ , where ${e}_{\text{forward }}$ is the basis vector of the torso’s $x$ axis in the global system, ${w}_{\text{heading }} = {1.0}$ ;

- ${r}_{\text{action }}$ and ${r}_{\text{energy }}$ are the same as in Humanoid Tightrope;

- ${r}_{\text{lateral }} = {v}_{\text{ball,}y}$ is the penalty for the ball’s lateral velocity, ${w}_{\text{lateral }} = - {0.5}$ .

The reward is $-1$ at the termination step. The action is the force applied to all joints.

c) A1 Beam: In this task, the agent controls the quadruped robot Unitree A1 [1] to walk on a balance beam with a width of ${10}\mathrm{\;{cm}}$ , following a predefined speed. Considering the width of the A1 and of the balance beam, this is much harder than walking on the ground. There are 12 motors in total on the Unitree A1, 3 for each leg. At each non-terminating step, the reward is $r = {w}_{v} \times {r}_{v} + {w}_{\text{alive }} \times {r}_{\text{alive }} + {w}_{\text{heading }} \times {r}_{\text{heading }} + {w}_{\text{action }} \times {r}_{\text{action }} + {w}_{\text{lateral }} \times {r}_{\text{lateral }}$ , where

- ${r}_{v} = {e}^{-\left| {{v}_{\text{torso,}x} - {v}_{\text{target }}}\right| }$ is the speed tracking reward, ${w}_{v} = {1.0}$ ;

- ${r}_{\text{alive }} = 1$ , ${w}_{\text{alive }} = {1.0}$ ;

- ${r}_{\text{heading }} = {e}_{\text{forward }, x}$ , where ${e}_{\text{forward }}$ is the basis vector of the torso’s $x$ axis in the global coordinate system, ${w}_{\text{heading }} = {1.0}$ ;

- ${r}_{\text{action }} = \parallel a{\parallel }_{2}^{2}$ , where $a$ is the joint action, ${w}_{\text{action }} = - {0.5}$ ;

- ${r}_{\text{lateral }} = {v}_{\text{torso }, y}$ is the penalty for lateral velocity, ${w}_{\text{lateral }} = - {1.0}$ .

The reward is $-1$ at the termination step. The robot has a low-level joint controller. The action is the target angular position of all joints.

d) Trifinger Move: Trifinger [21] is a 3-finger manipulator for learning dexterity. The goal of the task is to move a cube from a random initial pose to an arbitrary 6-DoF target position and orientation. The environment is the same as that of [3], except that we remove the auxiliary penalty for finger movement, which increases the difficulty of the task. The robot has a low-level joint controller. The action is the target angular position of all joints.
e) Ant Acrobatic: In this task, an ant learns to perform complex acrobatics (e.g. heading a pole) on a ball, which severely challenges the agent's ability to maintain balance. The action space has 8 dimensions. At each non-terminating step, the reward is $r = {w}_{\text{alive }} \times {r}_{\text{alive }} + {w}_{\text{action }} \times {r}_{\text{action }} + {w}_{\text{energy }} \times {r}_{\text{energy }}$ , where

- ${r}_{\text{alive }} = 1$ , ${w}_{\text{alive }} = {0.5}$ ;

- ${r}_{\text{action }} = \parallel a{\parallel }_{2}^{2}$ , where $a$ is the joint action, ${w}_{\text{action }} = - {0.005}$ ;

- ${r}_{\text{energy }}$ is the joint power consumption, ${w}_{\text{energy }} = - {0.05}$ .

The reward is $-1$ at the termination step. The action is the force applied to all joints.

We summarize the observation space for each task in Table II for easier reading.

TABLE II: Tasks Information
| | HUMANOID TIGHTROPE | HUMANOID DRIBBLING | TRIFINGER MOVE | A1 BEAM | ANT ACROBATIC |
|---|---|---|---|---|---|
| Observation dimension | 74 | 80 | 41 | 47 | 57 |
| $O_\mathrm{c}$: torso | $y, z$; $v_{x,y,z}$; $\omega_{x,y,z}$; $\alpha, \beta, \gamma$ | $y, z$; $v_{x,y,z}$; $\omega_{x,y,z}$; $\alpha, \beta, \gamma$ | - | $y, z$; $v_{x,y,z}$; $\omega_{x,y,z}$; $\alpha, \beta, \gamma$ | $x, y, z$; $v_{x,y,z}$; $\omega_{x,y,z}$; $\alpha, \beta, \gamma$ |
| $O_\mathrm{c}$: torso joints ($\theta, \omega, a$ each) | lower waist $x, y$; pelvis $x$ | lower waist $x, y$; pelvis $x$ | - | - | - |
| $O_\mathrm{c}$: external objects | - | ball $x, y, z$; $v_{\text{ball},x,y,z}$ | cube $x, y, z$; $H_{\text{cube},x,y,z,w}$; cube target $x, y, z$; $H_{\text{cube target},x,y,z,w}$ | - | pole $x, y, z$; $v_{\text{pole},x,y,z}$; $\omega_{\text{pole},x,y,z}$; $\mathrm{UP}_{\text{pole},x,y,z}$; ball $x, y, z$; $v_{\text{ball},x,y,z}$; $\omega_{\text{ball},x,y,z}$ |
| $o_{\mathrm{s},i}$: limb joints ($\theta, \omega, a$ each) | upper arm $x, z$; lower arm $x$; thigh $x, y, z$; knee $x$; foot $x, y$ (per side) | upper arm $x, z$; lower arm $x$; thigh $x, y, z$; knee $x$; foot $x, y$ (per side) | finger upper, middle, lower (per finger) | front hip, thigh, calf; rear hip, thigh, calf (per side) | 2 joints per leg |
| $\lvert \mathcal{N} \rvert$ | 2 | 2 | 3 | 2 | 4 |
| Action dimension | 21 | 21 | 9 | 12 | 8 |
+ diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/YeOtYX-WB1/Initial_manuscript_tex/Initial_manuscript.tex b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/YeOtYX-WB1/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..b261ba2643b12c388a5a2324b11acbd951622a59 --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/YeOtYX-WB1/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,101 @@ +§ GEOMETRIC REGULARITY WITH ROBOT INTRINSIC SYMMETRY IN REINFORCEMENT LEARNING + +Author Names Omitted for Anonymous Review. Paper-ID 3 + +Abstract-Geometric regularity, which leverages data symmetry, has been successfully incorporated into deep learning architectures such as CNNs, RNNs, GNNs, and Transformers. While this concept has been widely applied in robotics to address the curse of dimensionality when learning from high-dimensional data, the inherent reflectional and rotational symmetry of robot structures has not been adequately explored. Drawing inspiration from cooperative multi-agent reinforcement learning, we introduce novel network structures for deep learning algorithms that explicitly capture this geometric regularity. Moreover, we investigate the relationship between the geometric prior and the concept of Parameter Sharing in multi-agent reinforcement learning. Through experiments conducted on various challenging continuous control tasks, we demonstrate the significant potential of the proposed geometric regularity in enhancing robot learning capabilities. + +§ I. INTRODUCTION + +Robots have the ability to undertake tasks that are dangerous or difficult for humans. With more degrees of freedom, they can perform increasingly complex tasks. For example, humanoid robots and quadrupedal robots can walk over challenging terrain, while robot arms and hands can achieve dexterous manipulation. However, controlling robots with a large number of degrees of freedom becomes increasingly difficult as the observation and action space grows exponentially. Although deep reinforcement learning has been employed to solve various robot control problems [8, 11, 20, 3], learning effective control strategies for these robots remains a challenging task. + +Training neural networks on high-dimensional data is known to be challenging due to the curse of dimensionality [4]. To overcome this challenge, researchers have developed network architectures and incorporated various inductive biases that respect the structure and symmetries of the corresponding domains. For example, convolutional neural networks (CNNs) leverage the strong geometric prior of images by incorporating translation equivariance into the design of convolutional layers. This ensures that the extracted features move along with the original image, regardless of the direction it is shifted in. Similarly, graph neural networks (GNNs) take advantage of the geometric prior of permutation invariance in other domains to capture the relationships among objects. Overall, incorporating domain-specific inductive biases and symmetries can greatly improve the ability of neural networks to learn from high-dimensional data. 
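To make the translation-equivariance property concrete, here is a tiny, self-contained check (our own illustration, not part of the paper) that a circular 1-D convolution commutes with shifts of its input:

```python
import numpy as np

def conv1d_circular(x, k):
    """Circular 1-D convolution (correlation) of signal x with kernel k."""
    n = len(x)
    return np.array([sum(k[j] * x[(i + j) % n] for j in range(len(k)))
                     for i in range(n)])

x = np.random.randn(8)
k = np.array([1.0, -2.0, 1.0])
shift = 3

# Equivariance: convolving a shifted input equals shifting the convolved output.
lhs = conv1d_circular(np.roll(x, shift), k)
rhs = np.roll(conv1d_circular(x, k), shift)
assert np.allclose(lhs, rhs)
```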
However, in the realm of deep reinforcement learning research, the potential benefits of utilizing symmetry structures present in environments, such as reflectional and rotational symmetry, have attracted little attention; how to incorporate such prior knowledge to improve existing approaches therefore remains worth investigating. To bridge this gap, we propose to reformulate robot control problems under the Multi-Agent Reinforcement Learning (MARL) framework to better leverage these symmetry structures. We demonstrate the surprising effectiveness of our approach by combining the new architectures with model-free deep reinforcement learning methods. Additionally, we establish a connection between our proposed geometric prior and the important concept of "Parameter Sharing" in multi-agent reinforcement learning, which substantially reduces the optimization space and speeds up learning. We also design a set of challenging robot control tasks (see Fig. 1) and evaluate our method on them. Our experimental results show that the proposed method significantly improves performance on robot control learning tasks.

Fig. 1: We design tasks (all except TriFinger [3]) that are challenging for current deep reinforcement learning baseline algorithms.

§ II. BACKGROUND AND RELATED WORK

§ A. MULTI-AGENT REINFORCEMENT LEARNING (MARL)

MARL is an extension of reinforcement learning to decision-making problems in which multiple agents interact and learn in a single environment. The most popular mathematical framework for MARL problems is the Markov game. A Markov game is a tuple $\left\langle \mathcal{N}, \mathcal{S}, \mathcal{O}, \mathcal{A}, P, R_i, \gamma \right\rangle$, where $\mathcal{N}$ is the set of all agents and $\mathcal{S}$ is the set of states. $\mathcal{O}_i$ and $\mathcal{A}_i$ are the observation and action spaces of agent $i$, while $\mathcal{O} = \times_{i \in \mathcal{N}} \mathcal{O}_i$ and $\mathcal{A} = \times_{i \in \mathcal{N}} \mathcal{A}_i$ denote the joint observation and action spaces. Let $\Delta_{\mathcal{S}}$ and $\Delta_{\mathcal{A}}$ be the sets of probability measures on $\mathcal{S}$ and $\mathcal{A}$, respectively. Then $P(s' \mid s, a) : \mathcal{S} \times \mathcal{A} \rightarrow \Delta_{\mathcal{S}}$ is the transition probability. Each agent $i$ maintains its own reward function $R_i(s, a) : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, and future rewards are discounted by the factor $\gamma \in [0, 1]$. Let $\Pi_i = \left\{ \pi_i(a_i \mid o_i) : \mathcal{O}_i \rightarrow \Delta_{\mathcal{A}_i} \right\}$ be the policy space of agent $i$; the objective of agent $i$ is then $\max_{\pi_i} \mathbb{E}_{\pi, P}\left[ \sum_{t=0}^{+\infty} \gamma^t R_i(s_t, a_t) \right]$. In practice, the state space and the observation space can be identical if the observation already fully describes the system. Our paper follows this assumption and hence works with observations alone.

Multi-Agent Mujoco [13] is a popular benchmark for MARL algorithms that divides a single robot into several distinct parts with separate action spaces.
However, state-of-the-art MARL algorithms still cannot match the performance of single-agent algorithms on this benchmark. Unlike that work, which divides robots into parts arbitrarily and ignores their geometric structure, we leverage ideas from geometric regularity during MARL training, and our results show that MARL can then outperform single-agent algorithms by a substantial margin.

§ B. SYMMETRY IN ROBOT LEARNING

In the robot learning domain, two groups of symmetric structures have been used to improve performance and learning efficiency. 1) Extrinsic Symmetry: By extrinsic symmetry we refer to symmetries in the exteroceptive sensors of the robot, such as camera input. Several works [18, 24, 17, 19] integrate these symmetries into system identification via neural networks, especially CNN-structured ones. These methods can largely improve performance on manipulation tasks, but they mostly target manipulation tasks with image input and grippers without roll-pitch movement. Van der Pol et al. [16] introduce MDP homomorphic networks to numerically construct equivariant network layers. However, the proposed network only considers a pole-balancing task with discrete actions. Moreover, additional calculation is required to design the network even when the domain-specific transformation is given. Mondal et al. [12] propose to learn symmetry directly from data in the latent space, but their method is still limited to representation learning from images. 2) Intrinsic Symmetry: Different from extrinsic symmetries, intrinsic symmetries mostly arise naturally from the physical constraints of the control system. For example, a humanoid robot control task exhibits reflectional symmetry, and a symmetric control policy on such a robot is usually more natural and effective. Mavalankar [10] proposes a data-augmentation method to improve reinforcement learning for rotation-invariant locomotion. Abdolhosseini et al. [2] investigate four different methods to encourage symmetric motion of bipedal simulated robots, implemented via a specific policy network, data augmentation, or an auxiliary loss function. Even though the robots' motions become more natural-looking, these methods do not show a major improvement across tasks. The policy-network method in [2] is similar to ours. However, instead of a specific network merely for locomotion tasks with reflectional symmetry, we propose a generic equivariant policy network for both reflectional and rotational symmetries, which are the predominant symmetry features in robotic systems and animal biology. Moreover, we approach the control task from the perspective of multi-agent systems. Finally, we obtain substantial performance improvements in experiments by reducing the policy search space.

Fig. 2: Agent partitioning considering symmetry structures: Humanoid and Cheetah robots split into left and right parts by reflectional symmetry; TriFinger and Ant robots split into 3 and 4 parts by rotational symmetry, where each part is controlled individually by a dedicated agent. The central part (grey) is controlled by all agents.

§ III. SINGLE ROBOT CONTROL AS MARL

Instead of learning a single-agent policy to control the whole robot, which leads to a large observation-action space that is difficult to optimize, we take inspiration from MARL and introduce multiple agents, each responsible for an individual component of the robot.
We further propose a framework driven by the presence of symmetry structures in many robots, and exploit such inductive biases to facilitate training through parameter sharing.

Our method proceeds in three steps: (1) identify the geometric structures of different robots and divide each robot into multiple parts accordingly; (2) reformulate the control problem in a MARL framework; (3) optimize the policies with parameter sharing.

§ A. DIVIDING SINGLE ROBOTS INTO MULTIPLE PARTS

Previous research [13] also divides a single robot into multiple parts to evaluate the performance of MARL methods. However, its irregular partitioning makes it hard for multi-agent methods to compete with single-agent methods. In this paper, we reconsider partitioning in a more principled way, by taking the symmetry structures of robots into account when dividing them into multiple agents.

As shown in Fig. 2a, robots with reflectional symmetry can be partitioned into left (blue), right (green) and central (grey) parts. The robots with rotational symmetry in Fig. 2b are partitioned into as many parts as there are symmetric limbs (colour) plus a central part (grey). For a robot with either of these symmetric structures, we split the robot's original observation-action space $\mathcal{O} \times \mathcal{A}$ as $\mathcal{O} = \mathcal{O}_\mathrm{c} \times \prod_{i \in \mathcal{N}} \mathcal{O}_{\mathrm{s},i}$ and $\mathcal{A} = \mathcal{A}_\mathrm{c} \times \prod_{i \in \mathcal{N}} \mathcal{A}_{\mathrm{s},i}$. Here $\mathcal{O}_\mathrm{c} \times \mathcal{A}_\mathrm{c}$ represents the central observation-action pair, which consists of measurements and actuators that have no symmetric counterparts, such as the position, orientation, velocity and joints of the torso, the target direction, or the states of manipulated objects. Raw sensor data such as images and point clouds also belong to the central observation. $\mathcal{O}_{\mathrm{s},i} \times \mathcal{A}_{\mathrm{s},i}$ corresponds to the symmetric observation-action spaces, whose measurements may include joint positions and velocities of the limbs, contact sensor measurements of the feet or fingers, and so on. The symmetric observation-action spaces are identical for all $i \in \mathcal{N}$ due to the robot's symmetry.

Fig. 3: a) TriFinger robot moves a sphere towards a target position. From left to right: the original state, rotated by $120^\circ$, and rotated by $240^\circ$. Note that the actions of different body parts should be equivariant with regard to the transformation. The red arrow represents the desired moving direction of the manipulated object. b) Equivariant policy network with parameters $\Phi$. $\mathbf{c}$ and $\mathbf{s}$ stand for central and symmetric actions. c) Invariant value network with parameters $\Psi, \Theta$.

§ B. MULTI-AGENT REINFORCEMENT LEARNING FORMULATION

Let the original observation and action of the whole robot be $o \in \mathcal{O}$ and $a \in \mathcal{A}$, respectively, and let the number of agents $\lvert \mathcal{N} \rvert$ equal the number of symmetric parts of the robot. For each agent $i \in \mathcal{N}$, there is a unique transformation function $T_i$ that produces its own observation $o_i = T_i(o)$. A detailed explanation of $T_i$ can be found in Appendix A1; a minimal illustrative sketch follows.
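The sketch below (ours; the paper defines $T_i$ precisely in Appendix A1) shows what a reflectional $T_i$ might look like for a toy observation layout, together with parameter sharing across the two agents. The index sets and the stand-in policy are assumptions for illustration only.

```python
import numpy as np

def T_reflect(o, left_idx, right_idx, lateral_idx):
    """Hypothetical reflectional T_i: swap the left/right limb blocks and
    negate the lateral components of the central observation."""
    o_i = o.copy()
    o_i[left_idx], o_i[right_idx] = o[right_idx], o[left_idx]
    o_i[lateral_idx] *= -1.0          # e.g. v_{torso,y}, omega_{torso,x}, ...
    return o_i

# Toy layout: obs = [central (4) | left limbs (3) | right limbs (3)],
# with one lateral velocity stored at index 1 of the central block.
left_idx, right_idx = np.arange(4, 7), np.arange(7, 10)
lateral_idx = np.array([1])

T = [lambda o: o,                                               # agent 0: identity
     lambda o: T_reflect(o, left_idx, right_idx, lateral_idx)]  # agent 1: mirror

shared_policy = lambda o_i: -0.1 * o_i[4:7]   # stand-in for the shared network
o = np.random.randn(10)
actions = [shared_policy(T[i](o)) for i in range(len(T))]       # a_{s,i} per agent
```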
Each agent generates its local action $a_i$, consisting of $a_{\mathrm{c},i} \in \mathcal{A}_\mathrm{c}$ and $a_{\mathrm{s},i} \in \mathcal{A}_{\mathrm{s},i}$ for the central and symmetric actions, with its own policy network. Finally, the whole robot's action $a$ is recovered by gathering all symmetric actions $a_{\mathrm{s},i}$ and merging all central actions $a_{\mathrm{c},i}$ into $a_\mathrm{c}$.

Regarding the reward function, our formulation follows the cooperative MARL setup, where the $R_i$ for all $i \in \mathcal{N}$ are identical at every time step. This shared reward is calculated by a task-related reward function $R(o, a)$ that depends on the whole robot's observation and action. To optimize the policies $\pi_i$, we adopt the multi-agent version of Proximal Policy Optimization (PPO) [14]. PPO is a popular model-free actor-critic reinforcement learning algorithm across domains [22, 3, 11], owing to its stability, good performance and ease of implementation. Its multi-agent version also achieves competitive performance on different MARL benchmarks [23, 6].

§ C. GEOMETRIC REGULARIZATION

Parameter sharing has been recognized as a crucial element of efficient training in MARL [7]. By letting agents share the parameters of their policy networks, parameter sharing not only facilitates scalability to large numbers of agents but also lets agents exploit shared learned representations, leading to reduced training time and improved overall performance. However, Christianos et al. [5] show that indiscriminately applying parameter sharing can hurt the learning process. Successful parameter sharing relies on the agents being homogeneous: agents given the same observation should take the same action. This assumption ensures the transformation equivariance of the overall policy with respect to the symmetry structures.

Take the simplified TriFinger Move task as an example, where the TriFinger robot has to move a sphere towards a target position. As shown in Fig. 3a, if the whole system is rotated by $120^\circ$ or $240^\circ$ around the $z$ axis of the robot base, the actions of the optimal policy should also shift circularly among the three fingers. Given the whole robot's observation $o$, this relationship can be written as

$$
A_{\mathrm{s},j}\left( T_i(o) \right) = A_{\mathrm{s},i}\left( T_j(o) \right), \quad A_\mathrm{c}\left( T_i(o) \right) = T_i\left( A_\mathrm{c}(o) \right) \tag{1}
$$

where $A_{\mathrm{s},j}$ is the symmetric action of the $j$th agent, $A_\mathrm{c}$ is the central action, and $T_i$ is the symmetry transformation between agents $i$ and 0 (see the definition in Appendix A1). The transformations for observations and actions are so similar that, for simplicity, we do not distinguish between them in this work. Note that the assignment of robot parts to agents can be chosen arbitrarily; it does not affect the equivariance/invariance.

Based on the equivariance expressed by Eq. 1, we design the multi-agent actor-critic network structure in Fig. 3b, 3c. Agent $i$ feeds the transformed observation $T_i(o)$ to the policy network, whose output consists of $a_{\mathrm{c},i}$ and $a_{\mathrm{s},i}$.
The central joints are controlled by the mean of all agents' central outputs $a_{\mathrm{c},i}$, while $a_{\mathrm{s},i}$ is used as the action for robot part $i$. The policy network parameters are shared among agents. The value network takes the observations of all agents as input. The observations first pass through shared feature-learning layers; the latent features are then merged by a set operator (the mean in this work), and the value is finally computed from the merged feature.

The proposed policy network is equivariant with respect to the symmetric transformations considered in this work, while the value network is an invariant function (see the proof in Appendix A2). By sharing the same policy network among all agents, we incorporate the geometric regularization and reduce the dimension of the observation-action space.

Fig. 4: Learning curves on robot control tasks. The x-axis is environment time steps and the y-axis is the episodic return during training. All graphs are plotted with the median and 25%-75% percentile shading across 5 random seeds.

§ IV. EXPERIMENTS AND DISCUSSION

§ A. EXPERIMENTAL SETUP

1) Challenging Tasks: Previous robotic control benchmarks [15] evaluate algorithms on fundamental tasks, e.g. controlling agents to walk. The movements in these tasks are limited, and it is relatively easy to learn an optimal policy. In this work, we design several more challenging robotic control tasks on which current state-of-the-art methods fail to achieve good performance. The tasks are shown in Fig. 1: Humanoid Tightrope, Humanoid Dribbling, A1 Beam, Trifinger Move and Ant Acrobatic. Detailed descriptions of the tasks can be found in Appendix B2. All experiments are carried out in the NVIDIA Isaac Gym [9] robotics simulator.

2) Baselines: For each task, we compare our method, named Multi-agent with Symmetry Augmentation (MASA), with a set of baselines:

 * Single-agent (SA): We first compare against the single-agent reinforcement learning algorithm, which views all parts of the robot as a whole and optimizes them jointly. This baseline provides an intuitive comparison of our proposed framework to classic reinforcement learning approaches. The state space is kept the same as the multi-agent one for a fair comparison.

 * Single-agent with Symmetry Augmentation (SASA): This baseline follows the SA setup and is augmented with a symmetry loss [2]. Specifically, for any received observation $o$, we calculate its symmetric representations $T_i(o)$ based on the robot's structure. We regularize the policy function $\pi$ and the value function $V$ in PPO with extra symmetry losses by minimizing $\left\| T_i\left( A(o) \right) - A\left( T_i(o) \right) \right\|_2$ and $\left| V(o) - V\left( T_i(o) \right) \right|$, where $A$ and $V$ are the gathered action and critic value of the robot.

 * Multi-agent without Symmetry Augmentation (MA): This baseline uses the same architecture as MASA but does not apply the transformations in Fig. 3b, 3c. Thus the geometric regularity of symmetry is ignored, following previous research [13]. We concatenate a one-hot agent-id encoding to each agent's observation, a common practice for non-homogeneous agents.

The hyperparameters are listed in Appendix B1.

§ B. MAIN RESULTS
Figure 4 presents the average return of all methods on the different tasks during training. The proposed method MASA significantly outperforms the other baselines across all 5 tasks. Moreover, its advantage over the baselines grows with task difficulty, as indicated by the number of joints, the state dimension and the size of the state space. Humanoid Tightrope and Humanoid Dribbling control the same robot, but in the tightrope task the robot only needs to walk forward, while the dribbling task involves random turns and an external object, so the other baselines can hardly learn meaningful behaviours on it. Besides, we find a correlation between the performance gain of MASA and the reduction in action dimension. In tasks such as Humanoid Tightrope and A1 Beam, the action dimensions are reduced by 9 and 6 respectively, resulting in varying performance gains. Notably, tasks like Trifinger Move and Ant Acrobatic, which retain only $\frac{1}{3}$ and $\frac{1}{4}$ of the action dimensions respectively under our method, demonstrate substantial performance improvements, emphasizing the advantage of symmetry-based reduction of the optimization space.

By comparing the results of MASA, MA and SASA, we observe that both factors in MASA, the multi-agent framework and the symmetry structure, play an important role. Utilizing the symmetry structure alone (SASA) can gradually learn to solve a few tasks, but with apparently lower data efficiency, because the optimization space is not reduced and is thus larger than that of MASA. The multi-agent structure by itself (MA) cannot guarantee meaningful results at all, which echoes the criticism of naively sharing parameters among non-homogeneous agents [5].

§ C. DISCUSSION

Our proposed multi-agent method exhibits impressive performance on challenging control tasks. The network structures we introduce are not limited to on-policy reinforcement learning and can be adapted to off-policy learning, imitation learning, and model-based learning methods. While our approach is straightforward to implement with observation transformations, it still requires domain knowledge. We believe our method can enhance robot learning in more demanding tasks, serving as a guide for designing robots with increased degrees of freedom while keeping the growth of the observation-action space linear. Future research directions include exploring additional symmetric structures and automating the identification of robots' intrinsic symmetries.

diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/heTTJfkTQC/Initial_manuscript_md/Initial_manuscript.md b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/heTTJfkTQC/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..afb8ab89a07ee8dbae35e103e638a50c6d1751e9 --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/heTTJfkTQC/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,326 @@

# Robotic Manipulation Learning with Equivariant Descriptor Fields: Generative Modeling, Bi-equivariance, Steerability, and Locality

Jiwoo Kim*${}^{\dagger}$, Hyunwoo Ryu*${}^{\ddagger}$, Jongeun Choi${}^{\S\ddagger}$, Joohwan Seo${}^{\P}$, Nikhil Prakash${}^{\P}$, Ruolin Li${}^{\P}$, R. Horowitz${}^{\P}$
${}^{\dagger}$ School of Electrical and Electronic Engineering, ${}^{\ddagger}$ Department of Artificial Intelligence,

${}^{\S}$ School of Mechanical Engineering, Yonsei University, Seoul, Republic of Korea

Emails: \{nfsshift9801, tomato1mule, jongeunchoi\}@yonsei.ac.kr

${}^{\P}$ Department of Mechanical Engineering, University of California, Berkeley, CA, USA

Emails: \{joohwan_seo, nikhilps, ruolin_li, horowitz\}@berkeley.edu

*Equal Contribution

Abstract-Conventional end-to-end visual robotic manipulation learning methods suffer from data inefficiency and limited generalization. To address these challenges, we introduce Equivariant Descriptor Fields (EDFs), a novel approach enabling end-to-end visual robotic manipulation learning with $SE(3)$-equivariance. In particular, EDFs leverage the power of generative modeling, bi-equivariance, steerable representations, and locality in order to achieve effective $SE(3)$-equivariant manipulation learning. Remarkably, EDFs can be trained with just a few (5 to 10) expert demonstrations, without the need for prior knowledge. In this paper, we discuss how EDFs incorporate these four key concepts as compared to recent works on equivariant robotic manipulation. Through extensive experiments on 6-DoF robotic manipulation tasks, we demonstrate the impressive generalizability and sample efficiency of EDFs.

## I. INTRODUCTION

Recently, equivariant methods have gained notable attention due to their robustness to transformations, data efficiency, and generalizability. Incorporating equivariance has shown promising results in various fields, including proteins [15, 11], molecules [12, 4], 3D object segmentation [17, 6], shape reconstruction [1, 2], and reinforcement learning [27, 18, 31].

For robotic manipulation tasks, the need for numerous demonstrations [8, 14, 7, 39] and rollouts [16] has been a critical weakness. However, recent research reveals that incorporating equivariance can improve data efficiency and generalizability. Planar roto-translation equivariance, also known as $SE(2)$-equivariance, has been used to improve the efficiency of behavior cloning [40, 13, 21] and reinforcement learning methods [30, 28, 29, 41] for planar robotic manipulation tasks. For highly spatial manipulation tasks, full roto-translation equivariance, i.e. $SE(3)$-equivariance, is required. Neural Descriptor Fields (NDFs) [23] and their variants [24, 3] have leveraged this property. While these methods show remarkable data efficiency and generalizability, they rely on pre-training and object segmentation pipelines.

To this end, we introduce the Equivariant Descriptor Fields (EDFs) model [20], an end-to-end trainable method for $SE(3)$-equivariant manipulation learning. EDFs can learn from only a few (5 to 10) task demonstrations without requiring any prior knowledge or object segmentation.

## II. Preliminaries: Representation Theory

A map $D$ from a group $\mathcal{G}$ to the group $GL(N)$ of invertible $N \times N$ real matrices is called a representation. A representation $D$ satisfies $D(g) D(h) = D(gh)$ for every $g, h \in \mathcal{G}$, as the short numerical check below illustrates.
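For instance (our own illustration, using SciPy), the trivial type-0 map and the identity type-1 map on rotations both satisfy this homomorphism property:

```python
import numpy as np
from scipy.spatial.transform import Rotation

g, h = Rotation.random(), Rotation.random()   # two random elements of SO(3)

# Type-1 representation of SO(3): D_1(R) = R, the 3x3 rotation matrix itself.
D1 = lambda r: r.as_matrix()
assert np.allclose(D1(g) @ D1(h), D1(g * h))  # D(g) D(h) = D(gh)

# Type-0 (trivial) representation: D_0(R) = 1, so scalars are invariant.
D0 = lambda r: np.eye(1)
assert np.allclose(D0(g) @ D0(h), D0(g * h))
```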
In the representation theory of the $SO(3)$ group, a representation $D(R)$ for $R \in SO(3)$ can be expressed, in an orthogonal basis, as a block-diagonal direct sum of real Wigner D-matrices $D_l(R) \in \mathbb{R}^{(2l+1) \times (2l+1)}$ of degree $l \in \{0, 1, 2, \ldots\}$. Wigner D-matrices are the irreducible representations of the $SO(3)$ group. A type-$l$ vector is a $(2l+1)$-dimensional vector that is transformed by $D_l(R)$ under a rotation $R$. In particular, type-0 vectors are invariant to rotations (i.e. scalars), such that $D_0(R) = I$. On the other hand, type-1 vectors are rotated by the 3D rotation matrices themselves, that is, $D_1(R) = R$.

Let $\mathcal{O}$ be the set of all possible colored point clouds. A point cloud is given by $O = \{ (x_i, c_i) : i \in \mathcal{I} \}$, where $x_i \in \mathbb{R}^3$ and $c_i \in \mathbb{R}^3$ are point $i$'s position and color. A type-$l$ vector field $f : \mathbb{R}^3 \times \mathcal{O} \rightarrow \mathbb{R}^{2l+1}$ generated by $O \in \mathcal{O}$ is $SE(3)$-equivariant if $D_l(R) f(x \mid O) = f(gx \mid g \cdot O)$ for all $g = (p, R) \in SE(3)$, $x \in \mathbb{R}^3$ and $O \in \mathcal{O}$, where $g \cdot O = \{ (g x_i, c_i) : i \in \mathcal{I} \}$.

## III. Equivariant Descriptor Fields: THE FOUR KEY MODEL PROPERTIES

Equivariant Descriptor Fields (EDFs) [20] are $SE(3)$-equivariant models for end-to-end visual robotic manipulation learning. Different from previous $SE(3)$-equivariant methods, EDFs are capable of learning manipulation tasks from only a few demonstrations without requiring any prior knowledge, such as pre-training or object segmentation. In what follows, we delve into EDFs and other models, focusing on the four key model properties, viz., generative modeling, bi-equivariance, steerable representations and locality (see Table I), in order to gain valuable insights and illustrate the significance of EDFs.

TABLE I: Comparison of recently proposed equivariant methods for robotic manipulation learning.
| Method | Left Equiv. | Right Equiv. | Representations |
|---|---|---|---|
| Transporter Networks [40] | $SE(2)$ | Translation | Invariant |
| Equivariant Transporter Networks [13] | $SE(2)$ | $SE(2)$ | Equivariant |
| Equivariant RL (SAC/DQN) [28, 29, 30] | $SE(2)$ | $\mathbb{Z}_2$ | Equivariant |
| NDFs [23] | $SE(3)$ | - | Invariant |
| L-NDFs [3] | $SE(3)$ | - | Invariant |
| R-NDFs [24] | $SE(3)$ | $SE(3)$ | Invariant |
| EDFs [20] | $SE(3)$ | $SE(3)$ | Equivariant |
## A. Generative Modeling

In practice, expert demonstration policies for robotic manipulation tasks are rarely unimodal. To illustrate this, consider a mug-picking task: the human expert may occasionally choose to grasp the mug by the rim and at other times by the handle. To properly learn such multimodalities, generative modeling of the policy distribution is required [19]. As shown in Fig. 1, naively regressing or discretizing the policy results in suboptimal policy distributions, whereas generative models such as energy-based models (EBMs) and diffusion models capture the behavior more accurately. EDFs take the EBM approach to model the policy distribution, enabling both end-to-end training and sampling. This is in contrast to the energy-minimization method of Simeonov et al. [23], which requires frozen pre-trained networks.

The EDFs' energy-based policy, conditioned on the point cloud observations of the scene $O^\text{scene}$ and the grasped object $O^\text{grasp}$, is defined on the $SE(3)$ manifold as

$$
P\left( g \mid O^\text{scene}, O^\text{grasp} \right) = \frac{\exp \left[ -E\left( g \mid O^\text{scene}, O^\text{grasp} \right) \right]}{Z} \tag{1}
$$

$$
\text{where } Z = \int_{SE(3)} dg \exp \left[ -E\left( g \mid O^\text{scene}, O^\text{grasp} \right) \right],
$$

and $E$ is an energy function which will be defined later.

## B. Bi-equivariance

To successfully perform object picking tasks, it is crucial for the end-effector pose to be equivariant to changes in the initial pose of the target object within the scene. To illustrate this scene equivariance, consider a task in which the end-effector pose $g_{WE} \in SE(3)$ in the world frame $W$ should be inferred from the observation of the scene $O^\text{scene}$. Here, $g_{WE} := (x_{WE}, R_{WE}) \in SE(3)$ denotes the configuration of the end-effector frame $E$ relative to $W$. Now, consider a new world frame $W'$. The reference frame change $\Delta g_W = g_{W'W} \in SE(3)$ induces the following transformations of the scene observation and end-effector pose:

$$
O_{W'}^\text{scene} = \Delta g_W \cdot O_W^\text{scene} = g_{W'W} \cdot O_W^\text{scene}
$$

$$
g_{W'E} = \Delta g_W\, g_{WE} = g_{W'W}\, g_{WE}
$$

The corresponding equivariant probabilistic policy${}^{1}$ $P$ must then satisfy

$$
P\left( \Delta g_W\, g_{WE} \mid \Delta g_W \cdot O_W^\text{scene} \right) = P\left( g_{W'E} \mid O_{W'}^\text{scene} \right)
$$

Since the perturbation $\Delta g_W$ appears on the left side of $g$, we refer to this scene equivariance as left equivariance. We illustrate left equivariance in Fig. 2; a small numerical illustration is given below.

![019640ff-be93-77a0-9c79-ad822cf5c345_1_978_460_600_443_0.jpg](images/019640ff-be93-77a0-9c79-ad822cf5c345_1_978_460_600_443_0.jpg)

Fig. 1: Comparison of behavior cloning methods: Generative models (EBM and Diffusion) accurately capture multimodal behaviors of the oracle policy $p(a \mid o)$ compared to regression (MSE) or discretized methods. Reproduced with the authors' permission [19].
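As a toy numerical check of this left-equivariance/invariance relation (our own sketch with a hand-made invariant energy, not the EDF model), changing the world frame of both the pose and the scene leaves the energy unchanged:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def apply(g, x):
    """Apply g = (p, R) in SE(3) to points x of shape (n, 3)."""
    p, R = g
    return x @ R.as_matrix().T + p

def compose(g2, g1):
    """Group product g2 g1 in SE(3): first apply g1, then g2."""
    p2, R2 = g2; p1, R1 = g1
    return (R2.apply(p1) + p2, R2 * R1)

def energy(g, scene):
    """Toy left-invariant energy: distance from the gripper origin,
    placed at pose g, to the nearest scene point."""
    tip = apply(g, np.zeros((1, 3)))
    return np.min(np.linalg.norm(scene - tip, axis=1))

scene = np.random.randn(50, 3)                        # toy scene point cloud
g = (np.random.randn(3), Rotation.random())           # candidate pose
dg = (np.random.randn(3), Rotation.random())          # change of world frame

# Left equivariance of the policy <=> invariance of the energy:
assert np.isclose(energy(compose(dg, g), apply(dg, scene)),
                  energy(g, scene))
```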
However, as it turns out, left equivariance alone is insufficient for successfully performing object placing tasks. Unlike picking tasks, which only require observing the scene, placing tasks also require observing the grasp, which adds another layer of complexity to the problem. Furthermore, the grasp pose inferred by a pick policy learned from a few expert demonstrations may not be optimal: the grasped object may be in a pose never shown in the expert demonstrations. Therefore, object placing tasks require another type of equivariance, namely grasp equivariance. Consider the same object, with attached body frame $B$, being grasped in two different manners, with end-effector frames denoted $E$ and $E'$ respectively. Let $O_E^\text{grasp}$ be the observation of the object grasped by an end-effector with frame $E$. We assume that frame $B$ is attached to the grasped object such that $g_{EB}$ is the pose of $B$ relative to frame $E$. A transformation of the grasped object pose due to a change $\Delta g_E$ between end-effector frames $E$ and $E'$, as shown in Fig. 3, induces the transformed observation relative to frame $E'$:

$$
O_{E'}^\text{grasp} = \Delta g_E \cdot O_E^\text{grasp} = g_{E'E} \cdot O_E^\text{grasp}.
$$

To keep the relative pose between the scene and the grasped object invariant, the end-effector pose must be transformed according to $\Delta g_E$ such that

$$
g_{WB} = g_{WE}\, g_{EB} = g_{WE'}\, g_{E'B} = g_{WE'}\, g_{E'E}\, g_{EB} = g_{WE'}\, \Delta g_E\, g_{EB}
$$

$$
\Rightarrow g_{WE'} = g_{WE}\, \Delta g_E^{-1}
$$

A probabilistic policy $P$ with such an equivariance requires

$$
P\left( g_{WE} \Delta g_E^{-1} \mid O_E^\text{scene}, \Delta g_E \cdot O_E^\text{grasp} \right) = P\left( g_{WE'} \mid O_{E'}^\text{scene}, O_{E'}^\text{grasp} \right)
$$

Notice that such grasp equivariance is a right equivariance, since the inverse of the perturbation, $\Delta g_E^{-1}$, appears on the right side of $g$. We illustrate right equivariance in Fig. 3. Combining the left and right equivariances, we finally define bi-equivariance [20] as follows:

$$
P\left( g \mid O^\text{scene}, O^\text{grasp} \right) = P\left( \Delta g_W g \mid \Delta g_W \cdot O^\text{scene}, O^\text{grasp} \right) \tag{2}
$$

$$
= P\left( g \Delta g_E^{-1} \mid O^\text{scene}, \Delta g_E \cdot O^\text{grasp} \right)
$$

---

${}^{1}$ The equivariant probabilistic policy implies invariance of the conditional probabilities when the state and action are equivariantly transformed.

---

![019640ff-be93-77a0-9c79-ad822cf5c345_2_191_153_624_318_0.jpg](images/019640ff-be93-77a0-9c79-ad822cf5c345_2_191_153_624_318_0.jpg)

Fig. 2: Left equivariance: the target pose is equivariant to transformations of the scene; accordingly, the perturbation $\Delta g$ appears on the left of $g$.

Among $SE(2)$-equivariant methods, Transporter Networks [40] and recently proposed equivariant reinforcement learning methods [28, 29, 30] are left equivariant, but not fully right equivariant (only translation equivariant).
On the other hand, Equivariant Transporter Networks [13] incorporate full $SE(2)$ bi-equivariance, thereby achieving a significant increase in data efficiency over Transporter Networks. Among $SE(3)$-equivariant methods, Neural Descriptor Fields (NDFs) [23] and Local Neural Descriptor Fields (L-NDFs) [3] are uni-equivariant. Since NDFs and L-NDFs assume a fixed placement target pose, bi-equivariance is not required there. However, to solve more general tasks such as object rearrangement, bi-equivariance becomes essential. Relational Neural Descriptor Fields (R-NDFs) [24] are a bi-equivariant method for object rearrangement tasks; however, pre-trained NDFs and a human-annotated object keypoint are required to equivariantly align query points for training.

On the other hand, EDFs [20] directly infer query points using an $SE(3)$-equivariant query density model that can be trained end-to-end. EDFs achieve bi-equivariance of the policy in (1) with a bi-equivariant energy function $E(g \mid O^\text{scene}, O^\text{grasp})$. The specific design of this energy function is introduced subsequently.

## C. Steerable Representation

To achieve robust equivariant manipulation, a model must utilize symmetric feature representations of the observations. Steerable representations are proficient at representing such features due to their orientation sensitivity [33] (see Fig. 4). Moreover, being continuous, steerable representations retain more information than discretized ones and demonstrate better precision, as evidenced by [1].

Importantly, compared to rotation-invariant features, steerable features are superior at encoding the orientations of local geometries. To encode orientation information using rotation-invariant features, the features must be spatially distributed, breaking locality. For example, the color vector (red, green, blue) is such a rotation-invariant feature: to determine a rigid-body orientation, at least three non-collinear points of different colors are required. Conversely, one can represent orientation with only a single point using rotation-equivariant, i.e. steerable, features. Thus, orientation information can be localized into a single point, better capturing the local geometry. This makes the learned features more generalizable and less sensitive to disturbances.

![019640ff-be93-77a0-9c79-ad822cf5c345_2_985_137_555_307_0.jpg](images/019640ff-be93-77a0-9c79-ad822cf5c345_2_985_137_555_307_0.jpg)

Fig. 3: Right equivariance: the target pose is equivariant to the grasp pose; accordingly, the perturbation $\Delta g$ appears on the right of $g$.

Transporter Networks [40] and Neural Descriptor Fields variants [23, 24, 3] utilize rotation-invariant feature fields to obtain equivariance (e.g., the feature maps of CNNs can be thought of as 2-dimensional feature fields). Alternatively, Huang et al. [13] and Wang et al. [30, 28, 29] utilize steerable features of the $C_n$ group (the discretized $SO(2)$ group), thereby significantly improving data efficiency. A minimal numerical illustration of a steerable feature field is given below.
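The following sketch (ours, for illustration only) builds a hand-crafted $SE(3)$-equivariant type-1 field over a point cloud, i.e. one that transforms with $D_1(R) = R$, exactly in the sense defined in Section II:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def type1_field(x, points):
    """A hand-built SE(3)-equivariant type-1 vector field: a distance-
    weighted sum of displacement vectors from x to the cloud points."""
    d = points - x                               # (n, 3) displacements
    w = np.exp(-np.linalg.norm(d, axis=1))       # rotation-invariant weights
    return w @ d                                 # (3,) type-1 feature

points = np.random.randn(30, 3)                  # toy point cloud O
x = np.random.randn(3)
R, p = Rotation.random(), np.random.randn(3)     # g = (p, R) in SE(3)

# Steerability: f(gx | g.O) = D_1(R) f(x | O), with D_1(R) = R.
lhs = type1_field(R.apply(x) + p, points @ R.as_matrix().T + p)
rhs = R.apply(type1_field(x, points))
assert np.allclose(lhs, rhs)
```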
An EDF $\varphi(x \mid O)$ is defined as the concatenation of $N$ $SO(3)$-steerable vector fields that are $SE(3)$-equivariant:

$$
\varphi(x \mid O) = \bigoplus_{n=1}^{N} \varphi^{(n)}(x \mid O)
$$

where $\varphi^{(n)}(x \mid O) : \mathbb{R}^3 \times \mathcal{O} \rightarrow \mathbb{R}^{2 l_n + 1}$ is an $SE(3)$-equivariant type-$l_n$ vector field generated by $O$. Therefore, $\varphi(x \mid O)$ is transformed by $g = (p, R) \in SE(3)$ as

$$
\varphi(gx \mid g \cdot O) = D(R)\, \varphi(x \mid O) = \begin{pmatrix} D_{l_1}(R) & \cdots & \varnothing \\ \vdots & \ddots & \vdots \\ \varnothing & \cdots & D_{l_N}(R) \end{pmatrix} \varphi(x \mid O)
$$

where $D(R)$ is a block-diagonal matrix of Wigner D-matrices.

The $SE(3)$ bi-equivariant energy function for the EBM in Eq. (1) can be constructed with EDFs as

$$
E\left( g \mid O^\text{scene}, O^\text{grasp} \right) = \int_{\mathbb{R}^3} d^3x\, \rho\left( x \mid O^\text{grasp} \right) \left\| \varphi\left( gx \mid O^\text{scene} \right) - D(R)\, \psi\left( x \mid O^\text{grasp} \right) \right\|^2
$$

where $\varphi(x \mid O^\text{scene})$ is the key EDF, $\psi(x \mid O^\text{grasp})$ is the query EDF, and $\rho(x \mid O^\text{grasp})$ is the query density, all of which are $SE(3)$-equivariant and learnable neural fields.

## D. Locality

For a robotic manipulation model to be robust, it must be able to pick and place objects in previously unseen poses. If the model can learn local geometric structures that are shared across different objects, its generalizability greatly increases. For example, if a model was trained to pick a mug by holding the rim, the similarities in local geometric features can be utilized to grasp other objects by the rim. Consequently, locality is critical for generalizability and data efficiency. Recent studies in various fields such as robotics [3], point cloud segmentation [6], and shape reconstruction [1] highlight the importance of incorporating locality into equivariant methods.

![019640ff-be93-77a0-9c79-ad822cf5c345_3_149_148_721_275_0.jpg](images/019640ff-be93-77a0-9c79-ad822cf5c345_3_149_148_721_275_0.jpg)

Fig. 4: Visualization of type-$l$ features $(l = 0, 1, 2, \ldots)$. Higher-type features are sensitive to the orientations of local geometries such as planes and corners. Reproduced with the authors' permission [1].

Another benefit of imposing locality on equivariant methods is that the target object need not be segmented from the background. For unsegmented observations, only equivariance to the target object is desired, and equivariance to the background must be suppressed. We name this property local equivariance, in contrast to global equivariance (see Fig. 5). However, naively applying Eq. (2) can only guarantee global equivariance. Therefore, special care must be taken in designing methods that respect the locality of the tasks so as to obtain local equivariance.
For example, Transporter Networks and their variants [40, 21, 13] naturally exploit the locality of convolutional neural networks. Therefore, Transporter Networks and their variants can be used without object segmentation pipelines or any other object-centric assumptions. On the other hand, NDFs [23] and R-NDFs [24] rely on centroid subtraction to achieve translational equivariance. Due to the highly non-local nature of centroid subtraction, these methods require the target object to be segmented from the background.

EDFs utilize a Tensor Field Network (TFN) [25] model for the final layer and $SE(3)$-Transformers [10] in the other layers. These methods rely on spatial convolutions, so locality is easily obtained by using convolution kernels with finite support. This is in contrast to the Vector Neurons [5] method used as the backbone network in [23, 24]. Further detailed explanations, mathematical proofs, and model architectures can be found in [20].

## IV. EXPERIMENTAL RESULTS

To compare the equivariant generalization performance of EDFs with other methods, experiments were conducted on a mug-hanging task and a bowl/bottle pick-and-place task. The objective is to pick a mug or bowl/bottle and place it on a randomly posed hanger or plate. We evaluate EDFs in multiple scenarios, which include unseen poses, unseen distracting objects, and unseen instances in randomized poses.

![019640ff-be93-77a0-9c79-ad822cf5c345_3_1014_150_572_470_0.jpg](images/019640ff-be93-77a0-9c79-ad822cf5c345_3_1014_150_572_470_0.jpg)

Fig. 5: The difference between global and local equivariance: global equivariance concerns the translation of the whole scene, while local equivariance concerns the translation of the target object.

First, we compare EDFs with $SE(3)$ Transporter Networks [40], which extend the original Transporter Networks by regressing the additional degrees of freedom (height, roll, pitch). Table II of Appendix A shows that EDFs outperform Transporter Networks in all three tasks. EDFs were more robust than Transporter Networks, illustrating the significance of $SE(3)$-equivariance for spatial manipulation tasks.

In comparing EDFs to NDFs [23], it was necessary to account for some of NDFs' limitations, such as the fact that NDFs require segmentations and a fixed target place pose. Thus, we compare the EDF model to an NDF-like model that uses only type-0 descriptor vectors. As Table III of Appendix A shows, EDFs, which use higher-type descriptors, surpass the performance of the NDF-like model. Additional experimental descriptions and results can be found in Appendix A and [20].

## V. CONCLUSION

We introduce EDFs and emphasize the importance of the following four properties: 1) generative modeling, 2) bi-equivariance, 3) steerable representations, and 4) locality, for synthesizing noteworthy equivariant robotic manipulation learning models. We show the effectiveness and generalization of EDFs in inferring the target pose in spite of previously unseen instances, unseen poses, and distracting objects, using only a few demonstrations.

For future research, it could be beneficial to integrate $SE(3)$-equivariant shape reconstruction and SLAM methods [38, 1, 2, 9] with EDFs to overcome incomplete and noisy point cloud observations. Expanding EDFs to trajectory-level problems is also an important issue.
For kinematic and dynamic trajectory planning, one might consider incorporating guided diffusion methods [26] and the geometric impedance control framework [22], respectively. Lastly, to improve the speed of the MCMC sampling required by EDFs, techniques such as amortized sampling [36, 32] and cooperative learning [34, 35, 37] could be explored.

## References

[1] Evangelos Chatzipantazis, Stefanos Pertigkiozoglou, Edgar Dobriban, and Kostas Daniilidis. SE(3)-equivariant attention networks for shape reconstruction in function space. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=RDy3IbvjMqT.

[2] Yunlu Chen, Basura Fernando, Hakan Bilen, Matthias Nießner, and Efstratios Gavves. 3D equivariant graph implicit functions. In Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part III, pages 485-502. Springer, 2022.

[3] Ethan Chun, Yilun Du, Anthony Simeonov, Tomas Lozano-Perez, and Leslie Kaelbling. Local neural descriptor fields: Locally conditioned object representations for manipulation. arXiv preprint arXiv:2302.03573, 2023.

[4] Gabriele Corso, Hannes Stärk, Bowen Jing, Regina Barzilay, and Tommi Jaakkola. DiffDock: Diffusion steps, twists, and turns for molecular docking, 2023.

[5] Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, and Leonidas Guibas. Vector neurons: A general framework for SO(3)-equivariant networks. arXiv preprint arXiv:2104.12229, 2021.

[6] Congyue Deng, Jiahui Lei, Bokui Shen, Kostas Daniilidis, and Leonidas Guibas. Banana: Banach fixed-point network for pointcloud segmentation with inter-part equivariance. arXiv preprint arXiv:2305.16314, 2023.

[7] Coline Devin, Payam Rowghanian, Chris Vigorito, Will Richards, and Khashayar Rohanimanesh. Self-supervised goal-conditioned pick and place, 2020.

[8] Yuqing Du, Daniel Ho, Alexander A. Alemi, Eric Jang, and Mohi Khansari. Bayesian imitation learning for end-to-end mobile manipulation, 2022.

[9] Jiahui Fu, Yilun Du, Kurran Singh, Joshua B. Tenenbaum, and John J. Leonard. NeuSE: Neural SE(3)-equivariant embedding for consistent spatial understanding with objects. arXiv preprint arXiv:2303.07308, 2023.

[10] Fabian B. Fuchs, Daniel E. Worrall, Volker Fischer, and Max Welling. SE(3)-Transformers: 3D roto-translation equivariant attention networks. In Advances in Neural Information Processing Systems 34 (NeurIPS), 2020.

[11] Octavian-Eugen Ganea, Xinyuan Huang, Charlotte Bunne, Yatao Bian, Regina Barzilay, Tommi S. Jaakkola, and Andreas Krause. Independent SE(3)-equivariant models for end-to-end rigid protein docking. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=GQjaI9mLet.

[12] Jiaqi Guan, Wesley Wei Qian, Xingang Peng, Yufeng Su, Jian Peng, and Jianzhu Ma. 3D equivariant diffusion for target-aware molecule generation and affinity prediction, 2023.

[13] Haojie Huang, Dian Wang, Robin Walters, and Robert Platt. Equivariant transporter network. arXiv preprint arXiv:2202.09400, 2022.

[14] Mohi Khansari, Daniel Kappler, Jianlan Luo, Jeff Bingham, and Mrinal Kalakrishnan. Action image representation: Learning scalable deep grasping policies with zero real world data, 2020.

[15] Jae Hyeon Lee, Payman Yadollahpour, Andrew M.
Watkins, Nathan C. Frey, Andrew Leaver-Fay, Stephen Ra, Kyunghyun Cho, Vladimir Gligorijević, Aviv Regev, and Richard Bonneau. EquiFold: Protein structure prediction with a novel coarse-grained structure representation. bioRxiv, 2023.

[16] Jeong-Hoon Lee and Jongeun Choi. Hierarchical primitive composition: Simultaneous activation of skills with inconsistent action dimensions in multiple hierarchies. IEEE Robotics and Automation Letters, 7(3):7581-7588, 2022.

[17] Jiahui Lei, Congyue Deng, Karl Schmeckpeper, Leonidas Guibas, and Kostas Daniilidis. EFEM: Equivariant neural field expectation maximization for 3D object segmentation without scene supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4902-4912, 2023.

[18] Arnab Kumar Mondal, Pratheeksha Nair, and Kaleem Siddiqi. Group equivariant deep reinforcement learning, 2020.

[19] Tim Pearce, Tabish Rashid, Anssi Kanervisto, Dave Bignell, Mingfei Sun, Raluca Georgescu, Sergio Valcarcel Macua, Shan Zheng Tan, Ida Momennejad, Katja Hofmann, and Sam Devlin. Imitating human behaviour with diffusion models, 2023.

[20] Hyunwoo Ryu, Hong in Lee, Jeong-Hoon Lee, and Jongeun Choi. Equivariant descriptor fields: SE(3)-equivariant energy-based models for end-to-end visual robotic manipulation learning. International Conference on Learning Representations (ICLR), 2023.

[21] Daniel Seita, Pete Florence, Jonathan Tompson, Erwin Coumans, Vikas Sindhwani, Ken Goldberg, and Andy Zeng. Learning to rearrange deformable cables, fabrics, and bags with goal-conditioned transporter networks. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4568-4575. IEEE, 2021.

[22] Joohwan Seo, Nikhil Potu Surya Prakash, Alexander Rose, and Roberto Horowitz. Geometric impedance control on SE(3) for robotic manipulators. International Federation of Automatic Control (IFAC) World Congress, 2023.

[23] Anthony Simeonov, Yilun Du, Andrea Tagliasacchi, Joshua B. Tenenbaum, Alberto Rodriguez, Pulkit Agrawal, and Vincent Sitzmann. Neural descriptor fields: SE(3)-equivariant object representations for manipulation. arXiv preprint arXiv:2112.05124, 2021.

[24] Anthony Simeonov, Yilun Du, Lin Yen-Chen, Alberto Rodriguez, Leslie P. Kaelbling, Tomas L. Perez, and Pulkit Agrawal. SE(3)-equivariant relational rearrangement with neural descriptor fields. In Conference on Robot Learning (CoRL). PMLR, 2022.

[25] Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds, 2018.

[26] Julen Urain, Niklas Funk, Jan Peters, and Georgia Chalvatzaki. SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion. IEEE International Conference on Robotics and Automation (ICRA), 2023.

[27] Elise van der Pol, Daniel Worrall, Herke van Hoof, Frans Oliehoek, and Max Welling. MDP homomorphic networks: Group symmetries in reinforcement learning. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 4199-4210. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/2be5f9c2e3620eb73c2972d7552b6cb5-Paper.pdf.

[28] Dian Wang, Mingxi Jia, Xupeng Zhu, Robin Walters, and Robert Platt.
On-robot learning with equivariant models. In 6th Annual Conference on Robot Learning, 2022. URL https://openreview.net/forum?id=K8W6ObPZQyh.

[29] Dian Wang, Robin Walters, and Robert Platt. SO(2)-equivariant reinforcement learning. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=7F9cOhdvfk_

[30] Dian Wang, Robin Walters, Xupeng Zhu, and Robert Platt. Equivariant Q-learning in spatial action spaces. In Conference on Robot Learning, pages 1713-1723. PMLR, 2022.

[31] Dian Wang, Jung Yeon Park, Neel Sortur, Lawson L.S. Wong, Robin Walters, and Robert Platt. The surprising effectiveness of equivariant models in domains with latent symmetry. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=P4MUGRM4Acu.

[32] Dilin Wang and Qiang Liu. Learning to draw samples: With application to amortized MLE for generative adversarial learning. arXiv preprint arXiv:1611.01722, 2016.

[33] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3D steerable CNNs: Learning rotationally equivariant features in volumetric data. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/488e4104520c6aab692863cc1dba45af-Paper.pdf.

[34] Jianwen Xie, Yang Lu, Ruiqi Gao, Song-Chun Zhu, and Ying Nian Wu. Cooperative training of descriptor and generator networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(1):27-45, 2018.

[35] Jianwen Xie, Zilong Zheng, Xiaolin Fang, Song-Chun Zhu, and Ying Nian Wu. Cooperative training of fast thinking initializer and slow thinking solver for conditional learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(8):3957-3973, 2021.

[36] Jianwen Xie, Zilong Zheng, and Ping Li. Learning energy-based model with variational auto-encoder as amortized sampler. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 10441-10451, 2021.

[37] Jianwen Xie, Yaxuan Zhu, Jun Li, and Ping Li. A tale of two flows: Cooperative learning of Langevin flow and normalizing flow toward energy-based model. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=31d5RLCUuXC

[38] Yinshuang Xu, Jiahui Lei, and Kostas Daniilidis. SE(3)-equivariant reconstruction from light field. arXiv preprint arXiv:2212.14871, 2022.

[39] Kevin Zakka, Andy Zeng, Johnny Lee, and Shuran Song. Form2Fit: Learning shape priors for generalizable assembly from disassembly, 2020.

[40] Andy Zeng, Pete Florence, Jonathan Tompson, Stefan Welker, Jonathan Chien, Maria Attarian, Travis Armstrong, Ivan Krasin, Dan Duong, Vikas Sindhwani, et al. Transporter networks: Rearranging the visual world for robotic manipulation. In Conference on Robot Learning, pages 726-747. PMLR, 2021.

[41] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su, Robin Walters, and Robert Platt. Sample efficient grasp learning using equivariant models. Proceedings of Robotics: Science and Systems (RSS), 2022.

## Appendix

## A. Experimental Results

This section explains the details of the experiments conducted to compare EDFs with prior methods. The mug-hanging and bowl/bottle pick-and-place tasks were employed for the comparison.
The models were trained with ten demonstrations for each task, in which the cup, bowl, and bottle were positioned upright as shown in Fig. 6. For evaluation, the models were given various scenes with an unseen instance in a random posture and with various distracting objects nearby, as shown in Fig. 7.

First, Table II compares EDFs with the state-of-the-art end-to-end visual manipulation method, Transporter Networks [40]. Specifically, we use the ${SE}\left( 3\right)$-extended version of the original Transporter Networks (SE(3)-TNs) proposed in [40], where the additional three degrees of freedom (height, roll, pitch) are directly regressed. Therefore, despite their name, ${SE}\left( 3\right)$-TNs are ${SE}\left( 2\right)$-equivariant methods. We compare the methods in four scenarios: 1) the target object is an unseen instance, 2) the target instance is positioned in a random orientation, 3) the target instance is surrounded by various unseen distracting objects, and 4) all three unseen conditions are combined.

As can be seen in Table II, EDFs significantly outperform Transporter Networks in all four unseen scenarios. In particular, Transporter Networks completely fail when the target object is provided in previously unseen poses (Scenario 2), due to their lack of spatial ${SE}\left( 3\right)$-equivariance. For example, as shown in Fig. 8-A, Transporter Networks fail to pick the target instance when it is positioned in an unseen pose; they attempt to grasp the instance as if it were positioned upright, as it was during training. On the other hand, EDFs successfully infer appropriate end-effector poses in all of the cases, evidencing the importance of ${SE}\left( 3\right)$ bi-equivariant modeling.

![019640ff-be93-77a0-9c79-ad822cf5c345_6_129_152_765_579_0.jpg](images/019640ff-be93-77a0-9c79-ad822cf5c345_6_129_152_765_579_0.jpg)

Fig. 7: The scenarios that are given to evaluate the models. New instances are given that were not seen during training, and they are positioned in random postures. In addition, there are several distracting objects around the target instance. Reproduced with the authors' permission [20].

Next, we perform another experiment to validate the importance of steerable representations. For the comparison, we use an ablated model without steerable representations, which is analogous to [23, 24, 3]. The results are summarized in Table III. The ablated models utilize only the type-0 descriptors, which are invariant to rotations. Therefore, as illustrated in Fig. 8-B, the ablated methods struggle to correctly infer the orientations of the target poses. In contrast, EDFs utilize higher-type descriptors and hence are capable of accurately inferring the target poses. Through this experiment, we demonstrate that steerable representations are crucial for improving the orientational accuracy and generalizability of inferred pick-and-place poses.

Lastly, we examine the robustness of EDFs under significant multimodality in the demonstrations. We test the performance of EDFs trained with three different demonstration sets for the mug-hanging task: 1) unimodal, low-variance demonstrations (only picking a specific point on the mug), 2) diverse but consistent demonstrations (multimodal, but always picking by the rim of the mug), and 3) diverse and inconsistent demonstrations (multimodal, picking the mug by either the rim or the handle). The results are summarized in Table IV.
Comparing the results for training demonstration sets 1 (unimodal) and 2 (multimodal and consistent), we observe that EDFs are robust to multimodality in the demonstrations. Indeed, the experimental results suggest that EDFs actually benefit from the diversity of multimodal demonstrations. This can be attributed to the nature of generative models, which are flexible enough to leverage diverse pick-and-place strategies. Moreover, this generative nature makes EDFs tolerant of highly inconsistent demonstrations. As can be seen in the results for demonstration set 3 (multimodal and inconsistent), EDFs are robust to inconsistency in the demonstrations.

![019640ff-be93-77a0-9c79-ad822cf5c345_6_920_156_739_369_0.jpg](images/019640ff-be93-77a0-9c79-ad822cf5c345_6_920_156_739_369_0.jpg)

Fig. 8: A) Transporter Networks fail to pick the target instance when it is posed in an unseen posture, due to their lack of ${SE}\left( 3\right)$-equivariance. B) NDF-like models, which only use type-0 descriptors, fail to place the cup on the hanger due to their lack of sensitivity to the orientation of the target instance. Reproduced with the authors' permission [20].

Through these various experiments, we reveal the importance of the four criteria in designing equivariant methods for end-to-end visual robotic manipulation. In addition, by comparing EDFs with different models in diverse scenarios, we show EDFs' robustness and generalizability. Further experimental results and explanations can be found in [20].

TABLE II: Pick-and-place success rates in various out-of-distribution settings. Reproduced with authors' permission [20].

| | Mug | | | Bowl | | | Bottle | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Pick | Place | Total | Pick | Place | Total | Pick | Place | Total |
| **Unseen Instances** | | | | | | | | | |
| ${SE}\left( 3\right)$-TNs [40] | 1.00 | 0.36 | 0.36 | 0.76 | 1.00 | 0.76 | 0.20 | 1.00 | 0.20 |
| EDFs (Ours) | 1.00 | 0.97 | 0.97 | 0.98 | 1.00 | 0.98 | 1.00 | 1.00 | 1.00 |
| **Unseen Poses** | | | | | | | | | |
| ${SE}\left( 3\right)$-TNs [40] | 0.00 | N/A | 0.00 | 0.00 | N/A | 0.00 | 0.00 | N/A | 0.00 |
| EDFs (Ours) | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.95 | 1.00 | 0.95 |
| **Unseen Distracting Objects** | | | | | | | | | |
| ${SE}\left( 3\right)$-TNs [40] | 1.00 | 0.63 | 0.63 | 1.00 | 1.00 | 1.00 | 0.96 | 0.92 | 0.88 |
| EDFs (Ours) | 1.00 | 0.98 | 0.98 | 1.00 | 1.00 | 1.00 | 0.99 | 1.00 | 0.99 |
| **Unseen Instances, Arbitrary Poses & Distracting Objects** | | | | | | | | | |
| ${SE}\left( 3\right)$-TNs [40] | 0.25 | 0.04 | 0.01 | 0.09 | 1.00 | 0.09 | 0.26 | 0.88 | 0.23 |
| EDFs (Ours) | 1.00 | 0.95 | 0.95 | 0.95 | 1.00 | 0.95 | 0.95 | 1.00 | 0.95 |

TABLE III: Success rate and inference time of the ablated model and EDFs. All evaluations are done in the unseen instances, poses & distracting objects setting. Reproduced with authors' permission [20].
| Descriptor Type | Mug | | | Bowl | | | Bottle | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Pick | Place | Total | Pick | Place | Total | Pick | Place | Total |
| **NDF-like (Type-0 Only)** | | | | | | | | | |
| Inference Time | 5.7s | 8.6s | 14.3s | 6.1s | 9.9s | 16.0s | 5.8s | 17.3s | 23.0s |
| Success Rate | 0.84 | 0.77 | 0.65 | 0.60 | 0.95 | 0.57 | 0.66 | 0.95 | 0.63 |
| **EDFs (Type-0 ~ 3)** | | | | | | | | | |
| Inference Time | 5.1s | 8.3s | 13.4s | 5.2s | 10.4s | 15.6s | 5.2s | 11.5s | 16.7s |
| Success Rate | 1.00 | 0.95 | 0.95 | 0.95 | 1.00 | 0.95 | 0.95 | 1.00 | 0.95 |

TABLE IV: Success rate of EDFs for the mug-hanging task with different demonstrations. Reproduced with authors' permission [20].
| Setup | Low Var. & Unimodal Grasps | | | Diverse and Consistent Grasps (Rim Only) | | | Diverse and Inconsistent Grasps (Handle & Rim) | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Pick | Place | Total | Pick | Place | Total | Pick | Place | Total |
| Unseen Poses (P) | 1.00 | 0.96 | 0.96 | 1.00 | 1.00 | 1.00 | 1.00 | 0.99 | 0.99 |
| Unseen Instances (I) | 0.99 | 0.90 | 0.89 | 1.00 | 0.97 | 0.97 | 1.00 | 0.92 | 0.92 |
| Unseen Distractors (D) | 1.00 | 1.00 | 1.00 | 1.00 | 0.98 | 0.98 | 0.96 | 0.99 | 0.95 |
| Unseen P+I+D | 0.99 | 0.83 | 0.82 | 1.00 | 0.95 | 0.95 | 0.90 | 0.89 | 0.80 |

diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/heTTJfkTQC/Initial_manuscript_tex/Initial_manuscript.tex b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/heTTJfkTQC/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..987f986af95a1b0037f57e017baae00b6b4f9cf7
--- /dev/null
+++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/heTTJfkTQC/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,223 @@

§ ROBOTIC MANIPULATION LEARNING WITH EQUIVARIANT DESCRIPTOR FIELDS: GENERATIVE MODELING, BI-EQUIVARIANCE, STEERABILITY, AND LOCALITY

Jiwoo Kim*†, Hyunwoo Ryu*†, Jongeun Choi§‡, Joohwan Seo¶, Nikhil Prakash¶, Ruolin Li¶, R. Horowitz¶

† School of Electrical and Electronic Engineering, ‡ Department of Artificial Intelligence,

§ School of Mechanical Engineering, Yonsei University, Seoul, Republic of Korea

Emails: {nfsshift9801, tomato1mule, jongeunchoi}@yonsei.ac.kr

¶ Department of Mechanical Engineering, University of California, Berkeley, CA, USA

Emails: {joohwan_seo, nikhilps, ruolin_li, horowitz}@berkeley.edu

*Equal Contribution

Abstract-Conventional end-to-end visual robotic manipulation learning methods suffer from data inefficiency and limited generalization. To address these challenges, we introduce Equivariant Descriptor Fields (EDFs), a novel approach enabling end-to-end visual robotic manipulation learning with ${SE}\left( 3\right)$-equivariance. In particular, EDFs leverage the power of generative modeling, bi-equivariance, steerable representations, and locality in order to achieve effective ${SE}\left( 3\right)$-equivariant manipulation learning. Remarkably, EDFs can be trained with just a few (5 to 10) expert demonstrations, without the need for prior knowledge. In this paper, we discuss how EDFs incorporate these four key concepts as compared to recent works on equivariant robotic manipulation. Through extensive experiments on 6-DoF robotic manipulation tasks, we demonstrate the impressive generalizability and sample efficiency of EDFs.

§ I. INTRODUCTION

Recently, equivariant methods have gained notable attention due to their robustness to transformations, data efficiency, and generalizability. Incorporating equivariance has shown promising results in various fields, including protein structure [15, 11], molecules [12, 4], 3D object segmentation [17, 6], shape reconstruction [1, 2], and reinforcement learning [27, 18, 31].

For robotic manipulation tasks, the need for numerous demonstrations [8, 14, 7, 39] and rollouts [16] has been a critical weakness. However, recent research reveals that incorporating equivariance can improve data efficiency and generalizability. Planar roto-translation equivariance, also known as ${SE}\left( 2\right)$-equivariance, has been used to improve the efficiency of behavior cloning [40, 13, 21] and reinforcement learning methods [30, 28, 29, 41] for planar robotic manipulation tasks. For highly spatial manipulation tasks, full roto-translation equivariance, or ${SE}\left( 3\right)$-equivariance, is required. Neural Descriptor Fields (NDFs) [23] and their variants [24, 3] have leveraged this property. While these methods show remarkable data efficiency and generalizability, they rely on pre-training and object segmentation pipelines.

To this end, we introduce the Equivariant Descriptor Fields (EDFs) model [20], an end-to-end trainable method for ${SE}\left( 3\right)$-equivariant manipulation learning. EDFs can learn from only a few (5 to 10) task demonstrations without requiring any prior knowledge or object segmentation.

§ II. PRELIMINARIES: REPRESENTATION THEORY

A map $D$ from a group $\mathcal{G}$ to the group of invertible matrices ${GL}\left( N\right) \subset {\mathbb{R}}^{N \times N}$ is called a representation if it satisfies $D\left( g\right) D\left( h\right) = D\left( {gh}\right)$ for every $g,h \in \mathcal{G}$. In the representation theory of the ${SO}\left( 3\right)$ group, a representation $D\left( R\right)$ for $R \in {SO}\left( 3\right)$ can be expressed as an orthogonal block-diagonalized direct sum of real Wigner D-matrices ${D}_{l}\left( R\right) \in {\mathbb{R}}^{\left( {{2l} + 1}\right) \times \left( {{2l} + 1}\right) }$ of degree $l \in \{ 0,1,2,\ldots \}$. Wigner D-matrices are the irreducible representations of the ${SO}\left( 3\right)$ group. A type-$l$ vector is a $\left( {{2l} + 1}\right)$-dimensional vector that is transformed by ${D}_{l}\left( R\right)$ under a rotation $R$. In particular, type-0 vectors are invariant to rotations (i.e., scalars), such that ${D}_{0}\left( R\right) = I$. On the other hand, type-1 vectors are rotated according to the 3D rotation matrices, that is, ${D}_{1}\left( R\right) = R$.

Let $\mathcal{O}$ be the set of all possible colored point clouds. A point cloud is given by $O = \left\{ {\left( {{x}_{i},{c}_{i}}\right) : i \in \mathcal{I}}\right\}$, where ${x}_{i} \in {\mathbb{R}}^{3}$ and ${c}_{i} \in {\mathbb{R}}^{3}$ are point $i$'s position and color. A type-$l$ vector field $f : {\mathbb{R}}^{3} \times \mathcal{O} \rightarrow {\mathbb{R}}^{{2l} + 1}$ generated by $O \in \mathcal{O}$ is ${SE}\left( 3\right)$-equivariant if ${D}_{l}\left( R\right) f\left( {x \mid O}\right) = f\left( {{gx} \mid g \cdot O}\right)$ for all $g = \left( {p,R}\right) \in {SE}\left( 3\right)$, $x \in {\mathbb{R}}^{3}$, and $O \in \mathcal{O}$, where $g \cdot O = \left\{ {\left( {g{x}_{i},{c}_{i}}\right) : i \in \mathcal{I}}\right\}$.

§ III. EQUIVARIANT DESCRIPTOR FIELDS: THE FOUR KEY MODEL PROPERTIES

Equivariant Descriptor Fields (EDFs) [20] are ${SE}\left( 3\right)$-equivariant models for end-to-end visual robotic manipulation learning. Different from previous ${SE}\left( 3\right)$-equivariant methods, EDFs are capable of learning manipulation tasks from only a few demonstrations without requiring any prior knowledge, such as pre-training or object segmentation. In what follows, we delve into EDFs and other models, focusing on the four key model properties, viz., generative modeling, bi-equivariance, steerable representations, and locality (see Table I), in order to gain valuable insights and illustrate the significance of EDFs.

TABLE I: Comparison of recently proposed equivariant methods for robotic manipulation learning. Left Equiv. and Right Equiv. together constitute bi-equivariance.

| Method | Left Equiv. | Right Equiv. | Locality | Steerable Representations | Generative Modeling | End-to-end Training |
| --- | --- | --- | --- | --- | --- | --- |
| Transporter Networks [40] | ${SE}\left( 2\right)$ | Translation | ○ | Invariant | ✘ | ○ |
| Equivariant Transporter Networks [13] | ${SE}\left( 2\right)$ | ${SE}\left( 2\right)$ | ○ | Equivariant | ✘ | ○ |
| Equivariant RL (SAC/DQN) [28, 29, 30] | ${SE}\left( 2\right)$ | ${\mathbb{Z}}_{2}$ | ○ | Equivariant | ✘ | ○ |
| NDFs [23] | ${SE}\left( 3\right)$ | ✘ | ✘ | Invariant | ✘ | ✘ |
| L-NDFs [3] | ${SE}\left( 3\right)$ | ✘ | ○ | Invariant | ✘ | ✘ |
| R-NDFs [24] | ${SE}\left( 3\right)$ | ${SE}\left( 3\right)$ | ✘ | Invariant | ✘ | ✘ |
| EDFs [20] | ${SE}\left( 3\right)$ | ${SE}\left( 3\right)$ | ○ | Equivariant | ○ | ○ |

§ A. GENERATIVE MODELING

In practice, expert demonstration policies for robotic manipulation tasks are rarely unimodal. To illustrate this, consider a mug-picking task. The human expert may occasionally choose to grasp the mug by the rim and at other times by the handle. To properly learn such multimodalities, generative modeling of the policy distribution is required [19] (see Fig. 1). As shown in Fig. 1, naively regressing or discretizing the policy results in suboptimal policy distributions. On the other hand, generative models such as energy-based models (EBMs) and diffusion models capture the behavior more accurately. EDFs utilize the EBM approach to model the policy distribution, enabling both end-to-end training and sampling. This is in contrast to the energy minimization method of Simeonov et al. [23], which requires frozen pre-trained networks.

The EDFs' energy-based policy, conditioned on the point cloud observations of the scene ${O}^{\text{scene}}$ and the grasped object ${O}^{\text{grasp}}$, is defined on the ${SE}\left( 3\right)$ manifold as

$$
P\left( {g \mid {O}^{\text{scene}},{O}^{\text{grasp}}}\right) = \frac{\exp \left\lbrack {-E\left( {g \mid {O}^{\text{scene}},{O}^{\text{grasp}}}\right) }\right\rbrack }{Z} \tag{1}
$$

$$
\text{where } Z = {\int }_{{SE}\left( 3\right) }{dg}\exp \left\lbrack {-E\left( {g \mid {O}^{\text{scene}},{O}^{\text{grasp}}}\right) }\right\rbrack,
$$

and $E$ is an energy function which will be defined later.

§ B. BI-EQUIVARIANCE

To successfully perform object picking tasks, it is crucial for the end-effector pose to be equivariant to changes in the initial pose of the target object within the scene. To illustrate this scene equivariance, consider a task in which the end-effector pose ${g}_{WE} \in {SE}\left( 3\right)$ in the world frame $W$ should be inferred from the observation of the scene ${O}^{\text{scene}}$. Here, ${g}_{WE} \mathrel{\text{:=}} \left( {{x}_{WE},{R}_{WE}}\right) \in {SE}\left( 3\right)$ denotes the configuration of the end-effector frame $E$ relative to $W$. Now, consider a new world frame ${W}^{\prime }$. The reference frame change $\Delta {g}_{W} = {g}_{{W}^{\prime }W} \in {SE}\left( 3\right)$ induces the following transformations of the scene observation and end-effector pose:

$$
{O}_{{W}^{\prime }}^{\text{scene}} = \Delta {g}_{W} \cdot {O}_{W}^{\text{scene}} = {g}_{{W}^{\prime }W} \cdot {O}_{W}^{\text{scene}}
$$

$$
{g}_{{W}^{\prime }E} = \Delta {g}_{W}{g}_{WE} = {g}_{{W}^{\prime }W}{g}_{WE}.
$$

The corresponding equivariant probabilistic policy${}^{1}$ $P$ against ${\Delta g}$ must then satisfy

$$
P\left( {\Delta {g}_{W}{g}_{WE} \mid \Delta {g}_{W} \cdot {O}_{W}^{\text{scene}}}\right) = P\left( {{g}_{{W}^{\prime }E} \mid {O}_{{W}^{\prime }}^{\text{scene}}}\right).
$$

Since the perturbation $\Delta {g}_{W}$ appears on the left side of $g$, we refer to this scene equivariance as left equivariance. We illustrate left equivariance in Fig. 2.

<graphics>

Fig. 1: Comparison of behavior cloning methods: Generative models (EBM and Diffusion) accurately capture multimodal behaviors of the oracle policy $p\left( {a \mid o}\right)$ compared to regression (MSE) or discretized methods. Reproduced with authors' permission [19].

However, as it turns out, left equivariance alone is insufficient to successfully perform object placing tasks. Unlike picking tasks, which only require observing the scene, placing tasks also require observing the grasp, which adds another layer of complexity to the problem. Furthermore, the grasp pose inferred by a pick policy learned from a few expert demonstrations may not be optimal: the grasped object may be in a pose that has never been shown in the expert demonstrations. Therefore, object placing tasks require another type of equivariance, namely grasp equivariance. Consider the same object being grasped in two different manners, with the end-effector frames respectively denoted $E$ and ${E}^{\prime }$. Let ${O}_{E}^{\text{grasp}}$ be the observation of the object grasped by an end-effector with frame $E$. We assume that a frame $B$ is attached to the grasped object such that ${g}_{EB}$ is the pose of $B$ relative to frame $E$. A transformation of the grasped object pose due to a change $\Delta {g}_{E} = {g}_{{E}^{\prime }E}$ between end-effector frames $E$ and ${E}^{\prime }$, as shown in Fig. 3, induces the transformed observation relative to frame ${E}^{\prime }$:

$$
{O}_{{E}^{\prime }}^{\text{grasp}} = \Delta {g}_{E} \cdot {O}_{E}^{\text{grasp}} = {g}_{{E}^{\prime }E} \cdot {O}_{E}^{\text{grasp}}.
$$

To keep the relative pose between the scene and the grasped object invariant for equivariance of the probabilistic policy, the end-effector pose must be transformed by $\Delta {g}_{E}$ such that

$$
{g}_{WB} = {g}_{WE}{g}_{EB} = {g}_{W{E}^{\prime }}{g}_{{E}^{\prime }B} = {g}_{W{E}^{\prime }}{g}_{{E}^{\prime }E}{g}_{EB} = {g}_{W{E}^{\prime }}\Delta {g}_{E}{g}_{EB}
$$

$$
\Rightarrow {g}_{W{E}^{\prime }} = {g}_{WE}\Delta {g}_{E}^{-1}.
$$

A probabilistic policy $P$ under such an equivariance requires

$$
P\left( {{g}_{WE}\Delta {g}_{E}^{-1} \mid {O}^{\text{scene}},\Delta {g}_{E} \cdot {O}_{E}^{\text{grasp}}}\right) = P\left( {{g}_{W{E}^{\prime }} \mid {O}^{\text{scene}},{O}_{{E}^{\prime }}^{\text{grasp}}}\right).
$$

Notice that such grasp equivariance is a right equivariance, since the inverse of the perturbation, $\Delta {g}_{E}^{-1}$, appears on the right side of $g$. We illustrate right equivariance in Fig. 3. Combining the left and right equivariances, we finally define bi-equivariance [20] as follows:

$$
P\left( {g \mid {O}^{\text{scene}},{O}^{\text{grasp}}}\right) = P\left( {\Delta {g}_{W}g \mid \Delta {g}_{W} \cdot {O}^{\text{scene}},{O}^{\text{grasp}}}\right) \tag{2}
$$

$$
= P\left( {{g\Delta }{g}_{E}^{-1} \mid {O}^{\text{scene}},\Delta {g}_{E} \cdot {O}^{\text{grasp}}}\right).
$$

${}^{1}$ The equivariant probabilistic policy implies invariance of the conditional probabilities when the state and action are equivariantly transformed.

<graphics>

Fig. 2: The left equivariance illustrates that the target pose is equivariant to a transformation of the scene; accordingly, the perturbation ${\Delta g}$ is on the left of $g$.

Among ${SE}\left( 2\right)$-equivariant methods, Transporter Networks [40] and recently proposed equivariant reinforcement learning methods [28, 29, 30] are left equivariant, but not fully right equivariant (only translation equivariant). On the other hand, Equivariant Transporter Networks [13] incorporate full ${SE}\left( 2\right)$ bi-equivariance, thereby achieving a significant increase in data efficiency over Transporter Networks. Among ${SE}\left( 3\right)$-equivariant methods, Neural Descriptor Fields (NDFs) [23] and Local Neural Descriptor Fields (L-NDFs) [3] are uni-equivariant methods. Since NDFs and L-NDFs assume a fixed placement target pose, bi-equivariance is not required. However, to solve more general tasks such as object rearrangement, bi-equivariance becomes essential. Relational Neural Descriptor Fields (R-NDFs) [24] are a bi-equivariant method for object rearrangement tasks. However, pre-trained NDFs and a human-annotated object keypoint are required to equivariantly align query points for training.

On the other hand, EDFs [20] directly infer query points using an ${SE}\left( 3\right)$-equivariant query density model that can be trained end-to-end. EDFs achieve bi-equivariance for the policy in (1) with a bi-equivariant energy function $E\left( {g \mid {O}^{\text{scene}},{O}^{\text{grasp}}}\right)$. The specific design of this energy function will be introduced subsequently.

§ C. STEERABLE REPRESENTATION

To achieve robust equivariant manipulation, a model must utilize symmetric feature representations of the observations. Steerable representations are proficient at representing these features due to their orientation sensitivity [33] (see Fig. 4). Moreover, owing to their continuous expressions, steerable representations capture more rigorous information than discretization methods and demonstrate better precision, as evidenced by [1].

Importantly, compared to rotation-invariant features, steerable features are superior at encoding the orientations of local geometries. To encode orientation information using rotation-invariant features, the features must be spatially distributed, breaking locality. For example, the color vector (red, green, blue) is such a rotation-invariant feature: to determine a rigid-body orientation, at least three non-collinear points of different colors are required. Conversely, one can represent orientation with only a single point using rotation-equivariant, or steerable, features. Thus, orientation information can be localized into a single point, better capturing the local geometry. This makes the learned features more generalizable and less sensitive to disturbances.
<graphics>

Fig. 3: The right equivariance implies that the target pose is equivariant to the grasp state; accordingly, the perturbation ${\Delta g}$ is located on the right of $g$.

Transporter Networks [40] and the Neural Descriptor Fields variants [23, 24, 3] utilize rotation-invariant feature fields to obtain equivariance (e.g., the feature maps of CNNs can be thought of as 2-dimensional feature fields). Alternatively, Huang et al. [13] and Wang et al. [30, 28, 29] utilize the steerable features of the ${C}_{n}$ group (the discretized ${SO}\left( 2\right)$ group), thereby significantly improving data efficiency.

An EDF $\varphi \left( {x \mid O}\right)$ is defined as the concatenation of $N$ ${SO}\left( 3\right)$-steerable vector fields that are ${SE}\left( 3\right)$-equivariant:

$$
\varphi \left( {x \mid O}\right) = {\bigoplus }_{n = 1}^{N}{\varphi }^{\left( n\right) }\left( {x \mid O}\right)
$$

where ${\varphi }^{\left( n\right) }\left( {x \mid O}\right) : {\mathbb{R}}^{3} \times \mathcal{O} \rightarrow {\mathbb{R}}^{2{l}_{n} + 1}$ is an ${SE}\left( 3\right)$-equivariant type-${l}_{n}$ vector field generated by $O$. Therefore, $\varphi \left( {x \mid O}\right)$ is transformed by $g = \left( {p,R}\right) \in {SE}\left( 3\right)$ as

$$
\varphi \left( {{gx} \mid g \cdot O}\right) = D\left( R\right) \varphi \left( {x \mid O}\right) = \left( \begin{matrix} {D}_{{l}_{1}}\left( R\right) & \cdots & \varnothing \\ \vdots & \ddots & \vdots \\ \varnothing & \cdots & {D}_{{l}_{N}}\left( R\right) \end{matrix}\right) \varphi \left( {x \mid O}\right)
$$

where $D\left( R\right)$ is a block-diagonal matrix of Wigner D-matrices.

The ${SE}\left( 3\right)$ bi-equivariant energy function for the EBM in Eq. (1) can then be constructed with EDFs as

$$
E\left( {g \mid {O}^{\text{scene}},{O}^{\text{grasp}}}\right) = {\int }_{{\mathbb{R}}^{3}}{d}^{3}x\,\rho \left( {x \mid {O}^{\text{grasp}}}\right) {\begin{Vmatrix}\varphi \left( gx \mid {O}^{\text{scene}}\right) - D\left( R\right) \psi \left( x \mid {O}^{\text{grasp}}\right) \end{Vmatrix}}^{2}
$$

where $\varphi \left( {x \mid {O}^{\text{scene}}}\right)$ is the key EDF, $\psi \left( {x \mid {O}^{\text{grasp}}}\right)$ is the query EDF, and $\rho \left( {x \mid {O}^{\text{grasp}}}\right)$ is the query density, all of which are ${SE}\left( 3\right)$-equivariant, learnable neural fields.
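In practice, this integral can be approximated by a Monte-Carlo sum over query points. The following toy numpy sketch (ours, not the released EDF implementation) illustrates the computation; `key_field`, `query_field`, the query points, and the uniform density weights are hypothetical stand-ins for the learned neural fields, with one type-0 and one type-1 channel so that $D(R)$ is a simple block diagonal:

```python
# Toy Monte-Carlo estimate of the bi-equivariant energy E(g | O_scene, O_grasp).
import numpy as np

def D(R):
    """Block-diagonal D(R) for one type-0 and one type-1 channel."""
    out = np.zeros((4, 4))
    out[0, 0] = 1.0        # D_0(R) = 1
    out[1:, 1:] = R        # D_1(R) = R
    return out

def key_field(x):          # stand-in for phi(x | O_scene): R^3 -> R^4
    return np.concatenate([[np.sin(x.sum())], np.tanh(x)])

def query_field(x):        # stand-in for psi(x | O_grasp): R^3 -> R^4
    return np.concatenate([[np.cos(x.sum())], 0.5 * x])

def energy(p, R, xs, rho):
    """Sum_i rho_i * || phi(g x_i) - D(R) psi(x_i) ||^2 over query points xs."""
    residuals = [key_field(R @ x + p) - D(R) @ query_field(x) for x in xs]
    return float(sum(w * np.sum(r ** 2) for w, r in zip(rho, residuals)))

rng = np.random.default_rng(0)
xs = rng.normal(size=(64, 3))        # sampled query points
rho = np.full(64, 1.0 / 64)          # query density weights
E = energy(np.zeros(3), np.eye(3), xs, rho)
unnormalized_prob = np.exp(-E)       # Boltzmann weight of this pose g, Eq. (1)
```

The resulting Boltzmann weight $\exp(-E)$ is the unnormalized probability of the pose $g$ in Eq. (1), which the MCMC sampler evaluates repeatedly when drawing pick-and-place poses.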
§ D. LOCALITY

For a robotic manipulation model to be robust, it must be able to pick and place objects in previously unseen poses. If the model can learn local geometric structures that are shared across different objects, its generalizability greatly increases. For example, if a model was trained to pick a mug by the rim, the similarities in local geometric features can be utilized to grasp other objects by the rim. Consequently, locality is critical for generalizability and data efficiency. Recent studies in various fields such as robotics [3], point cloud segmentation [6], and shape reconstruction [1] highlight the importance of incorporating locality into equivariant methods.

<graphics>

Fig. 4: Visualization of type-$l$ features $\left( {l = 0,1,2,\ldots }\right)$. Higher-type features are sensitive to the orientations of local geometries such as planes and corners. Reproduced with the authors' permission [1].

Another benefit of imposing locality on equivariant methods is that the target object does not need to be segmented from the background. For unsegmented observations, only equivariance with respect to the target object is desired, and equivariance with respect to the background must be suppressed. We name this property local equivariance, in contrast to global equivariance (see Fig. 5). However, naively applying Eq. (2) can only guarantee global equivariance. Therefore, special care must be taken in designing methods that respect the locality of the tasks so as to obtain local equivariance.

For example, Transporter Networks and their variants [40, 21, 13] naturally exploit the locality of convolutional neural networks. Therefore, Transporter Networks and their variants can be used without object segmentation pipelines or any other object-centric assumptions. On the other hand, NDFs [23] and R-NDFs [24] rely on centroid-subtraction methods to achieve translational equivariance. Due to the highly non-local nature of centroid subtraction, these methods require the target object to be segmented from the background.

EDFs utilize a Tensor Field Network (TFN) [25] model for the final layer and ${SE}\left( 3\right)$-transformers [10] in the other layers. These methods rely on spatial convolutions, so locality is easily obtained by using convolution kernels with finite support. This is in contrast to the Vector Neurons [5] method that was used as the backbone network in [23, 24]. Further detailed explanations, mathematical proofs, and model architectures can be found in [20].
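As a schematic illustration of this design choice, consider the following numpy sketch (our own; the kernel shape and cutoff radius are assumptions, not the actual TFN/SE(3)-transformer kernels of [20]). Because the kernel has compact support, distant background points cannot affect the descriptor of the target object, which is the essence of local equivariance:

```python
# Finite-support aggregation: background clutter outside the cutoff radius
# has exactly zero influence on the feature computed at the target object.
import numpy as np

def local_feature(x, points, feats, cutoff=0.1):
    """Aggregate point features within `cutoff` of x using a compact radial kernel."""
    d = np.linalg.norm(points - x, axis=1)
    w = np.where(d < cutoff, (1.0 - d / cutoff) ** 2, 0.0)  # compact support
    return (w[:, None] * feats).sum(axis=0)

rng = np.random.default_rng(0)
target = rng.normal(scale=0.02, size=(100, 3))       # target object points
background = rng.normal(loc=5.0, size=(500, 3))      # distant clutter
points = np.vstack([target, background])
feats = rng.normal(size=(600, 8))

x = target.mean(axis=0)
f_with_bg = local_feature(x, points, feats)
f_without_bg = local_feature(x, target, feats[:100])
assert np.allclose(f_with_bg, f_without_bg)  # clutter lies outside the support
```

Contrast this with centroid subtraction: adding the background points would shift the centroid and thus change every descriptor, which is why centroid-based methods need a segmentation step.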
§ IV. EXPERIMENTAL RESULTS

To compare the equivariant generalization performance of EDFs with other methods, experiments were conducted on a mug-hanging task and bowl/bottle pick-and-place tasks. The objective is to pick a mug or a bowl/bottle and place it on a randomly posed hanger or plate. We evaluate EDFs in multiple scenarios, which include unseen poses, unseen distracting objects, and unseen instances in randomized poses.

<graphics>

Fig. 5: The difference between global equivariance and local equivariance. Global equivariance represents a translation of the whole scene, while local equivariance denotes a translation of the target object only.

First, we compare EDFs with ${SE}\left( 3\right)$ Transporter Networks [40], which are extensions of the original Transporter Networks that regress the additional degrees of freedom (height, roll, pitch). Table II of Appendix A shows that EDFs outperform Transporter Networks in all three tasks. The comparison shows that EDFs are more robust than Transporter Networks, illustrating the significance of ${SE}\left( 3\right)$-equivariance for spatial manipulation tasks.

In comparing EDFs to NDFs [23], it was necessary to account for some of NDFs' limitations, such as the fact that NDFs require segmentation and a fixed target placement pose. Thus, we compare the EDF model to an NDF-like model that uses only the type-0 descriptor vectors. As Table III of Appendix A shows, EDFs, which use higher-type descriptors, surpass the performance of the NDF-like model. Additional experimental descriptions and results can be found in Appendix A and [20].

§ V. CONCLUSION

We introduce EDFs and emphasize the importance of the following four properties for synthesizing effective equivariant robotic manipulation learning models: 1) generative modeling, 2) bi-equivariance, 3) steerable representations, and 4) locality. We show the effectiveness and generalization of EDFs in inferring target poses in spite of previously unseen instances, unseen poses, and distracting objects, using only a few demonstrations.

For future research, it could be beneficial to integrate ${SE}\left( 3\right)$-equivariant shape reconstruction and SLAM methods [38, 1, 2, 9] with EDFs to overcome incomplete and noisy point cloud observations. Expanding EDFs to trajectory-level problems is also an important direction. For kinematic and dynamic trajectory planning, one might consider incorporating guided diffusion methods [26] and the geometric impedance control framework [22], respectively. Lastly, to improve the speed of the MCMC sampling required for EDFs, techniques such as amortized sampling [36, 32] and cooperative learning [34, 35, 37] could be explored.
\ No newline at end of file

diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/n9sxj3TKWm8/Initial_manuscript_md/Initial_manuscript.md b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/n9sxj3TKWm8/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..f9827249664e24ddefde1d73fbd1471939913e9f
--- /dev/null
+++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/n9sxj3TKWm8/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,163 @@

# Morphological symmetries in robot learning

Daniel Ordonez-Apraez*, Mario Martin†‡, Antonio Agudo* and Francesc Moreno-Noguer*. *Institut de Robòtica i Informàtica Industrial, CSIC-UPC. †Barcelona Supercomputing Center (BSC). ‡Departament de Ciències de la Computació, Universitat Politècnica de Catalunya (UPC).

[dordonez, aagudo, fmoreno]@iri.upc.edu, mmartin@cs.upc.edu

Abstract-This work studies the impact of morphological symmetries in learning applications in robotics. Morphological symmetries are a predominant feature in both biological and robotic systems, arising from the presence of planes/axes of symmetry in the system's morphology. This results in a harmonious duplication and distribution of body parts (e.g., humans' sagittal/left-right symmetry). Morphological symmetries become a significant learning prior as they extend to symmetries in the system's dynamics, optimal control policies, and all proprioceptive and exteroceptive measurements related to the evolution of the system's dynamics. Exploiting these symmetries in learning applications offers several advantageous outcomes, such as the use of data augmentation to mitigate the cost and challenges of data collection, or the use of equivariant/invariant function approximation models (e.g., neural networks) to improve sample efficiency and generalization while reducing the number of trainable parameters. We provide an open-access repository${}^{1}$ reproducing our experiments and allowing for rapid prototyping in robot learning applications exploiting morphological symmetries.

## I. INTRODUCTION

Discrete Morphological Symmetries (DMSs) are ubiquitous in both biological and robotic systems. The vast majority of living and extinct animal species exhibit bilateral/sagittal reflection symmetry, where the right side of the body is approximately a reflection of the left side (see fig. 1-left). Similarly, a significant number of species exhibit radial symmetry, characterized by two or more morphological symmetry planes/axes (see fig. 1-center) [6].
These symmetries are a consequence of nature's tendency toward symmetric body parts and the harmonic duplication and distribution of limbs, a pattern perfected and exploited in the design of robotic systems.

Symmetries of the state of a dynamical system translate to symmetries of the system's dynamics and control [17]. Thus, DMSs imply the presence of symmetries in the dynamics and control of body motions, extending to symmetries in all proprioceptive and exteroceptive measurements related to the evolution of the system's dynamics (e.g., joint positions/velocities/torques, depth images, contact forces). Therefore, for systems with morphological symmetries, we can use data augmentation to mitigate the challenges of data collection in robotics, computer graphics, and computational biology. Roughly, this implies that for every minute of recorded data of a system with $n$ morphological symmetries, we can obtain an additional $n - 1$ minutes of recordings solely by considering the symmetric states of the recorded data. See the case of the robot Solo in fig. 1-center, for which we obtain 3 additional minutes of recording by considering the depicted 4-fold symmetries. Furthermore, we can exploit the symmetries of proprioceptive and exteroceptive data by imposing symmetry constraints in machine learning algorithms to boost sample efficiency and enhance generalization [17, 4, 12]. Consider the case of robot Solo in fig. 1-center/right. We desire to approximate the function $\mathbf{y} = f\left( \mathbf{x}\right)$, mapping points in an input space $\mathbf{x} \in \mathcal{X}$ (say, the state of our robot) to points in an output space $\mathbf{y} \in \mathcal{Y}$ (say, the binary contact state of the robot's feet). To achieve this, we use recorded data to train a function approximation model $\widehat{f}$ parameterized by $\phi$, i.e., $\mathbf{y} \approx \widehat{f}\left( {\mathbf{x};\mathbf{\phi }}\right)$. Because of the robot's morphological symmetry, the input and output spaces have symmetries, and our target function is subject to an equivariance constraint:

$$
g \cdot \mathbf{y} = f\left( {g \cdot \mathbf{x}}\right) \;\forall \;g \in \mathcal{G}, \tag{1}
$$

where $g$ represents a symmetry, $g \cdot \mathbf{x}$ and $g \cdot \mathbf{y}$ are the input and output points transformed by the symmetry (in our example, $g \cdot \mathbf{x}$ is the transformed robot state and $g \cdot \mathbf{y}$ the correspondingly transformed contact state), and $\mathcal{G}$ represents the set of symmetries of the robot, its symmetry group. In these scenarios, we should impose the same equivariance constraint (eq. (1)) of our target function on our model $\widehat{f}$, since by doing so we reduce the solution space of the optimization algorithm used to find the optimal $\widehat{f}$. In practice, imposing equivariance (or invariance) constraints reduces the number of parameters $\phi$ of the model, while empirically yielding benefits in sample efficiency and generalization [4, 12, 10].
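As a concrete illustration of eq. (1), the following minimal sketch (ours; the per-leg feature layout and the representation matrices are hypothetical, not the repository's API) augments one recorded sample with its sagittally reflected counterpart:

```python
# Symmetry-based data augmentation for a quadruped: a sagittal reflection g
# acts on the binary foot-contact vector y = (LF, RF, LH, RH) by swapping
# left/right legs, and on the feature vector x by a matching permutation.
import numpy as np

# Permutation representation of g on y: swap LF<->RF and LH<->RH.
P_y = np.array([[0, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

# Toy representation of g on x: per-leg feature blocks permuted the same way.
# (A real robot state also needs sign flips on lateral coordinates, omitted here.)
def act_on_x(x, block=3):
    blocks = x.reshape(4, block)
    return blocks[[1, 0, 3, 2]].reshape(-1)

x = np.arange(12, dtype=float)            # one recorded sample (12 features)
y = np.array([1.0, 0.0, 0.0, 1.0])        # its foot-contact labels
x_aug, y_aug = act_on_x(x), P_y @ y       # a "free" symmetric sample, eq. (1)
# Training on both (x, y) and (x_aug, y_aug) implements augmentation over G.
```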
Despite the potential benefits of exploiting symmetry and the ubiquitous presence of morphological symmetries in robotic/biological/virtual systems, this relevant inductive bias is frequently left unexploited in data-driven applications in robotics, computational biology, and computer graphics. We attribute the scarce adoption of these techniques to a missing theoretical framework that consolidates the concept of morphological symmetries, facilitating their study and identification, and to a missing practical framework enabling the efficient and convenient exploitation of symmetries in real-world data-driven applications.

The identification of morphological symmetries, and of how these extend to symmetries of proprioceptive and exteroceptive data, is currently a laborious and error-prone system-specific process, due to the lack of a clear theoretical framework. As a result, most recent works that exploit some morphological symmetry (e.g., [15, 1, 16] in computer graphics and [12, 9, 5, 3] in robotics/dynamical systems) have only been applied to simple systems and to the simplest morphological symmetry, reflection/sagittal symmetry (see fig. 1-left), with the exception of Finzi et al. [3]. Moreover, these works provide little guidance on how to apply these techniques to other systems, particularly those with more than a single morphological symmetry.

---

${}^{1}$ github.com/Danfoa/RobotEquivariantNN

---

Fig. 1: Left: Symmetric configurations of the bipedal robot Atlas (3D animation) illustrating its morphological symmetry described by the reflection group ${\mathcal{C}}_{2}$. The robot can imitate the reflection ${g}_{s}$ (hint: note the non-reflected text on the robot's chest). Middle: Top view of symmetric configurations of the quadruped robot Solo (3D animation) showcasing its morphological symmetries described by the Klein four-group ${\mathcal{K}}_{4}$. The robot can imitate two reflections $\left( {{g}_{s},{g}_{t}}\right)$ and a ${180}^{ \circ }$ rotation $\left( {g}_{r}\right)$ of space (hint: observe the unreflected/unrotated robot's heading direction and leg coloring). Symmetry transformations (arrows) affect the robot's configuration, as well as proprioceptive measurements (center-of-mass linear $\mathbf{l}$ and angular $\mathbf{k}$ momentum) and exteroceptive measurements (terrain elevation, external force ${f}_{1}$). Right: Diagram of a toy ${\mathcal{K}}_{4}$-equivariant neural network, processing the symmetric states $\mathbf{x}$ of robot Solo and outputting the symmetric binary foot contact states $\mathbf{y}$ (see section IV).

Our recent work [10] aims at increasing the adoption of morphological symmetry exploitation in robotics by presenting the theoretical and practical contributions${}^{1}$ that enable the study and exploitation of these symmetries in arbitrary dynamical systems with any number of symmetries. In this short paper, we summarize the most important facts about morphological symmetries in robotics and their implications for data-driven applications. For a rigorous and extended development, we refer the interested reader to [10].

## II. Properties of symmetric dynamical systems

In robotics, a symmetry $g$ is roughly defined as an energy-preserving transformation of the robot state $\left( {\mathbf{q},\dot{\mathbf{q}}}\right)$, defined by the system's generalized position coordinates $\mathbf{q} \in \mathrm{Q}$ and velocity coordinates $\dot{\mathbf{q}} \in {\mathrm{T}}_{\mathbf{q}}\mathrm{Q}$.
If a dynamical system has a group of symmetries $\mathcal{G}$, its dynamics (i.e., its equations of motion $\mathbf{M}\left( \mathbf{q}\right) \ddot{\mathbf{q}} = \mathbf{\tau }\left( {\mathbf{q},\dot{\mathbf{q}}}\right)$) are equivariant. That is:

$$
g \cdot \left\lbrack {\underset{\text{Inertial}}{\underbrace{\mathbf{M}\left( \mathbf{q}\right) \ddot{\mathbf{q}}}} - \underset{\text{Moving}}{\underbrace{\mathbf{\tau }\left( {\mathbf{q},\dot{\mathbf{q}}}\right) }}}\right\rbrack = \underset{\text{Inertial}}{\underbrace{\mathbf{M}\left( {g \cdot \mathbf{q}}\right) \, g \cdot \ddot{\mathbf{q}}}} - \underset{\text{Moving}}{\underbrace{\mathbf{\tau }\left( {g \cdot \mathbf{q},\, g \cdot \dot{\mathbf{q}}}\right) }} = \mathbf{0}
$$

$$
\forall g \in \mathcal{G},\;\mathbf{q} \in \mathrm{Q},\;\dot{\mathbf{q}} \in {\mathrm{T}}_{\mathbf{q}}\mathrm{Q}, \tag{2}
$$

denoting by $\mathbf{M}\left( \mathbf{q}\right) : \mathrm{Q} \rightarrow {\mathbb{R}}^{n \times n}$ the generalized mass matrix function and by $\mathbf{\tau }\left( {\mathbf{q},\dot{\mathbf{q}}}\right) : \mathrm{Q} \times {\mathrm{T}}_{\mathbf{q}}\mathrm{Q} \rightarrow {\mathbb{R}}^{n}$ the generalized moving forces at a given state $\left( {\mathbf{q},\dot{\mathbf{q}}}\right)$.

This property of symmetric dynamical systems, denoted dynamics $\mathcal{G}$-equivariance (eq. (2)), depends on both the generalized inertial and moving forces being independently equivariant, implying:

$$
\mathbf{M}\left( {g \cdot \mathbf{q}}\right) = g\,\mathbf{M}\left( \mathbf{q}\right) \,{g}^{-1}\; \land \;g \cdot \mathbf{\tau }\left( {\mathbf{q},\dot{\mathbf{q}}}\right) = \mathbf{\tau }\left( {g \cdot \mathbf{q},\, g \cdot \dot{\mathbf{q}}}\right)
$$

$$
\forall g \in \mathcal{G},\;\mathbf{q} \in \mathrm{Q},\;\dot{\mathbf{q}} \in {\mathrm{T}}_{\mathbf{q}}\mathrm{Q}. \tag{3}
$$

The equivariance of the inertial forces requires that the generalized mass matrix of the system be equivariant. This is the identifying property of symmetric dynamical systems. In practice, since the generalized mass matrix is well-defined for model-based systems, it can be used for the identification of a system's symmetries using eq. (3) (see [10] for the case of rigid-body dynamics). Furthermore, the equivariance of the generalized moving forces (which in practice usually incorporate control, constraint, and external forces) implies that dynamics $\mathcal{G}$-equivariance (eq. (2)) is upheld until a symmetry-breaking force violates the equivariance of $\mathbf{\tau }$.

To gain some intuition, consider as an example the bipedal robot Atlas, with symmetry group $\mathcal{G} = {\mathcal{C}}_{2} = \left\{ {e,{g}_{s}}\right\}$. Both robot states in fig. 1-left are symmetric states (related by the action ${g}_{s}$). Then, eq. (2) implies that any motion trajectory starting from the left robot state will be equivalent (up to transformation by ${g}_{s}$) to a motion trajectory starting from the right robot state, if and only if the moving forces driving both trajectories are equivalent (up to transformation by ${g}_{s}$), that is, if the control and external forces are ${\mathcal{C}}_{2}$-equivariant (eq. (3)). Note that we can perform a similar analysis for each symmetric state and action of systems with larger symmetry groups (e.g., Solo in fig. 1-center).

The aforementioned definition of symmetries as energy-preserving transformations of the system state is intentionally generic, imposing no restrictions on the nature of the state transformation, such as whether the transformed state is feasible or reachable. This allows us to consider feasible state transformations (such as robot translations and rotations${}^{2}$) along with unfeasible state transformations (such as a reflection of space) as symmetries of the system. Naturally, in robotics we are interested in studying and exploiting feasible symmetries alone. Therefore, we introduced the concept of a discrete morphological symmetry: the set of feasible symmetries of the system that imitate feasible and unfeasible symmetries.
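The identification test of eq. (3) can be run numerically. The following toy numpy sketch (our own example; the two-mass system is hypothetical and far simpler than rigid-body dynamics) accepts a candidate symmetry $g$ only if the generalized mass matrix is equivariant for randomly sampled configurations:

```python
# Numerical identification of symmetries via eq. (3): accept g only if
# M(g·q) = g M(q) g^{-1} holds for all sampled q. Here, swapping the two
# coordinates is a symmetry iff the two masses are equal.
import numpy as np

def M(q, m=(1.0, 1.0)):
    """Toy generalized mass matrix with a configuration-dependent coupling."""
    c = 0.1 * np.cos(q[0] - q[1])
    return np.array([[m[0], c], [c, m[1]]])

g = np.array([[0.0, 1.0], [1.0, 0.0]])  # candidate symmetry: swap coordinates

rng = np.random.default_rng(0)
ok = all(
    np.allclose(M(g @ q), g @ M(q) @ np.linalg.inv(g))
    for q in rng.normal(size=(100, 2))
)
print(ok)  # True for equal masses; with m=(1.0, 2.0) the test fails, as eq. (3) is violated
```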
## III. DISCRETE MORPHOLOGICAL SYMMETRIES (DMSs)

A dynamical system is said to possess a DMS if it can imitate the effect of a rotation, reflection, or translation in space (i.e., a Euclidean isometry) through a feasible discrete change in its configuration (see the formal definition in [10]). To gain intuition, we can analyze the simplest and most common DMS.

Reflection DMS: Although most floating-base dynamical systems are symmetric with respect to reflections of space (section II), these symmetries are infeasible due to the impossibility of executing reflections in the real world [11]. However, systems with sagittal symmetry (e.g., Atlas in fig. 1-left, or humans) can imitate the effect of a reflection with a feasible discrete change in their configuration, by rotating their body and modifying their limbs' pose. These systems share the same symmetry group, the reflection group $\mathcal{G} \equiv {\mathcal{C}}_{2}$.

Multiple DMSs: This property extends to floating-base systems having multiple DMSs, allowing them to imitate multiple distinct Euclidean isometries. Most frequently, systems can imitate a set of rotations and reflections, making $\mathcal{G}$ a cyclic ${\mathcal{C}}_{k}$ or dihedral ${\mathcal{D}}_{2k}$ group. See examples for ${\mathcal{C}}_{3}$ in [10], and for ${\mathcal{D}}_{4} \equiv {\mathcal{K}}_{4}$ in fig. 1-center.

Because each DMS is defined as a feasible transformation that imitates a system's symmetry $\bar{g}$ due to a Euclidean isometry, the group of DMSs $\mathcal{G}$ is isomorphic to a subset of the feasible and unfeasible symmetries of the dynamical system due to rotations, reflections, and translations in space. Furthermore, the existence of the DMSs is subject to the system's generalized mass matrix being $\mathcal{G}$-equivariant (eq. (3)). In practice, these constraints translate to identifiable constraints on the kinematic and dynamic parameters of the system model [10].

## IV. $\mathcal{G}$-Equivariant and $\mathcal{G}$-Invariant Function Approximators

Once we have identified the DMS group $\mathcal{G}$ of our system, we know that any proprioceptive or exteroceptive measurement has the same symmetry group $\mathcal{G}$. Therefore, to improve generalization and sample efficiency, we can exploit the known symmetries of the input space $\mathbf{x}$ and output space $\mathbf{y}$ of any mapping we desire to approximate, by constructing $\mathcal{G}$-equivariant or $\mathcal{G}$-invariant (eq. (1)) function approximation models $\widehat{f}\left( {\mathbf{x};\phi }\right)$ parameterized by $\phi$. In [10] we study the case of $\mathcal{G}$-equivariant/invariant neural networks (NNs).

Fig. 2: Left: Solo's sagittal (blue) and transversal (red) symmetry planes of the base body. Right: Solo's kinematic tree and the permutation symmetries of the legs/tree-branches.

In this section, we summarize the most relevant implications of DMSs for this type of machine-learning model.

- Computational implications of using $\mathcal{G}$-equivariant NNs. Thanks to recent theoretical and practical developments [4, 10, 12], using a $\mathcal{G}$-equivariant NN instead of an unconstrained NN comes at the price of only a negligible increase in the memory and computational resources required during training. Most importantly, there is no difference at inference time between equivariant and unconstrained models.

- Number of trainable parameters of a NN. Imposing equivariance/invariance constraints on a NN reduces the number of trainable parameters of the model [4, 12, 2]. In practice, this implies that for a $\mathcal{G}$-equivariant layer the number of trainable parameters is reduced by approximately $1/\left| \mathcal{G}\right|$, with $\left| \mathcal{G}\right|$ being the number of symmetries of the data (i.e., the number of DMSs of the system). Therefore, a $\mathcal{G}$-equivariant architecture with $\mathcal{G} = {\mathcal{C}}_{2}$ (robot Atlas in fig. 1-left) or $\mathcal{G} = {\mathcal{K}}_{4}$ (Solo in fig. 1-center) will have approximately $1/2$ (Atlas) or $1/4$ (Solo) of the trainable parameters of an unconstrained NN of the same architectural size. The reduction of parameters is caused by parameter sharing and is visually depicted in fig. 1-right; a minimal sketch of this projection is shown below.

An increasing amount of theoretical [2, 14] and empirical [12, 3, 10, 13] evidence suggests that when the data features symmetries, the use of equivariant/invariant function approximation models leads to increased generalization capabilities and a reduction in sample complexity. In [10] we present empirical evidence in robotics on a synthetic and a real-world learning application. Here, we summarize the results of the real-world application.
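The following minimal numpy sketch (ours, independent of the paper's repository) shows one standard way to obtain such a layer: averaging an unconstrained weight matrix over the group (the Reynolds operator) ties parameters together and yields exact equivariance. The coordinate-swap representations of ${\mathcal{C}}_{2}$ are toy assumptions for the example:

```python
# Build a G-equivariant linear layer by symmetrizing an unconstrained weight:
# W_eq = (1/|G|) * sum_g rho_out(g) W rho_in(g)^{-1}. The resulting parameter
# sharing is the source of the ~1/|G| reduction in trainable parameters.
import numpy as np

def symmetrize(W, reps_in, reps_out):
    """Project W onto the space of G-equivariant linear maps (Reynolds operator)."""
    terms = [ro @ W @ np.linalg.inv(ri) for ri, ro in zip(reps_in, reps_out)]
    return sum(terms) / len(terms)

# C2 acting by coordinate swaps on a 4-dim input and a 2-dim output space.
swap4 = np.eye(4)[[1, 0, 3, 2]]
swap2 = np.eye(2)[[1, 0]]
reps_in = [np.eye(4), swap4]    # rho_in(e), rho_in(g)
reps_out = [np.eye(2), swap2]   # rho_out(e), rho_out(g)

rng = np.random.default_rng(0)
W_eq = symmetrize(rng.normal(size=(2, 4)), reps_in, reps_out)

# Equivariance check: rho_out(g) (W_eq x) == W_eq (rho_in(g) x).
x = rng.normal(size=4)
assert np.allclose(swap2 @ (W_eq @ x), W_eq @ (swap4 @ x))
```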
## V. EXPERIMENTS

We present a supervised classification experiment using real-world data to showcase the effectiveness of Discrete Morphological Symmetries (DMSs) for data augmentation and for training equivariant functions. The goal is to demonstrate the positive impact of exploiting DMSs on the model's sample efficiency and generalization capacity. For a detailed analysis of the technical aspects and additional experiments, please refer to [10].

---

${}^{2}$ In conservative systems, translational, rotational, and time-shift symmetries imply, by Noether's theorem, the conservation of linear momentum, angular momentum, and energy, respectively [8].

---

Fig. 3: Static-friction-regime contact detection results comparing CNN, CNN-aug, and E-CNN. Left: Sample efficiency in log-log scale. Middle: Average leg F1-scores. Right: Classification metrics on the test set for models trained with the entire training set. The selected metrics include contact-state $\left( {\mathbf{y} \in {\mathbb{R}}^{16}}\right)$ accuracy (Acc) and the F1-score (F1) for each leg's binary contact state. Due to the sagittal symmetry of the robot, the left front (LF) and right front (RF) legs are expected to be symmetric, as are the left hind (LH) and right hind (RH) legs. The F1-score is presented considering the dataset class imbalance (see [10]). The reported values represent the average and standard deviation across 8 different seeds.

## A. Static-friction-regime contact detection (Classification)

In this experiment, we utilize the dataset introduced by Lin et al. [7] for estimating static-friction-regime contacts of the feet of the Mini-Cheetah quadruped robot. The dataset consists of real-world proprioceptive data (joint positions $\hat{\mathbf{q}}$ and velocities $\dot{\hat{\mathbf{q}}}$, base linear acceleration, base angular velocity, and leg feet positions and velocities) captured over a history of 150 time-frames. These measurements were obtained from onboard sensors during locomotion, encompassing various gaits and terrains. The dataset also includes $\mathbf{y} \in {\mathbb{R}}^{16}$, representing the ground-truth contact state of the robot, which was estimated offline using a non-causal algorithm. Our goal is to train a causal function approximator $\widehat{f}\left( {\mathbf{x};\mathbf{\phi }}\right)$ to predict the contact state from the input proprioceptive data.

The Mini-Cheetah robot in the real world exhibits an approximate reflection symmetry group, $\mathcal{G} \approx {\mathcal{C}}_{2}$. As a result, both the proprioceptive data $\mathbf{x}$ and the contact state $\mathbf{y}$ share the symmetry group $\mathcal{G}$. In this experiment, we compare three variants of function approximators: the original Convolutional Neural Network architecture proposed by Lin et al. [7] (CNN), a version of the CNN trained with data augmentation (CNN-aug), and a version of the CNN that incorporates hard equivariance constraints (E-CNN).

The sample efficiency and average leg contact-state classification results are depicted in fig. 3-left and fig. 3-middle. The equivariant model, E-CNN, demonstrates superior generalization performance and robustness to dataset biases compared to the unconstrained models [10]. Following E-CNN, CNN-aug exhibits better performance than the original CNN. In fig. 3-right, we evaluate the classification metrics on the test set when using the entire training data. The E-CNN model outperforms both CNN-aug and CNN in contact-state classification and average leg contact detection. Notably, exploiting symmetries helps mitigate suboptimal asymmetries in the models, preventing them from favoring the classification of one leg over the others (observe legs LF and RF in fig. 3-right).

## VI. Conclusions & Discussion

In this work, we summarize the findings presented in [10], where we present the definition of a Discrete Morphological Symmetry (DMS): a capability of some dynamical systems to imitate the effect of rotations, translations, and infeasible reflections of space with a feasible discrete change in the system configuration. Using the language of group theory, we study the set of DMSs of a dynamical system as a symmetry group $\mathcal{G}$ and conclude that: (1) a system with a symmetry group $\mathcal{G}$ exhibits a $\mathcal{G}$-equivariant generalized mass matrix and dynamics;
(2) The symmetries of the dynamics extend to optimal control policies, as well as to any proprioceptive and exteroceptive measurements related to the evolution of the system's dynamics.

We establish the necessary theoretical abstractions to investigate and identify DMSs in any dynamical system, irrespective of the number of symmetries present. This new formalism allows us to identify the reflection/sagittal symmetry, prevalent in humans, animals, and most robots, as the simplest morphological symmetry group $\mathcal{G} = {\mathcal{C}}_{2}$ . Crucially, we use the same formalism to identify and exploit DMSs in real-world robotic systems with a greater number of symmetries.

In addition, we provide an open-access repository that facilitates the efficient prototyping of $\mathcal{G}$ -equivariant neural networks for exploiting DMSs in various applications involving rigid-body dynamics, such as robotics, computer graphics, and computational biology. This repository includes a growing collection of symmetric dynamical systems, with their corresponding symmetry groups already identified. Furthermore, we present compelling empirical and theoretical evidence supporting the utilization of DMSs in data-driven applications through data augmentation and the adoption of $\mathcal{G}$ -equivariant neural networks. Both symmetry exploitation techniques result in improved sample efficiency and generalization.

## ACKNOWLEDGMENTS

This work's experiments were run at the Barcelona Supercomputing Center in collaboration with the HPAI group. This work is supported by the Spanish government with the project MoHuCo PID2020-120049RB-I00 and the ERA-Net Chistera project IPALM PCI2019-103386.

## REFERENCES

[1] Farzad Abdolhosseini, Hung Yu Ling, Zhaoming Xie, Xue Bin Peng, and Michiel Van de Panne. On learning symmetric locomotion. In Motion, Interaction and Games, pages 1-10. 2019.

[2] Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021.

[3] Marc Finzi, Gregory Benton, and Andrew G Wilson. Residual pathway priors for soft equivariance constraints. Advances in Neural Information Processing Systems, 34:30037-30049, 2021.

[4] Marc Finzi, Max Welling, and Andrew Gordon Wilson. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. In International Conference on Machine Learning, pages 3318-3328. PMLR, 2021.

[5] Kaveh Akbari Hamed and Jessy W Grizzle. Event-based stabilization of periodic orbits for underactuated 3-D bipedal robots with left-right symmetry. IEEE Transactions on Robotics, 30(2):365-381, 2013.

[6] Gábor Holló. Demystification of animal symmetry: Symmetry is a response to mechanical forces. Biology Direct, 12(1):1-18, 2017.

[7] Tzu-Yuan Lin, Ray Zhang, Justin Yu, and Maani Ghaffari. Legged robot state estimation using invariant Kalman filtering and learned contact events. In 5th Annual Conference on Robot Learning, 2021.

[8] Emmy Noether. Invariante Variationsprobleme. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, pages 235-257, 1918.

[9] Daniel Ordonez-Apraez, Antonio Agudo, Francesc Moreno-Noguer, and Mario Martin. An adaptable approach to learn realistic legged locomotion without examples. In 2022 International Conference on Robotics and Automation (ICRA), pages 4671-4678. IEEE, 2022.

[10] Daniel Ordonez-Apraez, Mario Martin, Antonio Agudo, and Francesc Moreno-Noguer.
On discrete symmetries of robotics systems: A group-theoretic and data-driven analysis. arXiv preprint arXiv:2302.10433, 2023.

[11] Jon M Selig. Geometric fundamentals of robotics, volume 128. Springer, 2005.

[12] Elise Van der Pol, Daniel Worrall, Herke van Hoof, Frans Oliehoek, and Max Welling. MDP homomorphic networks: Group symmetries in reinforcement learning. Advances in Neural Information Processing Systems, 33:4199-4210, 2020.

[13] Rui Wang, Robin Walters, and Rose Yu. Approximately equivariant networks for imperfectly symmetric dynamics. arXiv preprint arXiv:2201.11969, 2022.

[14] Rui Wang, Robin Walters, and Rose Yu. Data augmentation vs. equivariant networks: A theory of generalization on dynamics forecasting. arXiv preprint arXiv:2206.09450, 2022.

[15] Raymond Yeh, Yuan-Ting Hu, and Alexander Schwing. Chirality nets for human pose regression. Advances in Neural Information Processing Systems, 32, 2019.

[16] Wenhao Yu, Greg Turk, and C Karen Liu. Learning symmetric and low-energy locomotion. ACM Transactions on Graphics (TOG), 37(4):1-12, 2018.

[17] Martin Zinkevich and Tucker Balch. Symmetry in Markov decision processes and its implications for single agent and multi agent learning. In Proceedings of the 18th International Conference on Machine Learning. Citeseer, 2001. \ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/n9sxj3TKWm8/Initial_manuscript_tex/Initial_manuscript.tex b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/n9sxj3TKWm8/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..8821193924bd238179563599f434f4a1869e8c88 --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/n9sxj3TKWm8/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,117 @@ +§ MORPHOLOGICAL SYMMETRIES IN ROBOT LEARNING

Daniel Ordonez-Apraez*, Mario Martin ${}^{\dagger \ddagger }$ , Antonio Agudo* and Francesc Moreno-Noguer*

* Institut de Robòtica i Informàtica Industrial, CSIC-UPC. ${}^{ \dagger }$ Barcelona Supercomputing Center (BSC). ${}^{ \ddagger }$ Departament de Ciències de la Computació, Universitat Politècnica de Catalunya (UPC).

[dordonez, aagudo, fmoreno]@iri.upc.edu, mmartin@cs.upc.edu

Abstract-This work studies the impact of morphological symmetries in learning applications in robotics. Morphological symmetries are a predominant feature in both biological and robotic systems, arising from the presence of planes/axes of symmetry in the system's morphology. This results in harmonious duplication and distribution of body parts (e.g., humans' sagittal/left-right symmetry). Morphological symmetries become a significant learning prior as they extend to symmetries in the system's dynamics, optimal control policies, and all proprioceptive and exteroceptive measurements related to the evolution of the system's dynamics. Exploiting these symmetries in learning applications offers several advantageous outcomes, such as the use of data augmentation to mitigate the cost and challenges of data collection, or the use of equivariant/invariant function approximation models (e.g., neural networks) to improve sample efficiency and generalization, while reducing the number of trainable parameters. We provide an open-access repository ${}^{1}$ reproducing our experiments and allowing for rapid prototyping in robot learning applications exploiting morphological symmetries.

§ I. INTRODUCTION
Discrete Morphological Symmetries (DMSs) are ubiquitous in both biological and robotic systems. The vast majority of living and extinct animal species exhibit bilateral/sagittal reflection symmetry, where the right side of the body is approximately a reflection of the left side (see fig. 1-left). Similarly, a significant number of species exhibit radial symmetry, characterized by two or more morphological symmetry planes/axes (see fig. 1-center) [6]. These symmetries are a consequence of nature's tendency toward symmetric body parts and the harmonic duplication and distribution of limbs, a pattern perfected and exploited in the design of robotic systems.

Symmetries of the state of a dynamical system translate to symmetries of the system's dynamics and control [17]. Thus, DMSs imply the presence of symmetries in the dynamics and control of body motions, extending to symmetries in all proprioceptive and exteroceptive measurements related to the evolution of the system's dynamics (e.g., joint position/velocity/torque, depth images, contact forces). Therefore, for systems with morphological symmetries, we can use data augmentation to mitigate the challenges of data collection in robotics, computer graphics, and computational biology. This roughly implies that for every minute of recorded data of a system with $n$ morphological symmetries, we can obtain an additional $n - 1$ minutes of recordings, solely by considering the symmetric states of the recorded data. See the case of the robot Solo in fig. 1-center, for which we obtain 3 additional minutes of recording by considering the depicted 4-fold symmetries. Furthermore, we can exploit the symmetries of proprioceptive and exteroceptive data by imposing symmetry constraints in machine learning algorithms to boost sample efficiency and enhance generalization [17, 4, 12]. Consider the case of robot Solo in fig. 1-center/right. We desire to approximate the function $\mathbf{y} = f\left( \mathbf{x}\right)$ , mapping points in an input space $\mathbf{x} \in \mathcal{X}$ (say, the state of our robot) to points in an output space $\mathbf{y} \in \mathcal{Y}$ (say, the binary contact state of the robot's feet). To achieve this, we use recorded data to train a function approximation model $\widehat{f}$ parameterized with $\phi$ , i.e. $\mathbf{y} \approx \widehat{f}\left( {\mathbf{x};\mathbf{\phi }}\right)$ . Because of the robot's morphological symmetry, the input and output spaces have symmetries, and our target function is subject to an equivariance constraint:

$$
g \cdot \mathbf{y} = f\left( {g \cdot \mathbf{x}}\right) ,\;\forall \;g \in \mathcal{G}, \tag{1}
$$

where $g$ represents a symmetry, $g \cdot \mathbf{x}$ and $g \cdot \mathbf{y}$ the input and output points transformed by the symmetry (in our example, $g \cdot \mathbf{x}$ is the transformed robot state and $g \cdot \mathbf{y}$ a different contact state), while $\mathcal{G}$ represents the set of symmetries of the robot, its symmetry group. In these scenarios, we should impose the same equivariance constraint of our target function (eq. (1)) on our model $\widehat{f}$ , since by doing so we reduce the solution space of the optimization algorithm used to find the optimal $\widehat{f}$ . In practice, imposing equivariance (or invariance) constraints implies reducing the number of model parameters $\phi$ , while empirically obtaining benefits in sample efficiency and generalization [4, 12, 10].
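As a toy illustration of the data augmentation route, the sketch below generates the symmetric copies of a recorded dataset for $\mathcal{G} = \mathcal{C}_2$. The left/right-swap representations are illustrative assumptions; the actual representations for robots such as Atlas and Solo are provided in [10] and the companion repository.

```python
import numpy as np

# Assumed toy C2 = {e, gs} representations: gs swaps "left"/"right"
# components of a 4-dim state x and of a 2-dim output y.
G_x = [np.eye(4), np.array([[0.0, 1, 0, 0], [1, 0, 0, 0],
                            [0, 0, 0, 1], [0, 0, 1, 0]])]
G_y = [np.eye(2), np.array([[0.0, 1], [1, 0]])]

def augment(X, Y):
    """Add the symmetric copies (g.x, g.y) of every recorded sample;
    by eq. (1) these are valid samples of the same target function f."""
    X_aug = np.concatenate([X @ g.T for g in G_x])
    Y_aug = np.concatenate([Y @ g.T for g in G_y])
    return X_aug, Y_aug

# |G| * N samples from N recordings, e.g. 100 samples become 200:
X, Y = np.random.randn(100, 4), np.random.randn(100, 2)
print(augment(X, Y)[0].shape)  # (200, 4)
```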
Despite the potential benefits of exploiting symmetry and the ubiquitous presence of morphological symmetries in robotic/biological/virtual systems, this relevant inductive bias is frequently left unexploited in data-driven applications in robotics, computational biology, and computer graphics. We attribute the scarce adoption of these techniques to a missing theoretical framework that consolidates the concept of morphological symmetries, facilitating their study and identification, and to a missing practical framework enabling the efficient and convenient exploitation of symmetries in real-world data-driven applications.

The identification of morphological symmetries and how these extend to symmetries of proprioceptive and exteroceptive data is currently a laborious and error-prone system-specific process, due to the lack of a clear theoretical framework. As a result, most recent works that exploit some morphological symmetry (e.g., [15, 1, 16] in computer graphics and [12, 9, 5, 3] in robotics/dynamical systems) have only been applied to simple systems and the simplest morphological symmetry: reflection/sagittal symmetry (see fig. 1-left), with the exception of Finzi et al. [3]. However, these works provide little guidance on how to apply these techniques to other systems, particularly those with more than a single morphological symmetry.

${}^{1}$ github.com/Danfoa/RobotEquivariantNN

Fig. 1: Left: Symmetric configurations of the bipedal robot Atlas (3D animation) illustrating its morphological symmetry described by the reflection group ${\mathcal{C}}_{2}$ . The robot can imitate the reflections ${g}_{s}$ (hint: note the non-reflected text on the robot's chest). Middle: Top-view of symmetric configurations of the quadruped robot Solo (3D animation) showcasing its morphological symmetries described by the Klein four-group ${\mathcal{K}}_{4}$ . The robot can imitate two reflections $\left( {{g}_{s},{g}_{t}}\right)$ and a ${180}^{ \circ }$ rotation $\left( {g}_{r}\right)$ of space (hint: observe the unreflected/unrotated robot's heading direction and leg coloring). Symmetry transformations (arrows) affect the robot's configuration, as well as proprioceptive measurements (center of mass linear $\mathbf{l}$ and angular $\mathbf{k}$ momentum) and exteroceptive measurements (terrain elevation, external force ${f}_{1}$ ). Right: Diagram of a toy ${\mathcal{K}}_{4}$ -equivariant neural network, processing the symmetric states of robot Solo $\mathbf{x}$ and outputting the symmetric binary foot contact states $\mathbf{y}$ (see section IV).

Our recent work [10] aims at increasing the adoption of morphological symmetry exploitation in robotics by presenting the theoretical and practical contributions that enable the study and exploitation of these symmetries in arbitrary dynamical systems with any number of symmetries. In this short paper, we summarize the most important facts of morphological symmetries in robotics and their implications in data-driven applications.
For a rigorous and extended development, we refer the interested reader to [10].

§ II. PROPERTIES OF SYMMETRIC DYNAMICAL SYSTEMS

In robotics, a symmetry $g$ is roughly defined as an energy-preserving transformation of the robot state $\left( {\mathbf{q},\dot{\mathbf{q}}}\right)$ , defined by the system generalized position $\mathbf{q} \in \mathrm{Q}$ and velocity coordinates $\dot{\mathbf{q}} \in {\mathrm{T}}_{\mathbf{q}}\mathrm{Q}$ . If a dynamical system has a group of symmetries $\mathcal{G}$ , its dynamics (i.e., its equations of motion $\mathbf{M}\left( \mathbf{q}\right) \ddot{\mathbf{q}} = \mathbf{\tau }\left( {\mathbf{q},\dot{\mathbf{q}}}\right)$ ) are equivariant. That is:

$$
g \cdot \left\lbrack {\underset{\text{ Inertial }}{\underbrace{\mathbf{M}\left( \mathbf{q}\right) \ddot{\mathbf{q}}}} - \underset{\text{ Moving }}{\underbrace{\mathbf{\tau }\left( {\mathbf{q},\dot{\mathbf{q}}}\right) }}}\right\rbrack = \underset{\text{ Inertial }}{\underbrace{\mathbf{M}\left( {g \cdot \mathbf{q}}\right) g \cdot \ddot{\mathbf{q}}}} - \underset{\text{ Moving }}{\underbrace{\mathbf{\tau }\left( {g \cdot \mathbf{q},g \cdot \dot{\mathbf{q}}}\right) }} = \mathbf{0}
$$

$$
\forall g \in \mathcal{G},\mathbf{q} \in \mathrm{Q},\dot{\mathbf{q}} \in {\mathrm{T}}_{\mathbf{q}}\mathrm{Q}, \tag{2}
$$

where $\mathbf{M}\left( \mathbf{q}\right) : \mathrm{Q} \rightarrow {\mathbb{R}}^{n \times n}$ denotes the generalized mass matrix function and $\mathbf{\tau }\left( {\mathbf{q},\dot{\mathbf{q}}}\right) : \mathrm{Q} \times {\mathrm{T}}_{\mathbf{q}}\mathrm{Q} \rightarrow {\mathbb{R}}^{n}$ the generalized moving forces at a given state $\left( {\mathbf{q},\dot{\mathbf{q}}}\right)$ .

This property of symmetric dynamical systems, denoted as dynamics $\mathcal{G}$ -equivariance (eq. (2)), depends on both the generalized inertial and moving forces being independently equivariant, implying:

$$
\mathbf{M}\left( {g \cdot \mathbf{q}}\right) = g\mathbf{M}\left( \mathbf{q}\right) {g}^{-1}\; \land \;g \cdot \mathbf{\tau }\left( {\mathbf{q},\dot{\mathbf{q}}}\right) = \mathbf{\tau }\left( {g \cdot \mathbf{q},g \cdot \dot{\mathbf{q}}}\right)
$$

$$
\forall g \in \mathcal{G},\mathbf{q} \in \mathrm{Q},\dot{\mathbf{q}} \in {\mathrm{T}}_{\mathbf{q}}\mathrm{Q}. \tag{3}
$$

The equivariance of the inertial forces requires that the generalized mass matrix of the system is equivariant. This is the identifying property of symmetric dynamical systems. In practice, as the generalized mass matrix is well-defined for model-based systems, it can be used for the identification of the system's symmetries using eq. (3) (see [10] for the case of rigid body dynamics). Furthermore, the equivariance of the generalized moving forces (which, in practice, usually incorporate control, constraint, and external forces) implies that dynamics $\mathcal{G}$ -equivariance (eq. (2)) is upheld until a symmetry-breaking force violates the equivariance of $\mathbf{\tau }$ .

To gain some intuition, consider as an example the bipedal robot Atlas, with symmetry group $\mathcal{G} = {\mathcal{C}}_{2} = \left\{ {e,{g}_{s}}\right\}$ . Both robot states in fig. 1-left are symmetric states (related by the action ${g}_{s}$ ). Then, eq. (2) suggests that any trajectory of motion starting from the left robot state will be equivalent (up to transformation by ${g}_{s}$ ) to a motion trajectory starting from the right robot state if and only if the moving forces driving both trajectories are equivalent (up to transformation by ${g}_{s}$ ).
That is, if the control and external forces are ${\mathcal{C}}_{2}$ -equivariant (eq. (3)). Note that we can perform a similar analysis for each symmetric state and action of systems with larger symmetry groups (e.g., Solo in fig. 1-center).

The aforementioned definition of symmetries as energy-preserving transformations of the system state is intentionally generic, imposing no restrictions on the nature of the state transformation, such as whether the transformed state is feasible or reachable. This allows us to consider feasible state transformations (such as robot translations and rotations ${}^{2}$ ) along with unfeasible state transformations (such as a reflection of space) as symmetries of the system. Naturally, in robotics, we are interested in studying and exploiting feasible symmetries alone. Therefore, we introduced the concept of a discrete morphological symmetry, capturing the set of feasible symmetries of the system that imitate feasible and unfeasible symmetries.

§ III. DISCRETE MORPHOLOGICAL SYMMETRIES (DMSs)

A dynamical system is said to possess a DMS if it can imitate the effects of a rotation, reflection, or translation in space (i.e., Euclidean isometries) through a feasible discrete change in its configuration (see the formal definition in [10]). To gain intuition, we can analyze the simplest and most common DMS.

Reflection DMS: Although most floating-base dynamical systems are symmetric with respect to reflections of space (section II), these symmetries are infeasible due to the impossibility of executing reflections in the real world [11]. However, systems with sagittal symmetry (e.g., Atlas in fig. 1-left, or humans) can imitate the effect of a reflection with a feasible discrete change in their configuration, by rotating their body and modifying their limbs' pose. These systems share the same symmetry group, the reflection group $\mathcal{G} \equiv {\mathcal{C}}_{2}$ .

Multiple DMSs: This property can be extended to the case of a floating-base system having multiple DMSs, allowing it to imitate multiple distinct Euclidean isometries. Most frequently, systems can imitate a set of rotations and reflections, making $\mathcal{G}$ a cyclic ${\mathcal{C}}_{k}$ or dihedral ${\mathcal{D}}_{2k}$ group. See examples for ${\mathcal{C}}_{3}$ in [10], and for ${\mathcal{D}}_{4} \equiv {\mathcal{K}}_{4}$ in fig. 1-center.

Because each DMS is defined as a feasible transformation that imitates a system's symmetry $\bar{g}$ due to a Euclidean isometry, the group of DMSs $\mathcal{G}$ is isomorphic to a subset of the feasible and unfeasible symmetries of the dynamical system due to rotations, reflections, and translations in space. Furthermore, the existence of the DMSs is subject to the system's generalized mass matrix being $\mathcal{G}$ -equivariant (eq. (3)). In practice, these constraints translate to identifiable constraints in the kinematic and dynamic parameters of the system model [10].

§ IV. $\mathcal{G}$-EQUIVARIANT AND $\mathcal{G}$-INVARIANT FUNCTION APPROXIMATORS

Once we have identified the DMS group $\mathcal{G}$ of our system, we know that any proprioceptive or exteroceptive measurements have the same symmetry group $\mathcal{G}$ . Therefore, to improve generalization and sample efficiency, we can exploit the known symmetries of the input $\mathbf{x}$ and output $\mathbf{y}$ spaces of any mapping we desire to approximate, by constructing $\mathcal{G}$ -equivariant or $\mathcal{G}$ -invariant (eq. (1))
function approximation models $\widehat{f}\left( {\mathbf{x};\phi }\right)$ , parameterized with $\phi$ . In [10] we study the case of $\mathcal{G}$ -equivariant/invariant neural networks (NN).

Fig. 2: Left: Solo sagittal (blue) and transversal (red) symmetry planes of the base body. Right: Solo's kinematic tree, and permutation symmetries of the legs/tree-branches.

In this section, we summarize the most relevant implications of DMSs for this type of machine-learning model.

 * Computational implications of using $\mathcal{G}$ -equivariant NN. Thanks to recent theoretical and practical developments [4, 10, 12], the use of $\mathcal{G}$ -equivariant NN instead of unconstrained NN comes at the price of a negligible increase in the memory and computational resources required during training of the model. Most importantly, there is no difference, at inference time, between equivariant and unconstrained models.

 * Number of trainable parameters of a NN. Imposing equivariance/invariance constraints in NN reduces the number of trainable parameters of the model [4, 12, 2]. In practice, this implies that for a $\mathcal{G}$ -equivariant layer the number of trainable parameters is reduced by a factor of approximately $1/\left| \mathcal{G}\right|$ , where $\left| \mathcal{G}\right|$ is the number of symmetries of the data (i.e., the number of DMSs of the system). Therefore, a $\mathcal{G}$ -equivariant architecture with $\mathcal{G} = {\mathcal{C}}_{2}$ (robot Atlas in fig. 1-left) or $\mathcal{G} = {\mathcal{K}}_{4}$ (Solo in fig. 1-center) will have approximately $1/2$ (Atlas) or $1/4$ (Solo) of the trainable parameters of an unconstrained NN of the same architectural size. The reduction of parameters is caused by parameter sharing and is visually depicted in fig. 1-right.

An increasing amount of theoretical [2, 14] and empirical [12, 3, 10, 13] evidence suggests that when the data features symmetries, the use of equivariant/invariant function approximation models leads to increased generalization capabilities and a reduction in sample complexity. In [10] we present empirical evidence in robotics on a synthetic and a real-world learning application. Here, we summarize the results of the real-world application.

§ V. EXPERIMENTS

We present a supervised experiment using real-world data in a classification application to showcase the effectiveness of Discrete Morphological Symmetries (DMSs) for data augmentation and training equivariant functions. The goal is to demonstrate the positive impact of exploiting DMSs on the model's sample efficiency and generalization capacity. For a detailed analysis of the technical aspects and additional experiments, please refer to [10].

${}^{2}$ In conservative systems, translational, rotational, and time-shift symmetries imply, by Noether's theorem, the conservation of linear momentum, angular momentum, and energy, respectively [8].
Fig. 3: Static-Friction-Regime contact detection results comparing CNN, CNN-aug, and E-CNN. Left: Sample efficiency in log-log scale. Middle: Average legs F1-score. Right: Classification metrics on test set performance of models trained with the entire training set. The selected metrics include contact-state $\left( {\mathbf{y} \in {\mathbb{R}}^{16}}\right)$ accuracy (Acc) and F1-score (F1) for each leg's binary contact state. Due to the sagittal symmetry of the robot, the left front (LF) and right front (RF) legs are expected to be symmetric, as well as the left hind (LH) and right hind (RH) legs. The F1-score is presented considering the dataset class imbalance (see [10]). The reported values represent the average and standard deviation across 8 different seeds.

§ A. STATIC-FRICTION-REGIME CONTACT DETECTION (CLASSIFICATION)

In this experiment, we utilize the dataset introduced in Lin et al. [7] for estimating static-friction-regime contacts at the feet of the Mini-Cheetah quadruped robot. The dataset consists of real-world proprioceptive data (joint positions and velocities $\widehat{\mathbf{q}},\dot{\widehat{\mathbf{q}}}$ , base linear acceleration, base angular velocity, and leg feet positions and velocities) captured over a history of 150 time-frames. These measurements were obtained from onboard sensors during locomotion, encompassing various gaits and terrains. The dataset also includes $\mathbf{y} \in {\mathbb{R}}^{16}$ , representing the ground truth contact state of the robot, which was estimated offline using a non-causal algorithm. Our goal is to train a causal function approximator $\widehat{f}\left( {\mathbf{x};\mathbf{\phi }}\right)$ to predict the contact state based on the input proprioceptive data.

The Mini-Cheetah robot in the real world exhibits an approximate reflection symmetry group, $\mathcal{G} \approx {\mathcal{C}}_{2}$ . As a result, both the proprioceptive data $\mathbf{x}$ and the contact state $\mathbf{y}$ share the symmetry group $\mathcal{G}$ . In this experiment, we compare three variants of function approximators: the original Convolutional Neural Network architecture proposed by Lin et al. [7] (CNN), a version of CNN trained with data augmentation (CNN-aug), and a version of CNN that incorporates hard-equivariance constraints (E-CNN).

The sample efficiency and average leg contact state classification results are depicted in fig. 3-left-&-middle. The equivariant model, E-CNN, demonstrates superior generalization performance and robustness to dataset biases compared to the unconstrained models [10]. Following E-CNN, CNN-aug exhibits better performance than the original CNN. In fig. 3-right, we evaluate the classification metrics on the test set when using the entire training data. The E-CNN model outperforms both CNN-aug and CNN in contact state classification and average leg contact detection. Notably, exploiting symmetries helps mitigate suboptimal asymmetries in the models, preventing them from favoring the classification of one leg over others (observe legs LF and RF in fig. 3-right).

§ VI. CONCLUSIONS & DISCUSSION

In this work, we summarize the findings presented in [10], where we present the definition of Discrete Morphological Symmetry (DMS): a capability of some dynamical systems to imitate the effect of rotations, translations, and infeasible reflections of space with a feasible discrete change in the system configuration.
Using the language of group theory, we study the set of DMSs of a dynamical system as a symmetry group $\mathcal{G}$ and conclude that: (1) A system with a symmetry group $\mathcal{G}$ exhibits a $\mathcal{G}$ -equivariant generalized mass matrix and dynamics. (2) The symmetries of the dynamics extend to optimal control policies, as well as to any proprioceptive and exteroceptive measurements related to the evolution of the system's dynamics.

We establish the necessary theoretical abstractions to investigate and identify DMSs in any dynamical system, irrespective of the number of symmetries present. This new formalism allows us to identify the reflection/sagittal symmetry, prevalent in humans, animals, and most robots, as the simplest morphological symmetry group $\mathcal{G} = {\mathcal{C}}_{2}$ . Crucially, we use the same formalism to identify and exploit DMSs in real-world robotic systems with a greater number of symmetries.

In addition, we provide an open-access repository that facilitates the efficient prototyping of $\mathcal{G}$ -equivariant neural networks for exploiting DMSs in various applications involving rigid-body dynamics, such as robotics, computer graphics, and computational biology. This repository includes a growing collection of symmetric dynamical systems, with their corresponding symmetry groups already identified. Furthermore, we present compelling empirical and theoretical evidence supporting the utilization of DMSs in data-driven applications through data augmentation and the adoption of $\mathcal{G}$ -equivariant neural networks. Both symmetry exploitation techniques result in improved sample efficiency and generalization.

§ ACKNOWLEDGMENTS

This work's experiments were run at the Barcelona Supercomputing Center in collaboration with the HPAI group. This work is supported by the Spanish government with the project MoHuCo PID2020-120049RB-I00 and the ERA-Net Chistera project IPALM PCI2019-103386. \ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/rnmab4CQN_/Initial_manuscript_md/Initial_manuscript.md b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/rnmab4CQN_/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..9dc8ddbb5ba49c6d03bbfa4b1f0f81eca6bd3e6f --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/rnmab4CQN_/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,157 @@ +# Progressive Learning for Physics-informed Neural Motion Planning

Ruiqi Ni and Ahmed H. Qureshi

Department of Computer Science, Purdue University

\{ni117, ahqureshi\}@purdue.edu

![019640f3-2556-77b6-9b80-623f6bf0feac_0_149_498_1499_431_0.jpg](images/019640f3-2556-77b6-9b80-623f6bf0feac_0_149_498_1499_431_0.jpg)

Fig. 1: Physics-informed neural motion planning of a 6-DOF robot manipulator in a real-world narrow passage environment. The images from left to right show the robot's motion sequence from its start to the desired goal configuration. In this case, the proposed approach took 0.05 seconds, whereas LazyPRM* took 2.79 seconds to find a path, making our method at least ${50} \times$ faster than a traditional approach.

Abstract-Neural motion planners (NMPs) demonstrate fast computational speed in finding path solutions but require a huge amount of expert trajectories for learning, thus adding a significant training computational load.
In contrast, recent advancements have also led to a physics-informed NMP approach that directly solves the Eikonal equation for motion planning and does not require expert demonstrations for learning. However, experiments show that the physics-informed NMP approach performs poorly in complex environments and lacks scalability in multiple scenarios and high-dimensional real robot settings. To overcome these limitations, this paper presents a novel and tractable Eikonal equation formulation and introduces a new progressive learning strategy to train neural networks without expert data in complex, cluttered, multiple, and high-dimensional robot motion planning scenarios. We show that our approach scales to a real robot setup in a narrow passage environment. The proposed method's videos and code implementations are available at https://github.com/ruiqini/P-NTFields.

## I. INTRODUCTION

Robots moving in their surrounding environment must find a feasible motion trajectory, coordinating their actuators to move from their start configuration to their goal configuration while satisfying all the constraints, such as collision avoidance.

Inspired by physics-informed deep learning models [5, 6], recent development has led to a physics-informed NMP called Neural Time Fields (NTFields) [3] that require no expert training trajectories and instead directly learn to solve the Eikonal equation for motion planning. Once trained, NTFields output the speed and time fields in the given environment for the desired start and goal configuration. The time fields' gradients are then followed to retrieve a feasible path solution for the underlying MP problem. Although NTFields find path solutions extremely fast and require no expert data, they struggle in complex environments and do not scale well to multiple scenarios and high-dimensional planning problems. These limitations are mainly due to the following two reasons. First, the Eikonal equation formulation has an extremely sharp feature solution around low-speed obstacles, making it difficult for the underlying deep-learning model to converge and perform well in complex scenarios. Second, training deep neural models to solve PDEs is inherently challenging and requires advanced learning strategies and an expressive PDE formulation with a smooth loss landscape.

Therefore, this paper addresses the limitations of NTFields and proposes a new progressive learning method, which also requires no training trajectories and scales very well to complex scenarios, including high-dimensional, real-world robot manipulator planning problems. The main contributions of the paper are summarized as follows:

- We highlight that the Eikonal equation formulation for motion planning in NTFields can converge to incorrect local minima during training, resulting in relatively low performance and an inability to scale to multiple, complex environments.

- We introduce a novel progressive speed scheduling strategy that iteratively guides neural model training from a constant high speed to a very low speed around obstacles in the environment, preventing incorrect local minima when training physics-informed NMPs in complex, cluttered environments.

- We propose using the viscosity term [1] based on the Laplacian operator in the Eikonal equation formulation to transform its ill-posed, non-linear behavior into a semi-linear elliptic representation with a unique smooth solution around low-speed obstacles.
Our novel formulation leads to physics-informed NMPs that are scalable to complex scenarios.

- We also demonstrate our framework's performance using a 6 degree-of-freedom (DOF) UR5e robot in solving real-world narrow passage motion planning problems, as shown in Fig. 1.

## II. BACKGROUND

This section formally presents the background to robot motion planning problems and their solutions through physics-informed NMPs.

## A. Robot Motion Planning

Let the robot's configuration and environment space be denoted as $\mathcal{Q} \subset {\mathbb{R}}^{d}$ and $\mathcal{X} \subset {\mathbb{R}}^{m}$ , where $\{ m, d\} \in \mathbb{N}$ represents their dimensionality. The obstacles in the environment, denoted as ${\mathcal{X}}_{\text{obs }} \subset \mathcal{X}$ , form the obstacle robot configuration space (c-space) defined as ${\mathcal{Q}}_{\text{obs }} \subset \mathcal{Q}$ . Finally, the feasible space in the environment and c-space is represented as ${\mathcal{X}}_{\text{free }} = \mathcal{X} \smallsetminus {\mathcal{X}}_{\text{obs }}$ and ${\mathcal{Q}}_{\text{free }} = \mathcal{Q} \smallsetminus {\mathcal{Q}}_{\text{obs }}$ , respectively. The objective of robot motion planning algorithms is to find a trajectory $\tau \subset {\mathcal{Q}}_{\text{free }}$ that connects the given robot start ${q}_{s} \in {\mathcal{Q}}_{\text{free }}$ and goal ${q}_{g} \in {\mathcal{Q}}_{\text{free }}$ configurations. Furthermore, additional constraints are sometimes imposed on the trajectory connecting the start and goal, such as having the shortest Euclidean distance or minimum travel time. The latter is often preferred as it allows imposing speed constraints near obstacles for robot and environment safety. However, planning under speed constraints is computationally expensive, and existing methods rely on path-smoothing techniques when safety is desired.

## B. Physics-informed Motion Planning Framework

Recent development led to a physics-informed motion planning framework called Neural Time Fields (NTFields) [3], which provides a computationally efficient and demonstration-free deep learning method for motion planning problems. It views motion planning problems as the solution to a PDE, specifically focusing on solving the Eikonal equation. The Eikonal equation, a first-order non-linear PDE, allows finding the shortest trajectory between start $\left( {q}_{s}\right)$ and goal $\left( {q}_{g}\right)$ under speed constraints by relating a predefined speed model $S\left( {q}_{g}\right)$ at configuration ${q}_{g}$ to the arrival time $T\left( {{q}_{s},{q}_{g}}\right)$ from ${q}_{s}$ to ${q}_{g}$ as follows:

$$
1/S\left( {q}_{g}\right) = \begin{Vmatrix}{{\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right) }\end{Vmatrix} \tag{1}
$$

The ${\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right)$ is the gradient of the arrival time function $T\left( {{q}_{s},{q}_{g}}\right)$ with respect to ${q}_{g}$ . Therefore, finding a trajectory connecting the given start and goal requires solving the PDE under a predefined speed model and arrival time function. The arrival time function in NTFields is factorized as follows:

$$
T\left( {{q}_{s},{q}_{g}}\right) = \begin{Vmatrix}{{q}_{s} - {q}_{g}}\end{Vmatrix}/\tau \left( {{q}_{s},{q}_{g}}\right) \tag{2}
$$

The $\tau \left( {{q}_{s},{q}_{g}}\right)$ is the factorized time field, which is the output of the NTFields deep neural network for the given ${q}_{s}$ and ${q}_{g}$ .
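For concreteness, the following is a minimal PyTorch-style sketch of recovering the arrival time and predicted speed from the factorized field via automatic differentiation; `tau_net` is a hypothetical stand-in for the trained network, not the NTFields implementation.

```python
import torch

def predicted_speed(tau_net, q_s, q_g):
    """Recover T (Eq. 2) and the predicted speed 1/||grad T|| (Eq. 1)
    from the factorized output tau(q_s, q_g) of a batch of samples."""
    q_g = q_g.clone().requires_grad_(True)
    T = torch.norm(q_s - q_g, dim=-1) / tau_net(q_s, q_g)   # Eq. (2)
    grad_T = torch.autograd.grad(T.sum(), q_g, create_graph=True)[0]
    return T, 1.0 / grad_T.norm(dim=-1)                      # Eq. (1)
```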
Since the neural network in NTFields outputs the factorized time field $\tau$ , the corresponding predicted speed is computed using eqs. (1) and (2). Furthermore, the NTFields framework determines the ground truth speed using a predefined speed function:

$$
{S}^{ * }\left( q\right) = \frac{{s}_{\text{const }}}{{d}_{\max }} \times \operatorname{clip}\left( {\mathbf{d}\left( {\mathbf{p}\left( q\right) ,{\mathcal{X}}_{\text{obs }}}\right) ,{d}_{\min },{d}_{\max }}\right) \tag{3}
$$

where $\mathbf{d}\left( {\cdot , \cdot }\right)$ is the minimal distance between the robot surface points $\mathbf{p}\left( q\right)$ at configuration $q$ and the environment obstacles ${\mathcal{X}}_{\text{obs }}$ . The ${d}_{\min }$ and ${d}_{\max }$ are minimum and maximum distance thresholds, and ${s}_{\text{const }}$ is a predefined speed constant; we normalize ${s}_{\text{const }} = 1$ to represent the maximum speed in the free space, and ${s}_{\min } = {s}_{\text{const }} \times {d}_{\min }/{d}_{\max }$ represents the minimum speed in the obstacle space. Finally, the NTFields neural framework is trained end-to-end using an isotropic loss function between the predicted $S$ and ground truth ${S}^{ * }$ speeds.

## III. Proposed Method

Although NTFields demonstrate the ability to perform efficient motion planning without expert training data, they exhibit relatively low success rates in complex, cluttered environments, including high-dimensional problems, and do not scale to multiple scenarios. We observed that these limitations arise mainly because of the ill-posed nature of the Eikonal equation and because physics-informed loss landscapes are hard to optimize in general. To overcome these limitations, we introduce a new progressive learning algorithm comprising a novel viscosity-based Eikonal equation formulation and a progressive speed update strategy to train physics-informed NMPs in multiple, complex, high-dimensional scenarios.

## A. Viscosity-based Eikonal Equation

The Eikonal equation's exact solution has several problems that lead to neural network fitting issues. First, the solution is not differentiable at every point in space, which means a neural network cannot approximate the solution very well, especially around the sharp features arising in low-speed environments. Second, the gradient ${\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right)$ is not unique at these non-smooth points, which also causes neural network fitting issues because training is supervised through the gradient ${\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right)$ .

To fix these problems, we propose to use a viscosity term that provides a differentiable and unique approximation of the Eikonal equation's solution. The viscosity term comes from the vanishing viscosity method [1]. It adds the Laplacian ${\Delta }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right)$ to the Eikonal equation, i.e.,

$$
1/S\left( {q}_{g}\right) = \begin{Vmatrix}{{\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right) }\end{Vmatrix} + \epsilon {\Delta }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right) , \tag{4}
$$

where $\epsilon \in \mathbb{R}$ is a scaling coefficient. The resulting system in Eq. 4 is a semi-linear elliptic PDE with a smooth and unique solution. Furthermore, the value of $\epsilon$ affects the smoothness of the predicted time fields. In Fig. 2, we compare fields with different values of $\epsilon$ to the ground truth field generated with the fast marching method (FMM).
It can be seen that by varying $\epsilon$ , the accuracy of the results varies relative to the ground truth. In practice, as the coefficient $\epsilon \rightarrow 0$ , the smooth and unique solution of Eq. 4 approaches the exact solution of the Eikonal equation, Eq. 1.

![019640f3-2556-77b6-9b80-623f6bf0feac_2_136_145_753_250_0.jpg](images/019640f3-2556-77b6-9b80-623f6bf0feac_2_136_145_753_250_0.jpg)

Fig. 2: Effect of the viscosity coefficient $\epsilon$ on the correctness of the time field results. It can be seen that a large value of $\epsilon$ deviates from the solution given by the expert. The expert is FMM, which finds a solution to the Eikonal equation. The colorbar shows speed field values ranging from 0 to 1.

## B. Progressive speed scheduling

This section introduces our progressive speed scheduling approach to train physics-informed motion planners in complex environments. Physics-based loss functions are generally challenging to optimize as they depend on the gradient of the underlying neural network. In physics-informed motion planners, the optimization becomes more difficult due to low-speed conditions near obstacles, often leading to an incorrect local minimum, i.e., despite a small training loss, the neural model behaves as if low-speed obstacles do not exist in the environment. To circumvent these incorrect local minima, we observe and leverage the following two properties of the Eikonal equation to progressively guide the NN training process and capture the low-speed obstacle space for collision avoidance.

First, we notice that the solution of the Eikonal equation (Eq. 1), $T\left( {{q}_{s},{q}_{g}}\right)$ , in a constant-max-speed scene $\left( {S\left( q\right) = 1}\right)$ becomes the distance between the given start and goal, which leads to the trivial solution $\tau \left( {{q}_{s},{q}_{g}}\right) = 1$ . Second, we find that the interpolation from the constant max speed to the low speed around obstacles is continuous, and the solutions of the Eikonal equation along those interpolations are also continuous. Based on these observations, we propose a progressive speed alteration strategy that gradually scales down the speed from a constant max value to a low value around obstacles using a parameter $\alpha \left( t\right) \in \left\lbrack {0,1}\right\rbrack$ , i.e.,

$$
{S}_{\alpha \left( t\right) }^{ * }\left( q\right) = \left( {1 - \alpha \left( t\right) }\right) + \alpha \left( t\right) {S}^{ * }\left( q\right) , \tag{5}
$$

where $t \in \mathbb{N}$ represents the training epoch. Therefore, when $\alpha \left( t\right) = 0$ , the scene has a constant max speed, and the Eikonal equation solution is trivial. Furthermore, when $\alpha \left( t\right) = 1$ , the scene has low speed around obstacles. Fig. 3 shows the gradual progression of the speed and time fields as $\alpha$ scales linearly from 0 to 1. It can be seen that the speed and time fields change continuously as $\alpha$ changes linearly.

![019640f3-2556-77b6-9b80-623f6bf0feac_2_912_147_752_249_0.jpg](images/019640f3-2556-77b6-9b80-623f6bf0feac_2_912_147_752_249_0.jpg)

Fig. 3: Progressively decreasing the speed around obstacles using the parameter $\alpha$ leads to a continuous interpolation of the speed and time fields in the given environment. The colorbar shows speed field values ranging from 0 to 1.

To train the physics-informed motion planner, we start with a low value of $\alpha \left( t\right)$ and let the NN fit the constant-speed trivial solution.
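The pieces above can be combined into a single training objective. The following is a minimal PyTorch-style sketch of one loss evaluation combining the viscosity-regularized predicted speed (Eq. 4), the scheduled ground-truth speed (Eq. 5), and the isotropic loss (Eq. 6, introduced below); `tau_net`, `S_star` (the predefined speed of Eq. 3), the particular $\alpha$ schedule, and all constants are illustrative assumptions rather than the exact implementation.

```python
import torch

def alpha_schedule(t, total_epochs):
    # Assumed schedule: alpha grows from 0 to 1 with a decaying rate of
    # change, since the time fields react more aggressively than the
    # speed fields (the paper's exact schedule may differ).
    return 1.0 - (1.0 - min(t / total_epochs, 1.0)) ** 2

def training_loss(tau_net, S_star, q_s, q_g, alpha, eps=0.01):
    q_g = q_g.clone().requires_grad_(True)
    T = torch.norm(q_s - q_g, dim=-1) / tau_net(q_s, q_g)       # Eq. (2)
    grad = torch.autograd.grad(T.sum(), q_g, create_graph=True)[0]
    # Laplacian of T w.r.t. q_g = trace of its Hessian, per sample.
    lap = sum(torch.autograd.grad(grad[:, i].sum(), q_g,
                                  create_graph=True)[0][:, i]
              for i in range(q_g.shape[-1]))
    S_pred = 1.0 / (grad.norm(dim=-1) + eps * lap)              # Eq. (4)
    S_gt = (1.0 - alpha) + alpha * S_star(q_g)                  # Eq. (5)
    # Isotropic ratio loss (Eq. 6), written for the goal endpoint
    # only; the full objective adds the same terms for q_s.
    return (S_gt / S_pred + S_pred / S_gt - 2.0).mean()
```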
Next, we progressively interpolate the field from the constant max speed to the low speed by gradually increasing $\alpha \left( t\right)$ over the training epochs. Since the NN easily fits the trivial solution, progressively decreasing the obstacle speed ${S}^{ * }\left( q\right)$ guides the network to learn the interpolating lower-speed fields. Furthermore, we observe that the speed fields change linearly with $\alpha \left( t\right)$ , but the resulting time fields change more aggressively. Thus, we also reduce the rate of change of $\alpha \left( t\right)$ as the training epochs increase.

## C. Neural Architecture

This section describes our neural framework, as shown in Fig. 4, for generating the speed and time fields that solve robot motion planning problems. Our framework comprises the following modules. Given the robot's initial $\left( {q}_{s}\right)$ and target $\left( {q}_{g}\right)$ configurations, we use random Fourier features $\gamma$ [7, 4] for obtaining high-frequency robot configuration embeddings. These features are further processed into a latent embedding by a C-space encoder $f\left( \cdot \right)$ , which is a ResNet-style multi-layer perceptron [2]. To combine the features $f\left( {\gamma \left( {q}_{s}\right) }\right)$ and $f\left( {\gamma \left( {q}_{g}\right) }\right)$ , we use the non-linear symmetric operator $\otimes$ from the NTFields method [3]. Our time field generator network $g$ is a ResNet-style multi-layer perceptron which takes the encoding $f\left( {\gamma \left( {q}_{s}\right) }\right) \otimes f\left( {\gamma \left( {q}_{g}\right) }\right)$ and outputs the factorized time field $\tau \left( {{q}_{s},{q}_{g}}\right) = g\left( {f\left( {\gamma \left( {q}_{s}\right) }\right) \otimes f\left( {\gamma \left( {q}_{g}\right) }\right) }\right)$ . Given $\tau \left( {{q}_{s},{q}_{g}}\right)$ , we compute its gradient and Laplacian to determine $S\left( {q}_{s}\right)$ and $S\left( {q}_{g}\right)$ . Finally, we propose a smooth isotropic objective function [6] to train our framework:

$$
L\left( {{S}_{\alpha }^{ * },S}\right) = \frac{{S}_{\alpha }^{ * }\left( {q}_{s}\right) }{S\left( {q}_{s}\right) } + \frac{S\left( {q}_{s}\right) }{{S}_{\alpha }^{ * }\left( {q}_{s}\right) } + \frac{{S}_{\alpha }^{ * }\left( {q}_{g}\right) }{S\left( {q}_{g}\right) } + \frac{S\left( {q}_{g}\right) }{{S}_{\alpha }^{ * }\left( {q}_{g}\right) } - 4 \tag{6}
$$

## D. Planning pipeline

Once trained, we use an execution pipeline similar to that of the NTFields method. First, we predict $\tau \left( {{q}_{s},{q}_{g}}\right)$ for the given start ${q}_{s}$ and goal ${q}_{g}$ . Next, the factorized time $\tau$ parameterizes Eqs. 2 and 1 for computing the time field $T\left( {{q}_{s},{q}_{g}}\right)$ and speed fields $S\left( {q}_{s}\right) ,S\left( {q}_{g}\right)$ , respectively. Finally, the path solution is determined in a bidirectional manner by iteratively updating the start and goal configurations as follows,

$$
{q}_{s} \leftarrow {q}_{s} - \beta {S}^{2}\left( {q}_{s}\right) {\nabla }_{{q}_{s}}T\left( {{q}_{s},{q}_{g}}\right) \tag{7}
$$

$$
{q}_{g} \leftarrow {q}_{g} - \beta {S}^{2}\left( {q}_{g}\right) {\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right)
$$

The parameter $\beta \in \mathbb{R}$ is a predefined step size. Furthermore, at each planning iteration, the start and goal configurations are updated using these gradients to march toward each other until $\begin{Vmatrix}{{q}_{s} - {q}_{g}}\end{Vmatrix} < {d}_{g}$ , where ${d}_{g} \in \mathbb{R}$ is a predefined goal tolerance.
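A minimal sketch of this bidirectional marching, reusing the `tau_net` stand-in from the earlier sketches and illustrative constants, follows.

```python
import torch

def plan(tau_net, q_s, q_g, beta=0.05, d_goal=0.01, max_iters=500):
    """Bidirectional marching of Eq. 7: start and goal configurations
    move toward each other along the time-field gradients until they
    meet. tau_net and the constants are illustrative assumptions."""
    path_s, path_g = [q_s], [q_g]
    for _ in range(max_iters):
        qs = path_s[-1].clone().requires_grad_(True)
        qg = path_g[-1].clone().requires_grad_(True)
        T = torch.norm(qs - qg) / tau_net(qs, qg)           # Eq. (2)
        gs, gg = torch.autograd.grad(T, (qs, qg))
        S_s, S_g = 1.0 / gs.norm(), 1.0 / gg.norm()         # Eq. (1)
        path_s.append((qs - beta * S_s**2 * gs).detach())   # Eq. (7)
        path_g.append((qg - beta * S_g**2 * gg).detach())
        if torch.norm(path_s[-1] - path_g[-1]) < d_goal:
            break
    return path_s + path_g[::-1]
```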
![019640f3-2556-77b6-9b80-623f6bf0feac_3_142_144_742_430_0.jpg](images/019640f3-2556-77b6-9b80-623f6bf0feac_3_142_144_742_430_0.jpg)

Fig. 4: The neural architecture comprises the Fourier-based C-space Encoder, symmetric operator, and time-field generator. Three images on the top left show how we progressively decrease the speed around a bunny-shaped obstacle to guide the neural network training. The image on the top right shows the final time field from start to goal generated by the trained model.

## IV. EVALUATION

In this section, we evaluate our method through 6-DOF UR5e robot manipulator planning in two complex cabinet environments with narrow passages. For these scenarios, we present evaluations in both simulation and the real world.

In the simulation, we directly load a cabinet mesh, whereas, for the real setup, we use Dot3D with a RealSense camera to scan and create a point cloud of an actual cabinet. To form our test set, we randomly sampled $2 \times {100}$ start and goal configuration pairs for the simulated and real-world environments.

The table in Fig. 5 compares our method, NTFields, RRT*, LazyPRM*, and RRT-Connect in both scenarios. We exclude IEF3D due to its large data generation and training times. In the table, it can be seen that our method achieves the highest success rate with the shortest execution time, demonstrating the effectiveness of our progressive learning approach in complex, narrow passage environments.

Fig. 5 shows the execution of our method (left) and RRT-Connect (right) in a challenging case in the simulated environment, and the table underneath presents the overall statistical comparison of the indicated methods on the testing dataset. In the presented scenario, the UR5e robot's end effector starts from the middle shelf of the cabinet and crosses two relatively thin obstacles to the bottom shelf of the cabinet without collision. In this particular situation, NTFields could not find a solution, whereas our method took 0.07 seconds to find a path of length 0.83 with a safety margin of 0.03, and RRT-Connect took 20.13 seconds to find a path of length 0.90 with a safety margin of 0.02. For real-world experiments, in Fig. 1, we show a challenging path in which the robot moves from its initial pose to drive its end effector deep into the cabinet.

![019640f3-2556-77b6-9b80-623f6bf0feac_3_911_149_760_318_0.jpg](images/019640f3-2556-77b6-9b80-623f6bf0feac_3_911_149_760_318_0.jpg)
| Manipulator | time (sec) | length | safe margin | success rate (%) |
| --- | --- | --- | --- | --- |
| Ours | $0.03 \pm 0.00$ | $0.43 \pm 0.10$ | $0.04 \pm 0.00$ | 92.0 |
| NTFields | $0.05 \pm 0.00$ | $0.38 \pm 0.06$ | $0.04 \pm 0.00$ | 84.5 |
| RRT* | $5.16 \pm 0.01$ | $0.52 \pm 0.36$ | $0.04 \pm 0.00$ | 67.0 |
| LazyPRM* | $2.79 \pm 0.48$ | $0.76 \pm 0.80$ | $0.04 \pm 0.00$ | 86.0 |
| RRT-Connect | $1.08 \pm 0.69$ | $1.14 \pm 0.23$ | $0.02 \pm 0.00$ | 87.5 |
Fig. 5: Our method (left) and RRT-Connect (right) in a challenging case in the simulated environment: the manipulator crosses two relatively thin obstacles to move from the middle (start) to the bottom (goal) shelf. The table shows statistical results on $2 \times {100}$ different starts and goals for two environments.

## V. DISCUSSIONS, CONCLUSIONS, AND FUTURE WORK

We propose a novel progressive learning framework to train physics-informed NMPs by solving the Eikonal equation without expert demonstrations. Our method deals with the PDE-solving challenges in physics-informed NMPs such as NTFields [3]. First, we propose a progressive speed scheduling strategy that begins by finding a simple PDE solution at constant high speed and then gradually decreases the speed near obstacles to find a new solution. Second, we propose to use the viscosity term for the Eikonal equation and convert a non-linear PDE into a semi-linear PDE, which is easier for a neural network to solve. Thus, our method solves the Eikonal equation more precisely and efficiently than prior methods, increasing the overall performance in solving motion planning problems. Additionally, thanks to our progressive learning strategy, our method requires fewer neural network parameters than NTFields, leading to computationally efficient training and planning for physics-informed NMPs. Furthermore, we also demonstrate that our method scales to multiple environments and complex scenarios, such as real-world narrow-passage planning with a 6-DOF UR5e manipulator.

Although our method can scale to multiple environments and real-world setups and outperform prior methods that rely on expert demonstration data, a few limitations, highlighted in the following, will still be the focus of our future research directions. First, our method cannot generalize to unseen environments and only scales to the given set of scenarios. Therefore, one of our future directions will be to explore novel environment encoding strategies to make physics-informed NMPs generalize to novel, never-before-seen environments. Lastly, aside from addressing these limitations, we also aim to explore novel PDE formulations to train physics-informed NMPs to solve motion planning under dynamic and manifold constraints.

## REFERENCES

[1] Michael G Crandall and Pierre-Louis Lions. Viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society, 277(1):1-42, 1983.

[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.

[3] Ruiqi Ni and Ahmed H Qureshi. NTFields: Neural time fields for physics-informed robot motion planning. In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=ApF0dmi1_9K.

[4] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems, 20, 2007.

[5] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019.

[6] Jonathan D Smith, Kamyar Azizzadenesheli, and Zachary E Ross. EikoNet: Solving the Eikonal equation with deep neural networks. IEEE Transactions on Geoscience and Remote Sensing, 59(12):10685-10696, 2020.
[7] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems, 33:7537-7547, 2020. \ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/rnmab4CQN_/Initial_manuscript_tex/Initial_manuscript.tex b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/rnmab4CQN_/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..710aa47d0e355aba68e4dea286ebcc508e71a3e4 --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/rnmab4CQN_/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,160 @@ +§ PROGRESSIVE LEARNING FOR PHYSICS-INFORMED NEURAL MOTION PLANNING

Ruiqi Ni and Ahmed H. Qureshi

Department of Computer Science, Purdue University

{ni117, ahqureshi}@purdue.edu

[graphics]

Fig. 1: Physics-informed neural motion planning of a 6-DOF robot manipulator in a real-world narrow passage environment. The images from left to right show the robot's motion sequence from its start to the desired goal configuration. In this case, the proposed approach took 0.05 seconds, whereas LazyPRM* took 2.79 seconds to find a path, making our method at least ${50} \times$ faster than a traditional approach.

Abstract-Neural motion planners (NMPs) demonstrate fast computational speed in finding path solutions but require a huge amount of expert trajectories for learning, thus adding a significant training computational load. In contrast, recent advancements have also led to a physics-informed NMP approach that directly solves the Eikonal equation for motion planning and does not require expert demonstrations for learning. However, experiments show that the physics-informed NMP approach performs poorly in complex environments and lacks scalability in multiple scenarios and high-dimensional real robot settings. To overcome these limitations, this paper presents a novel and tractable Eikonal equation formulation and introduces a new progressive learning strategy to train neural networks without expert data in complex, cluttered, multiple, and high-dimensional robot motion planning scenarios. We show that our approach scales to a real robot setup in a narrow passage environment. The proposed method's videos and code implementations are available at https://github.com/ruiqini/P-NTFields.

§ I. INTRODUCTION

Robots moving in their surrounding environment must find a feasible motion trajectory, coordinating their actuators to move from their start configuration to their goal configuration while satisfying all the constraints, such as collision avoidance.

Inspired by physics-informed deep learning models [5, 6], recent development has led to a physics-informed NMP called Neural Time Fields (NTFields) [3] that require no expert training trajectories and instead directly learn to solve the Eikonal equation for motion planning. Once trained, NTFields output the speed and time fields in the given environment for the desired start and goal configuration. The time fields' gradients are then followed to retrieve a feasible path solution for the underlying MP problem.
Although NTFields find path solutions extremely fast and require no expert data, they struggle in complex environments and do not scale well to multiple scenarios and high-dimensional planning problems. These limitations are mainly due to the following two reasons. First, the Eikonal equation's solution has extremely sharp features around low-speed obstacles, making it difficult for the underlying deep-learning model to converge and perform well in complex scenarios. Second, training deep neural models to solve PDEs is inherently challenging and requires advanced learning strategies and an expressive PDE formulation with a smooth loss landscape.

Therefore, this paper addresses the limitations of NTFields and proposes a new progressive learning method, which also requires no training trajectories and scales very well to complex scenarios, including high-dimensional, real-world robot manipulator planning problems. The main contributions of the paper are summarized as follows:

 * We highlight that the Eikonal equation formulation for motion planning in NTFields can converge to incorrect local minima during training, resulting in relatively low performance and an inability to scale to multiple, complex environments.

 * We introduce a novel progressive speed scheduling strategy that iteratively guides neural model training from a constant high speed to a very low speed around obstacles in the environment, preventing incorrect local minima when training physics-informed NMPs in complex, cluttered environments.

 * We propose using the viscosity term [1] based on the Laplacian operator in the Eikonal equation formulation to transform its ill-posed, non-linear behavior into a semi-linear elliptic representation with a unique smooth solution around low-speed obstacles. Our novel formulation leads to physics-informed NMPs that are scalable to complex scenarios.

 * We also demonstrate our framework's performance using a 6 degree-of-freedom (DOF) UR5e robot in solving real-world narrow passage motion planning problems, as shown in Fig. 1.

§ II. BACKGROUND

This section formally presents the background to robot motion planning problems and their solutions through physics-informed NMPs.

§ A. ROBOT MOTION PLANNING

Let the robot's configuration and environment space be denoted as $\mathcal{Q} \subset {\mathbb{R}}^{d}$ and $\mathcal{X} \subset {\mathbb{R}}^{m}$, where $\{ m,d\} \in \mathbb{N}$ represents their dimensionality. The obstacles in the environment, denoted as ${\mathcal{X}}_{\text{obs}} \subset \mathcal{X}$, give rise to an obstacle region in the robot configuration space (c-space), defined as ${\mathcal{Q}}_{\text{obs}} \subset \mathcal{Q}$. Finally, the feasible space in the environment and c-space is represented as ${\mathcal{X}}_{\text{free}} = \mathcal{X} \smallsetminus {\mathcal{X}}_{\text{obs}}$ and ${\mathcal{Q}}_{\text{free}} = \mathcal{Q} \smallsetminus {\mathcal{Q}}_{\text{obs}}$, respectively. The objective of robot motion planning algorithms is to find a trajectory $\tau \subset {\mathcal{Q}}_{\text{free}}$ that connects the given robot start ${q}_{s} \in {\mathcal{Q}}_{\text{free}}$ and goal ${q}_{g} \in {\mathcal{Q}}_{\text{free}}$ configurations. Furthermore, additional constraints are sometimes imposed on the trajectory connecting the start and goal, such as having the shortest Euclidean distance or minimum travel time. The latter is often preferred as it allows imposing speed constraints near obstacles for robot and environment safety.
However, planning under speed constraints is computationally expensive, and existing methods rely on path-smoothing techniques when safety is desired.

§ B. PHYSICS-INFORMED MOTION PLANNING FRAMEWORK

Recent development led to a physics-informed motion planning framework called Neural Time Fields (NTFields) [3], which provides a computationally efficient and demonstration-free deep learning method for motion planning problems. It views a motion planning problem as the solution to a PDE, specifically the Eikonal equation. The Eikonal equation, a first-order non-linear PDE, allows finding the shortest trajectory between start $\left( {q}_{s}\right)$ and goal $\left( {q}_{g}\right)$ under speed constraints by relating a predefined speed model $S\left( q\right)$, evaluated at the goal configuration ${q}_{g}$, to the arrival time $T\left( {{q}_{s},{q}_{g}}\right)$ from ${q}_{s}$ to ${q}_{g}$ as follows:

$$
1/S\left( {q}_{g}\right) = \begin{Vmatrix}{{\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right) }\end{Vmatrix} \tag{1}
$$

Here, ${\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right)$ is the gradient of the arrival time function $T\left( {{q}_{s},{q}_{g}}\right)$ with respect to ${q}_{g}$. Therefore, finding a trajectory connecting the given start and goal requires solving the PDE under a predefined speed model and arrival time function. The arrival time function in NTFields is factorized as follows:

$$
T\left( {{q}_{s},{q}_{g}}\right) = \begin{Vmatrix}{{q}_{s} - {q}_{g}}\end{Vmatrix}/\tau \left( {{q}_{s},{q}_{g}}\right) \tag{2}
$$

The $\tau \left( {{q}_{s},{q}_{g}}\right)$ is the factorized time field, which is the output of NTFields' deep neural network for the given ${q}_{s}$ and ${q}_{g}$. Since the neural network in NTFields outputs the factorized time field $\tau$, the corresponding predicted speed is computed using the above equations. Furthermore, the NTFields framework determines the ground truth speed using a predefined speed function:

$$
{S}^{ * }\left( q\right) = \frac{{s}_{\text{const}}}{{d}_{\max }} \times \operatorname{clip}\left( {\mathbf{d}\left( {\mathbf{p}\left( q\right) ,{\mathcal{X}}_{\text{obs}}}\right) ,{d}_{\min },{d}_{\max }}\right) \tag{3}
$$

where $\mathbf{d}\left( {\cdot , \cdot }\right)$ is the minimal distance between the robot surface points $\mathbf{p}\left( q\right)$ at configuration $q$ and the environment obstacles ${\mathcal{X}}_{\text{obs}}$. The ${d}_{\min}$ and ${d}_{\max}$ are minimum and maximum distance thresholds, and ${s}_{\text{const}}$ is a predefined speed constant; we normalize ${s}_{\text{const}} = 1$ to represent the maximum speed in free space, and ${s}_{\min} = {s}_{\text{const}} \times {d}_{\min}/{d}_{\max}$ represents the minimum speed in the obstacle space. Finally, the NTFields neural framework is trained end-to-end using an isotropic loss function between the predicted speed $S$ and the ground truth speed ${S}^{ * }$.
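For concreteness, the following is a minimal NumPy sketch of the ground-truth speed model in Eq. 3. It is our own illustration rather than the released implementation; `robot_surface_fn`, the sampled obstacle point cloud, and the default thresholds are hypothetical stand-ins.

```python
import numpy as np

def ground_truth_speed(q, obstacle_points, robot_surface_fn,
                       d_min=0.1, d_max=1.0, s_const=1.0):
    """Eq. 3: S*(q) = (s_const / d_max) * clip(d(p(q), X_obs), d_min, d_max).

    obstacle_points : (N, m) points sampled from X_obs (assumed representation)
    robot_surface_fn: maps a configuration q to (P, m) robot surface points p(q)
    """
    p = robot_surface_fn(q)
    # minimal distance between any robot surface point and any obstacle point
    d = np.min(np.linalg.norm(p[:, None, :] - obstacle_points[None, :, :], axis=-1))
    return (s_const / d_max) * np.clip(d, d_min, d_max)
```

With this normalization, the speed is 1 whenever the robot is at least `d_max` away from obstacles and drops to `s_const * d_min / d_max` inside the clipped region, matching the minimum speed stated above.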
§ III. PROPOSED METHOD

Although NTFields demonstrate the ability to perform efficient motion planning without expert training data, they exhibit relatively low success rates in complex, cluttered environments, including high-dimensional problems, and do not scale to multiple scenarios. We observe that these limitations are mainly due to the ill-posed nature of the Eikonal equation and the fact that physics-informed loss landscapes are hard to optimize in general. To overcome these limitations, we introduce a new progressive learning algorithm comprising a novel viscosity-based Eikonal equation formulation and a progressive speed update strategy to train physics-informed NMPs in multiple, complex, high-dimensional scenarios.

§ A. VISCOSITY-BASED EIKONAL EQUATION

The exact solution of the Eikonal equation has several properties that cause neural network fitting issues. First, the solution is not differentiable at every point in space, which means a neural network cannot approximate it well, especially around the sharp features that arise in low-speed regions. Second, the gradient ${\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right)$ is not unique at these non-smooth points, which also causes fitting issues because training supervises this gradient.

To fix these problems, we propose to use a viscosity term that provides a differentiable and unique approximation of the Eikonal equation's solution. The viscosity term comes from the vanishing viscosity method [1]. It adds the Laplacian ${\Delta }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right)$ to the Eikonal equation, i.e.,

$$
1/S\left( {q}_{g}\right) = \begin{Vmatrix}{{\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right) }\end{Vmatrix} + \epsilon {\Delta }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right) , \tag{4}
$$

where $\epsilon \in \mathbb{R}$ is a scaling coefficient. The resulting system in Eq. 4 is a semi-linear elliptic PDE with a smooth and unique solution. Furthermore, the value of $\epsilon$ affects the smoothness of the predicted time fields. In Fig. 2, we compare fields with different values of $\epsilon$ against the ground truth field generated with the FMM approach; varying $\epsilon$ changes how closely the result matches the ground truth. In practice, as the coefficient $\epsilon \rightarrow 0$, the smooth and unique solution of Eq. 4 approaches the exact solution of the Eikonal equation, Eq. 1.

 < g r a p h i c s >

Fig. 2: Effect of the viscosity coefficient $\epsilon$ on the correctness of the time field results. A large value of $\epsilon$ deviates from the solution given by the expert, FMM, which solves the Eikonal equation exactly. The colorbar shows the speed fields, ranging from 0 to 1.
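To illustrate how the viscosity-augmented residual of Eq. 4 can be evaluated with automatic differentiation, here is a minimal PyTorch sketch (ours, not the authors' code); `T_fn` is a hypothetical stand-in for the network-induced arrival time of Eq. 2, and the gradient norm and Laplacian are obtained with nested calls to `torch.autograd.grad`.

```python
import torch

def viscous_eikonal_residual(T_fn, q_s, q_g, S_g, eps=0.1):
    """Residual of Eq. 4: 1/S(q_g) - (||grad_{q_g} T|| + eps * Laplacian_{q_g} T).

    T_fn : callable (q_s, q_g) -> arrival time, shape (B,)
    q_s, q_g : (B, d) configuration batches; q_g must have requires_grad=True
    S_g : (B,) ground-truth speed at q_g
    """
    T = T_fn(q_s, q_g)
    grad_T = torch.autograd.grad(T.sum(), q_g, create_graph=True)[0]  # (B, d)
    # Laplacian = sum of second partial derivatives, one pass per dimension
    lap = torch.zeros_like(T)
    for i in range(q_g.shape[-1]):
        d2T_dqi2 = torch.autograd.grad(grad_T[:, i].sum(), q_g,
                                       create_graph=True)[0][:, i]
        lap = lap + d2T_dqi2
    return 1.0 / S_g - (grad_T.norm(dim=-1) + eps * lap)
```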
§ B. PROGRESSIVE SPEED SCHEDULING

This section introduces our progressive speed scheduling approach for training physics-informed motion planners in complex environments. Physics-based loss functions are generally challenging to optimize as they depend on the gradient of the underlying neural network. In physics-informed motion planners, the optimization becomes more difficult due to low-speed conditions near obstacles, often leading to an incorrect local minimum, i.e., despite a small training loss, the neural model behaves as if low-speed obstacles do not exist in the environment. To circumvent these incorrect local minima, we observe and leverage the following two properties of the Eikonal equation to progressively guide the NN training process and capture the low-speed obstacle space for collision avoidance.

First, we notice that in a scene with constant maximum speed $\left( {S\left( q\right) = 1}\right)$, the solution of the Eikonal equation (Eq. 1), $T\left( {{q}_{s},{q}_{g}}\right)$, becomes the distance between the given start and goal, which leads to the trivial solution $\tau \left( {{q}_{s},{q}_{g}}\right) = 1$. Second, we find that the interpolation from the constant maximum speed to the low speed around obstacles is continuous, and the solutions of the Eikonal equation along those interpolations are also continuous. Based on these observations, we propose a progressive speed alteration strategy that gradually scales down the speed from a constant maximum value to a low value around obstacles using a parameter $\alpha \left( t\right) \in \left\lbrack {0,1}\right\rbrack$, i.e.,

$$
{S}_{\alpha \left( t\right) }^{ * }\left( q\right) = \left( {1 - \alpha \left( t\right) }\right) + \alpha \left( t\right) {S}^{ * }\left( q\right) , \tag{5}
$$

where $t \in \mathbb{N}$ denotes the training epoch. Therefore, when $\alpha \left( t\right) = 0$, the scene has a constant maximum speed and the Eikonal equation's solution is trivial; when $\alpha \left( t\right) = 1$, the scene has low speed around obstacles. Fig. 3 shows the gradual progression of the speed and time fields as $\alpha$ scales linearly from 0 to 1; both fields change continuously as $\alpha$ changes.

 < g r a p h i c s >

Fig. 3: Progressively decreasing the speed around obstacles using the parameter $\alpha$ leads to a continuous interpolation of the speed and time fields in the given environment. The colorbar shows the speed fields, ranging from 0 to 1.

To train the physics-informed motion planner, we start with a low value of $\alpha \left( t\right)$ and let the NN fit the constant-speed trivial solution, which it can do easily. Next, we progressively interpolate the field from the constant maximum speed to the low speed by gradually increasing $\alpha \left( t\right)$ over the training epochs; progressively decreasing the obstacle speed ${S}^{ * }\left( q\right)$ guides the network to learn the interpolating lower-speed fields. Furthermore, we observe that while the speed fields change linearly with $\alpha \left( t\right)$, the resulting time fields change more aggressively. Thus, we also reduce the rate of change of $\alpha \left( t\right)$ as the training epochs increase.
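A minimal sketch of this schedule is given below. The square-root easing and the epoch budget are our assumptions for illustration; the text above only specifies that $\alpha(t)$ grows from 0 to 1 with a decreasing rate of change.

```python
import numpy as np

def alpha_schedule(t, t_total=10000):
    """Assumed schedule: alpha grows from 0 to 1 with a decreasing rate of
    change (square-root easing), as the scheduling discussion above suggests."""
    return float(np.sqrt(min(t / t_total, 1.0)))

def scheduled_speed(S_star, alpha):
    """Eq. 5: interpolate from constant max speed (1) toward S*(q)."""
    return (1.0 - alpha) + alpha * S_star
```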
§ C. NEURAL ARCHITECTURE

This section describes our neural framework, shown in Fig. 4, for generating the speed and time fields used to solve robot motion planning problems. Our framework comprises the following modules. Given the robot's initial $\left( {q}_{s}\right)$ and target $\left( {q}_{g}\right)$ configurations, we use random Fourier features $\gamma$ [7, 4] to obtain high-frequency robot configuration embeddings. These features are further processed into a latent embedding by a c-space encoder $f\left( \cdot \right)$, which is a ResNet-style multi-layer perceptron [2]. To combine the features $f\left( {\gamma \left( {q}_{s}\right) }\right)$ and $f\left( {\gamma \left( {q}_{g}\right) }\right)$, we use the non-linear symmetric operator $\otimes$ from the NTFields method [3]. Our time field generator network $g$ is a ResNet-style multi-layer perceptron that takes the encoding $f\left( {\gamma \left( {q}_{s}\right) }\right) \otimes f\left( {\gamma \left( {q}_{g}\right) }\right)$ and outputs the factorized time field $\tau \left( {{q}_{s},{q}_{g}}\right) = g\left( {f\left( {\gamma \left( {q}_{s}\right) }\right) \otimes f\left( {\gamma \left( {q}_{g}\right) }\right) }\right)$. Given $\tau \left( {{q}_{s},{q}_{g}}\right)$, we compute its gradient and Laplacian to determine the predicted speeds $S\left( {q}_{s}\right)$ and $S\left( {q}_{g}\right)$. Finally, we propose a smooth isotropic objective function [6] to train our framework:

$$
L\left( {{S}_{\alpha }^{ * }\left( q\right) ,S\left( q\right) }\right) = \frac{{S}_{\alpha }^{ * }\left( {q}_{s}\right) }{S\left( {q}_{s}\right) } + \frac{S\left( {q}_{s}\right) }{{S}_{\alpha }^{ * }\left( {q}_{s}\right) } + \frac{{S}_{\alpha }^{ * }\left( {q}_{g}\right) }{S\left( {q}_{g}\right) } + \frac{S\left( {q}_{g}\right) }{{S}_{\alpha }^{ * }\left( {q}_{g}\right) } - 4 \tag{6}
$$

§ D. PLANNING PIPELINE

Once trained, we use an execution pipeline similar to that of the NTFields method. First, we predict $\tau \left( {{q}_{s},{q}_{g}}\right)$ for the given start ${q}_{s}$ and goal ${q}_{g}$. Next, the factorized time $\tau$ parameterizes Eqs. 2 and 1 for computing the time field $T\left( {{q}_{s},{q}_{g}}\right)$ and the speed fields $S\left( {q}_{s}\right) ,S\left( {q}_{g}\right)$, respectively. Finally, the path solution is determined in a bidirectional manner by iteratively updating the start and goal configurations as follows:

$$
{q}_{s} \leftarrow {q}_{s} - \beta {S}^{2}\left( {q}_{s}\right) {\nabla }_{{q}_{s}}T\left( {{q}_{s},{q}_{g}}\right) \tag{7}
$$

$$
{q}_{g} \leftarrow {q}_{g} - \beta {S}^{2}\left( {q}_{g}\right) {\nabla }_{{q}_{g}}T\left( {{q}_{s},{q}_{g}}\right)
$$

 < g r a p h i c s >

Fig. 4: The neural architecture comprises the Fourier-based c-space encoder, the symmetric operator, and the time field generator. The three images on the top left show how we progressively decrease the speed around a bunny-shaped obstacle to guide neural network training. The image on the top right shows the final time field from start to goal generated by the trained model.

The parameter $\beta \in \mathbb{R}$ is a predefined step size. Furthermore, at each planning iteration, the start and goal configurations are updated using these gradients to march toward each other until $\begin{Vmatrix}{{q}_{s} - {q}_{g}}\end{Vmatrix} < {d}_{g}$, where ${d}_{g} \in \mathbb{R}$ is a predefined goal-reaching threshold.
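For illustration, a short PyTorch sketch of the training objective (Eq. 6) and the bidirectional marching rule (Eq. 7) follows. It is a simplified sketch under our own naming; `field_fn` is a hypothetical helper returning the two gradients and speeds, not part of the paper's released code.

```python
import torch

def isotropic_loss(S_pred_s, S_pred_g, S_star_s, S_star_g):
    """Eq. 6: symmetric speed-ratio loss; zero iff predictions match targets."""
    return (S_star_s / S_pred_s + S_pred_s / S_star_s
            + S_star_g / S_pred_g + S_pred_g / S_star_g - 4.0).mean()

def bidirectional_march(q_s, q_g, field_fn, beta=0.05, d_g=0.05, max_iters=500):
    """Eq. 7: march start and goal toward each other along -grad T, scaled by S^2.

    field_fn : (q_s, q_g) -> (grad_s, grad_g, S_s, S_g)
    """
    path_s, path_g = [q_s], [q_g]
    for _ in range(max_iters):
        grad_s, grad_g, S_s, S_g = field_fn(q_s, q_g)
        q_s = q_s - beta * S_s ** 2 * grad_s
        q_g = q_g - beta * S_g ** 2 * grad_g
        path_s.append(q_s)
        path_g.append(q_g)
        if torch.norm(q_s - q_g) < d_g:   # stop once the two fronts meet
            break
    return path_s + path_g[::-1]          # concatenate the two half-paths
```

Scaling the step by $S^2$ naturally slows the march near obstacles, where the predicted speed is low, which is why no separate safety smoothing step is needed.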
§ IV. EVALUATION

In this section, we evaluate our method on 6-DOF UR5e robot manipulator planning in two complex cabinet environments with narrow passages. For these scenarios, we present evaluations in both simulation and the real world.

In the simulation, we directly load a cabinet mesh, whereas for the real setup, we use Dot3D with a RealSense camera to scan and create a point cloud of an actual cabinet. To form our test set, we randomly sampled $2 \times {100}$ start and goal configuration pairs, 100 for each of the simulated and real-world environments.

The table in Fig. 5 compares our method, NTFields, RRT*, LazyPRM*, and RRT-Connect in both scenarios. We exclude IEF3D due to its large data-generation and training times. As the table shows, our method achieves the highest success rate with the shortest execution time, demonstrating the effectiveness of our progressive learning approach in complex, narrow-passage environments.

Fig. 5 shows the execution of our method (left) and RRT-Connect (right) in a challenging case in the simulated environment, and the table underneath presents the overall statistical comparison of the indicated methods on the testing dataset. In the presented scenario, the UR5e robot's end effector starts from the middle shelf of the cabinet and crosses two relatively thin obstacles to the bottom shelf without collision. In this particular situation, NTFields could not find a solution, whereas our method took 0.07 seconds to find a path of length 0.83 with a safety margin of 0.03, and RRT-Connect took 20.13 seconds to find a path of length 0.90 with a safety margin of 0.02. For the real-world experiments, Fig. 1 shows a challenging path in which the robot moves from its initial pose to reach its end effector deep into the cabinet.

 < g r a p h i c s >

Manipulator | time (sec) | length | safe margin | sr (%)
Ours | 0.03 ± 0.00 | 0.43 ± 0.10 | 0.04 ± 0.00 | 92.0
NTFields | 0.05 ± 0.00 | 0.38 ± 0.06 | 0.04 ± 0.00 | 84.5
RRT* | 5.16 ± 0.01 | 0.52 ± 0.36 | 0.04 ± 0.00 | 67.0
LazyPRM* | 2.79 ± 0.48 | 0.76 ± 0.80 | 0.04 ± 0.00 | 86.0
RRT-Connect | 1.08 ± 0.69 | 1.14 ± 0.23 | 0.02 ± 0.00 | 87.5

Fig. 5: Our method (left) and RRT-Connect (right) in a challenging case in the simulated environment: the manipulator crosses two relatively thin obstacles to move from the middle (start) to the bottom (goal) shelf. The table shows statistical results on $2 \times {100}$ different starts and goals for the two environments.

§ V. DISCUSSIONS, CONCLUSIONS, AND FUTURE WORK

We propose a novel progressive learning framework that trains physics-informed NMPs by solving the Eikonal equation without expert demonstrations. Our method addresses the PDE-solving challenges of physics-informed NMPs such as NTFields [3]. First, we propose a progressive speed scheduling strategy that begins by finding a simple PDE solution at a constant high speed and then gradually decreases the speed near obstacles to find a new solution. Second, we propose using a viscosity term in the Eikonal equation, converting a nonlinear PDE into a semi-linear PDE that is easier for a neural network to solve. As a result, our method solves the Eikonal equation more precisely and efficiently than prior methods and improves overall performance on motion planning problems. Additionally, thanks to our progressive learning strategy, our method requires fewer neural network parameters than NTFields, leading to computationally efficient training and planning of physics-informed NMPs. Furthermore, we demonstrate that our method scales to multiple environments and complex scenarios, such as real-world narrow-passage planning with a 6-DOF UR5e manipulator.

Although our method scales to multiple environments and real-world setups and outperforms prior methods that rely on expert demonstration data, a few limitations remain and will be the focus of our future research. First, our method cannot generalize to unseen environments and only scales to a given set of scenarios; one of our future directions will therefore be to explore novel environment encoding strategies that let physics-informed NMPs generalize to novel, never-before-seen environments. Second, aside from addressing these limitations, we also aim to explore novel PDE formulations for training physics-informed NMPs to solve motion planning under dynamic and manifold constraints.
\ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/yY5avw0u6G/Initial_manuscript_md/Initial_manuscript.md b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/yY5avw0u6G/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..0d2e7535dcecea02a347a1c310134efee44cdd8c --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/yY5avw0u6G/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,381 @@

# Continual Reinforcement Learning with Group Symmetries

Shiqi Liu*, Mengdi Xu*, Peide Huang, Xilun Zhang, Yongkang Liu ${}^{ \dagger }$, Kentaro Oguchi ${}^{ \dagger }$ and Ding Zhao Department of Mechanical Engineering, Carnegie Mellon University

${}^{ \dagger }$ Toyota Motor North America R&D

Abstract-Continual reinforcement learning aims to sequentially learn a variety of tasks, retaining the ability to perform previously encountered tasks while simultaneously developing new policies for novel tasks. However, current continual RL approaches overlook the fact that certain tasks are identical under basic group operations like rotations or translations, especially with visual inputs. As a result, they unnecessarily create and train a new policy for each similar task, leading to poor sample efficiency and weak generalization capability. To address this, we introduce a unique Continual Reinforcement Learning method that recognizes Group Symmetries, cultivating a policy for each group of equivalent tasks rather than for each individual task. We introduce a PPO-based RL algorithm with an invariant feature extractor and a novel task grouping mechanism that relies on invariant features. We evaluate our algorithm on sequences of robotic manipulation tasks, where each group is associated with different objects. We show that our algorithm assigns tasks to different groups with high accuracy and outperforms baselines in terms of generalization capability by a large margin. Furthermore, we transferred the policy, initially trained in a simulated environment, to a real-world robotic arm without fine-tuning. The results indicate that the agent is capable of adeptly solving different tasks.

## I. INTRODUCTION

Quickly adapting to unseen tasks has been a key desideratum in realistic reinforcement learning (RL) settings [8, 15, 14]. RL algorithms are generally trained in simulated environments and then deployed in the real world. However, pre-trained RL agents will likely encounter new tasks during their deployment. Blindly reusing policies obtained at training time could cause significant performance drops and even lead to catastrophic failures in safety-critical applications [31]. Motivated by humans' strong learning capability, in this work, we aim to enhance RL algorithms' quick adaptation capability in the continual learning setting.

Continual RL (also known as lifelong RL) focuses on learning a stream of tasks sequentially while generating task-specific policies for the present task and retaining the capability to solve seen tasks [14, 12, 17, 26]. Continual RL assumes non-stationary environments with changing transitions or reward functions, which result in changing tasks. The task delineations are provided by the environment or automatically detected via unsupervised learning algorithms [27]. However, the task taxonomies in existing works ignore the geometric information embedded in task representations, which naturally emerges in practical applications.
Consider an example from the field of robotic manipulation, as shown in Figure 1. Here, a robotic arm is tasked to press a button located at two different locations. The robot utilizes bird's-eye view images as inputs. In this scenario, it is possible that the controller receives reflection-symmetric bird's-eye view images as inputs.

Fig. 1: This example illustrates how group symmetric information enhances adaptability. The robotic arm is instructed to press a button situated at two distinct locations, using bird's-eye view images as inputs. In this scenario, considering the symmetry of the button's location around the robot's position, the optimal control policies should be equivalent but mirrored.

However, with bird's-eye view images as inputs, prior continual RL works would treat each rotated configuration as a new task and learn it from scratch. Such a learning process impedes positive interference among tasks and limits the agent's adaptivity. Hence, we aim to leverage the geometric similarity among tasks in the continual RL setting to adapt quickly to unseen but equivalent tasks.

In this work, we propose a continual RL algorithm that encodes group symmetries in the state space to achieve strong generalization capability and solve equivalent tasks in a zero-shot manner. We assume that there exist tasks that are equivalent under group operations and that all tasks equivalent under the same group operation form a task group. Furthermore, we adopt a realistic setting where the task and group delineations are unknown ahead of time. We state our main contributions as follows:

1) We propose the Continual Reinforcement Learning with Group Symmetries framework, which aims to improve continual RL's adaptivity by modeling task similarity based on group symmetries.

2) We introduce a novel PPO-based algorithm with a rotation-invariant feature extractor. This flexible model architecture facilitates strong generalization to tasks with equivalent state spaces.

3) We propose a new unsupervised task grouping mechanism that automatically detects group boundaries based on invariant features. It automatically grows a new policy when detecting a new group, handling streaming groups in the continual RL setting.

4) We test our proposed algorithm on robotic manipulation tasks in environments built on the Meta-World benchmark [29]. We show that (a) the group symmetric information from the equivariant feature extractor promotes the algorithm's adaptivity by maximizing the positive interference within each task group, and (b) the task grouping mechanism recovers the ground truth group indexes, which helps minimize the negative interference among different groups.

## II. RELATED WORK

Geometric deep learning. Geometric deep learning aims to transfer the high performance of traditional deep learning to irregular input data such as graphs and meshes [2, 3, 16, 13, 6, 1]. For mesh inputs, spline-based Convolutional Neural Networks (CNNs) deal with geometric inputs by aggregating features purely in the spatial domain [7]. For image inputs, E(2)-equivariant convolution [25] introduces CNNs that are equivariant to rotation and reflection for planar images, which we use as the feature extractor in our neural network architecture design. Recent improvements on the Transformer architecture also focus on equivariance to Lie group transformations [9, 10].

MDP homomorphism.
MDP homomorphic networks [22] are neural networks that are equivariant under symmetries in the state-action space of an MDP. By exploiting prior knowledge of symmetries, MDP homomorphic networks impose an equivariance constraint on the policy and value networks. As a result, the solution space is reduced and the RL agent is able to achieve higher sample efficiency. This single-agent MDP homomorphic network was later extended to the multi-agent RL domain by factorizing global symmetries into local symmetries in the presence of multiple cooperative agents [23]. SO(2)-Equivariant RL [24] extends the discrete symmetry group to the group of continuous planar rotations, $\mathrm{SO}\left( 2\right)$, to boost performance in a series of robotic manipulation tasks. In our work, we likewise exploit the symmetric properties of states to improve learning efficiency and adaptability.

Continual RL in non-stationary environments. Continual RL, a.k.a. lifelong RL, aims to train RL agents that are adaptive to non-stationary environments with varying transition dynamics or reward functions [21, 20, 5, 28]. In hierarchical model-based RL, [18] infers relationships between tasks automatically from the collected data and maintains a hierarchical latent variable model with Gaussian Processes to model dynamics. [27] proposes a Dirichlet-Process-Gaussian-Process model to maintain a hierarchical structure of different dynamics models. Similarly, [17] uses the Hierarchical Dirichlet Process to model abrupt changes in dynamics during an episode. Instead of keeping an explicit structure like the aforementioned works, [14] uses model-agnostic meta-learning to train a dynamics model prior such that it can adapt quickly to different environments. In model-free RL, Lifelong Latent Actor-Critic [26] leverages latent representation learning of environments using data from the replay buffer, which is then used in off-policy learning. Online System Identification (OSI) [30] predicts the dynamics model parameters, which are then fed into the controller along with the system states to adapt to new environments. Different from the aforementioned works focusing on algorithms, Benna-Fusi-RL [11] equips the RL agent with a synaptic model that reduces catastrophic forgetting. In our work, instead of modeling dynamics or latent representations, we develop an ensemble of policy networks to handle non-stationary environments and unseen tasks.

## III. PRELIMINARY

Markov decision process. We consider a Markov decision process (MDP) as a 5-tuple $\left( {\mathcal{S},\mathcal{A}, T, R,\gamma }\right)$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, $T : \mathcal{S} \times \mathcal{A} \rightarrow \Delta \left( \mathcal{S}\right)$ is the transition probability, $R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function, and $\gamma$ is the discount factor.

The goal of RL is to find an optimal policy ${\pi }_{\theta } : \mathcal{S} \times \mathcal{A} \rightarrow \left\lbrack {0,1}\right\rbrack$ parameterized by $\theta$ that maximizes the expected return ${\mathbb{E}}_{\tau \sim {\pi }_{\theta }}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{{H - 1}}{\gamma }^{t}r\left( {{s}_{t},{a}_{t}}\right) }\right\rbrack$, where $H$ is the episode length.
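As a small worked example of the return objective above, the following Python snippet computes the inner discounted sum for one episode; the reward list and discount value are illustrative.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Inner sum of the RL objective: sum_{t=0}^{H-1} gamma^t * r(s_t, a_t)."""
    rewards = np.asarray(rewards, dtype=float)
    return float(np.sum(gamma ** np.arange(len(rewards)) * rewards))

# e.g., discounted_return([1.0, 0.0, 2.0], gamma=0.9) == 1.0 + 0.9**2 * 2.0
```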
Invariance and equivariance. Let $G$ be a mathematical group and let $f : \mathcal{X} \rightarrow \mathcal{Y}$ be a mapping. If a transformation ${L}_{g} : \mathcal{X} \rightarrow \mathcal{X}$ satisfies

$$
f\left( x\right) = f\left( {{L}_{g}\left\lbrack x\right\rbrack }\right) ,\forall g \in G, x \in \mathcal{X}, \tag{1}
$$

then we say $f$ is invariant or symmetric to ${L}_{g}$. Equivariance is closely related to invariance. If we can find a second transformation ${K}_{g} : \mathcal{Y} \rightarrow \mathcal{Y}$ that fulfills

$$
{K}_{g}\left\lbrack {f\left( x\right) }\right\rbrack = f\left( {{L}_{g}\left\lbrack x\right\rbrack }\right) ,\forall g \in G, x \in \mathcal{X}, \tag{2}
$$

then we say $f$ is equivariant to the transformation ${L}_{g}$. It is worth noting that invariance is the special case of equivariance in which ${K}_{g}$ is the identity.

MDP with group symmetries. Symmetries can be identified between MDPs. For the example in Figure 1, rotational symmetries can be identified between two MDPs. In an MDP with symmetries, we can identify at least one mathematical group $G$ of transformations ${L}_{g} : \mathcal{S} \rightarrow \mathcal{S}$ and a state-dependent action transformation ${K}_{g}^{s} : \mathcal{A} \rightarrow \mathcal{A}$ such that Equations (3) and (4) hold for all $g \in G$, $s,{s}^{\prime } \in \mathcal{S}$, $a \in \mathcal{A}$. Formally,

$$
R\left( {s, a}\right) = R\left( {{L}_{g}\left\lbrack s\right\rbrack ,{K}_{g}^{s}\left\lbrack a\right\rbrack }\right) , \tag{3}
$$

$$
T\left( {s, a,{s}^{\prime }}\right) = T\left( {{L}_{g}\left\lbrack s\right\rbrack ,{K}_{g}^{s}\left\lbrack a\right\rbrack ,{L}_{g}\left\lbrack {s}^{\prime }\right\rbrack }\right) . \tag{4}
$$

Group-structured MDP homomorphisms. For an MDP with group symmetries, every state-action pair $(s, a)$ can be mapped to an equivalence class $\left( {\sigma \left( s\right) ,{\alpha }_{s}\left( a\right) }\right)$. This means all transformations of a state-action pair $(s, a)$ are mapped to the same abstract state-action pair. Formally, for all $g \in G$, $s \in \mathcal{S}$, $a \in \mathcal{A}$,

$$
\left( {\sigma \left( s\right) ,{\alpha }_{s}\left( a\right) }\right) = \left( {\sigma \left( {{L}_{g}\left\lbrack s\right\rbrack }\right) ,{\alpha }_{{L}_{g}\left\lbrack s\right\rbrack }\left( {{K}_{g}^{s}\left\lbrack a\right\rbrack }\right) }\right) . \tag{5}
$$

State-action pairs that are mapped to the same abstract state share the same optimal Q-value and optimal value function. Thus we can optimize the policy $\bar{\pi }\left( {\bar{a} \mid \sigma \left( s\right) }\right)$ under the abstracted MDP and then map the policy to the original MDP using a procedure called lifting [23]:

$$
{\pi }^{ \uparrow }\left( {a \mid s}\right) \triangleq \frac{\bar{\pi }\left( {\bar{a} \mid \sigma \left( s\right) }\right) }{\left| \left\{ a \in {\alpha }_{s}^{-1}\left( \bar{a}\right) \right\} \right| },\;\forall s \in \mathcal{S}, a \in {\alpha }_{s}^{-1}\left( \bar{a}\right) . \tag{6}
$$
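To make Equations (1) and (2) concrete, the following toy Python check (ours, purely illustrative) uses the group of 90-degree image rotations: an elementwise map commutes with rotation, so it is equivariant with ${K}_{g} = {L}_{g}$, while global max pooling discards orientation and is therefore invariant.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))            # a toy image state

L_g = lambda img: np.rot90(img)        # one group element of C4 (90-degree rotation)

f_equi = np.tanh                       # elementwise map: equivariant, K_g = L_g
f_inv = lambda img: img.max()          # global pooling: invariant

assert np.allclose(L_g(f_equi(x)), f_equi(L_g(x)))  # Eq. (2): K_g[f(x)] = f(L_g[x])
assert np.isclose(f_inv(x), f_inv(L_g(x)))          # Eq. (1): f(x) = f(L_g[x])
```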
Rotation-equivariant convolutional layer. Let $G$ be a rotation group. A rotation-equivariant convolutional layer is a function $f : {\rho }_{\text{in}} \rightarrow {\rho }_{\text{out}}$ that is equivariant to the rotation transformations in $G$. Here we use the most general form of an equivariant convolutional layer that preserves rotational equivariance [25]. The layer consists of $G$-steerable kernels $k : {\mathbb{R}}^{2} \rightarrow {\mathbb{R}}^{{c}_{\text{in}} \times {c}_{\text{out}}}$ that satisfy Equation (7),

$$
k\left( {gx}\right) = {\rho }_{\text{out}}\left( g\right) k\left( x\right) {\rho }_{\text{in}}\left( {g}^{-1}\right) \;\forall g \in G, x \in {\mathbb{R}}^{2}. \tag{7}
$$

A rotation-equivariant convolutional neural network can be constructed by stacking multiple rotation-equivariant convolutional layers, since the composition of equivariant maps is also equivariant.

## IV. CONTINUAL RL WITH GROUP SYMMETRIES

In this section, we propose an online continual RL method with group symmetries. We follow the continual RL setting, assuming that a new task group may emerge at any timestep. We further adopt a realistic setting where the total number of different groups is unknown. In the following subsections, we first give an overview of our proposed algorithm in Section IV-A. We then introduce its two key components: the equivariant policy network architecture in Section IV-B and the dynamic policy assignment mechanism in Section IV-D.

## A. Algorithm Overview

Our algorithm learns a collection of policies $\Pi$ where each policy $\pi \in \Pi$ independently handles one group of tasks. The algorithm begins by rolling out $n$ episodes from the environment under the current policy. Based on the roll-out data, the algorithm proceeds to either (a) create a new policy for an unseen group and add it to the policy collection, or (b) recall an existing policy from the collection if the group has been previously encountered. To accomplish this, we select the first $k$ observations of each episode and put them into a set denoted as $\mathcal{O}$. These initial frames are chosen because they are less affected by the policy and are therefore more valuable for group identification.

The $i$-th policy ${\pi }_{i}$ in the policy collection $\Pi$ is associated with a state buffer ${\mathcal{B}}_{i}$ that stores the initial frames of the group it governs. The policy is also capable of extracting invariant latent features from observations using its feature extractor ${h}_{i}$. Hence, we can obtain the latent features of the roll-out data, denoted as ${q}_{i,\mathcal{O}} = {h}_{i}\left( \mathcal{O}\right)$, and the features of its observation buffer, denoted as ${q}_{i,{\mathcal{B}}_{i}} = {h}_{i}\left( {\mathcal{B}}_{i}\right)$. Subsequently, we compute the Wasserstein distance between these two distributions, ${d}_{i} = W\left( {{q}_{i,\mathcal{O}},{q}_{i,{\mathcal{B}}_{i}}}\right)$. A smaller ${d}_{i}$ suggests that the observations in $\mathcal{O}$ are more likely associated with policy ${\pi }_{i}$. By applying the same process to each policy in $\Pi$, we compile a list of distances ${d}_{1},{d}_{2},\ldots ,{d}_{\left| \Pi \right| }$. If all distances surpass a predefined threshold ${d}_{\epsilon }$, a new policy is created and appended to the policy collection $\Pi$. Otherwise, we designate as the current policy $\pi$ the one corresponding to the smallest ${d}_{i}$. Finally, we update the observation buffer $\mathcal{B}$ of the current policy: $\mathcal{B} \leftarrow \mathcal{B} \cup \mathcal{O}$.
Algorithm 1 Continual RL with Group Symmetries

---

Input: Distance threshold ${d}_{\epsilon }$, initial frame number $k$

Output: Collection of policies $\Pi$

Initialization: $\Pi \leftarrow \varnothing$

 while tasks remain do

  $\mathcal{O} \leftarrow \varnothing$

  for $i = 1,2,\ldots , N$ do

   $j \leftarrow 0$

   while episode not finished do

    ${a}_{t} \sim \pi \left( {{a}_{t} \mid {s}_{t}}\right)$

    ${s}_{t + 1} \sim p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right)$

    if $j < k$ then $\mathcal{O} \leftarrow \mathcal{O} \cup \left\{ {s}_{t}\right\}$ end if

    $j \leftarrow j + 1$

   end while

  end for

  $\mathcal{D} \leftarrow \varnothing$

  for all ${\pi }_{i} \in \Pi$ do

   ${q}_{i,\mathcal{O}} = {h}_{i}\left( \mathcal{O}\right)$; ${q}_{i,{\mathcal{B}}_{i}} = {h}_{i}\left( {\mathcal{B}}_{i}\right)$

   ${d}_{i} = W\left( {{q}_{i,\mathcal{O}},{q}_{i,{\mathcal{B}}_{i}}}\right)$; $\mathcal{D} \leftarrow \mathcal{D} \cup \left\{ {d}_{i}\right\}$

  end for

  $j \leftarrow \arg \min \mathcal{D}$

  if $\Pi = \varnothing$ or ${d}_{j} > {d}_{\epsilon }$ then

   Initialize a new policy ${\pi }_{\theta }$ with an empty observation buffer $\mathcal{B}$

   $\pi \leftarrow {\pi }_{\theta }$; $\Pi \leftarrow \Pi \cup \left\{ {\pi }_{\theta }\right\}$

  else

   $\pi \leftarrow {\pi }_{j}$; $\mathcal{B} \leftarrow {\mathcal{B}}_{j}$

  end if

  Update $\pi$ to maximize the expected return

  $\mathcal{B} \leftarrow \mathcal{B} \cup \mathcal{O}$

 end while

---

## B. Policy Network Architecture

We adopt Proximal Policy Optimization (PPO) [19] as the RL algorithm for each policy in our collection. PPO comprises two networks: a policy network that produces actions, and a value network that approximates values.

For an MDP with group symmetries, the optimal value function is group invariant, and the optimal policy is group equivariant [24]. To achieve this, the policy and value networks employ a shared equivariant feature extractor to extract equivariant features from observations. As illustrated in Figure 2, in the value network, these extracted features are transformed into invariant features through a group pooling layer and then fed into a fully connected network to generate the output values. In the policy network, the features are instead input to an equivariant network composed of multiple equivariant MLP layers.

## C. Equivariant Feature Extractor

The equivariant feature extractor comprises two parts: an equivariant convolutional network that accepts images as inputs and an equivariant fully connected network that accepts vectors. The outputs of these two networks are concatenated, yielding the final equivariant features.

The equivariant convolutional network consists of five equivariant convolutional layers [25], which convert an image from a trivial representation into a regular representation. The equivariant fully connected network consists of a single equivariant fully connected layer [4] that transforms a vector, which combines different representations, into a regular representation.

## D. Unsupervised Dynamic Policy Assignment

Detecting task boundaries and developing skills to solve each task are still open problems. Existing methods detect task boundaries based on model prediction errors in model-based RL settings [27], task performance drops in model-free RL settings, or policy reconstruction errors in imitation learning.
In this work, we instead propose to detect groups of tasks, facilitating knowledge transfer between tasks within each group and discouraging negative policy interference across different groups.

Assignments based on invariant features. In contrast to prior works that calculate distances in the state space, our method determines policy assignments based on distances in the invariant feature space. Specifically, we compute the Wasserstein distance between the invariant features derived from $\mathcal{O}$ and from the buffer $\mathcal{B}$ of each policy in the collection. The invariant features are obtained from the equivariant feature extractor via a group pooling operation.

Let $\mathbf{X}$ be a matrix constructed from the invariant features extracted from the state buffer $\mathcal{B}$ of size $n$:

$$
\mathbf{X} = {\left( {X}_{1},{X}_{2},\ldots ,{X}_{n}\right) }^{\mathrm{T}}, \tag{8}
$$

$$
{X}_{i} = h\left( {s}_{i}\right) , i \in \left\lbrack n\right\rbrack ,{s}_{i} \in \mathcal{B}.
$$

We use the Earth Mover's Distance (EMD) to measure the distance between two empirical distributions $\mathbf{X}$ and $\mathbf{Y}$ with $n$ and $m$ features, respectively. More specifically,

$$
{EMD}\left( {\mathbf{X},\mathbf{Y}}\right) = \mathop{\min }\limits_{\gamma }\;\langle \gamma ,\mathbf{M}{\rangle }_{F} \tag{9}
$$

$$
\text{s.t.}\gamma \mathbf{1} = \mathbf{a},{\gamma }^{T}\mathbf{1} = \mathbf{b},\gamma \geq 0
$$

where $\mathbf{M}$ is the metric cost matrix with ${\mathbf{M}}_{i, j} = {\begin{Vmatrix}{X}_{i} - {Y}_{j}\end{Vmatrix}}_{2}$, $\mathbf{a} = \left\lbrack {1/n,1/n,\ldots ,1/n}\right\rbrack$, and $\mathbf{b} = \left\lbrack {1/m,1/m,\ldots ,1/m}\right\rbrack$.

Fig. 2: Equivariant policy network architecture; ReLU nonlinearities are omitted in the figure. A layer with the suffix R indicates that the layer output is in the regular representation; the suffix T indicates the trivial representation; the suffix 'mix' indicates that the layer output combines different representations.
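As an illustration, the EMD of Eq. (9) with uniform weights can be computed with an off-the-shelf optimal transport solver. The sketch below assumes the third-party POT library (`ot`) and is not the paper's implementation; `assign_policy` mirrors the thresholding step of Algorithm 1.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed dependency)

def feature_emd(X, Y):
    """Eq. (9): EMD between two empirical feature sets X (n, d) and Y (m, d)."""
    n, m = len(X), len(Y)
    a = np.full(n, 1.0 / n)                  # uniform weights a
    b = np.full(m, 1.0 / m)                  # uniform weights b
    M = ot.dist(X, Y, metric='euclidean')    # cost matrix M_ij = ||X_i - Y_j||_2
    return ot.emd2(a, b, M)                  # optimal transport cost <gamma*, M>_F

def assign_policy(dists, d_eps):
    """Pick the closest policy, or return None to signal growing a new one."""
    j = int(np.argmin(dists))
    return None if dists[j] > d_eps else j
```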
## V. EXPERIMENTAL SETUP

In this section, we test the proposed method on the Meta-World benchmark [29] and evaluate its performance. We investigate whether our method can (i) assign correct task group indices or expand the policy collection, (ii) recall a policy when facing a previously seen environment and automatically initialize a new policy when encountering an unseen one, and (iii) achieve equivalent or better performance compared to baseline methods.

## A. Baselines

We compare our method, termed M-Equi, with three baselines detailed as follows:

1) Multiple policies with ground truth policy indexes (M-Equi-gt): M-Equi-gt uses ground truth group labels to assign policies to different groups. With this additional group index information, M-Equi-gt is expected to have oracle performance. M-Equi-gt helps ablate the performance of our proposed policy assignment mechanism.

2) A single policy with an equivariant feature extractor (S-Equi): S-Equi uses a single policy for all groups with the same equivariant feature extractor as our method. S-Equi helps reveal the benefit of maintaining multiple policies.

3) A single policy with a CNN-based feature extractor (S-CNN): S-CNN is similar to S-Equi but uses a vanilla CNN block as the feature extractor. The vanilla CNN shares a similar network structure with our proposed invariant feature extractor in Figure 2; the only difference is that the rotation-equivariant convolutional layers are replaced with vanilla convolutional layers. S-CNN helps reveal the benefit of using invariant feature extractors.

## B. Environment

Simulation Setup. Our robot manipulation scenarios consist of four groups of tasks, each containing four tasks; rotation and reflection symmetries relate any two tasks within the same group. The robot is required to complete a sequence of groups, adhering to the standard continual RL setting. We utilize the Meta-World benchmark [29] as our simulation platform. Meta-World features a variety of table-top manipulation tasks that require interaction with diverse objects using a Sawyer robot. We chose four groups of tasks, shown in Figure 6, for our study: 'Reach', 'Button Press', 'Drawer Close', and 'Plate Slide'.

1) Reach. In the 'Reach' group, the robot is required to move its gripper towards a selected target above the table. The goal location is symmetrically arranged around the center of the table.

2) Button Press. In the 'Button Press' group, the robot is required to press a button located on the table. The button location is symmetrically arranged around the center of the table.

3) Drawer Close. In the 'Drawer Close' group, the robot is required to close a drawer using its gripper. The location of the drawer is symmetrically arranged around the center of the table.

4) Plate Slide. In the 'Plate Slide' group, the robot is required to move a plate to a target location using its gripper. The goal location is symmetrically arranged around the center of the table.

Real World Setup. Our real-world experiments use a Kinova GEN3 robotic arm equipped with a Robotiq 2F-85 gripper. The top-down RGB image is captured with an Intel RealSense D435f. Information on the gripper's coordinates and opening angle is obtained through the robot's internal sensors. The real robot setups are demonstrated in Figure 7.

## C. States and Actions

The agent receives two types of observations: an RGB image captured from a top-down camera centered over the table, and the gripper's 3D coordinates and opening angle. Notably, the target location is only revealed to the agent during the 'Reach' task. To close the gap between the simulation and real-world scenarios (Sim2Real), we preprocess the RGB images before inputting them into the network. A comparison of the original and processed images can be found in Figure 8.

The action is a four-dimensional vector that controls the gripper's movement along three axes and its opening angle. This type of action was chosen to ensure the transferability of our algorithm between different robot models, as we utilize two distinct robots in the simulation and real-world scenarios. This choice also makes our approach more independent of the specific robot arm model and broadly applicable to various scenarios.

## VI. RESULTS AND ABLATIONS

## A. Results

In this section, we analyze the performance of our method against the baselines during training and evaluate their converged performance in each group.

1) Training performance: As shown in Figure 5, when the environment switches to a new group, our method quickly detects the change and initializes a distinct policy for each group.
Our method also recalls the corresponding policy from the collection when facing the same group again. In general, our method's dynamic policy assignments match the ground truth group labels. We noticed that some assignments did not match the ground truth, possibly because the feature extractor of each sub-policy cannot yet detect representative features of each group at the early stage of training. The misclassification rate drops significantly as the number of training episodes grows.

In Figure 4, our method's training curve is similar to that of M-Equi-gt, which assigns policies according to the ground truth group index. The training curves show that the performance drop due to misclassification is minor and acceptable. Compared with the training curves of the baselines using a vanilla CNN, the equivariant networks dramatically improve sample efficiency. This is because the equivariant network only needs to optimize the policy for the abstracted MDP of each group, so it does not need to handle each task in the group independently. Additionally, the training curves of our method and S-Equi do not differ much at the early stage of training. Later, however, S-Equi's training curve cannot keep up with our method's, since S-Equi suffers from the forgetting problem because it does not retain parameters for previously seen groups.

2) Converged performance: Tables I and II show the converged performance of our method compared with the baseline methods. The performance of our method is similar to that of M-Equi-gt, which uses ground truth group policy indexes as extra prior knowledge. Aside from M-Equi-gt, our method achieves a much higher episodic reward and a much higher success rate than the other baselines. The results show that our method achieves high performance across all groups.

Fig. 3: The continual learning environment, where there are four groups of tasks based on task configurations: drawer-close, button-press, plate-slide, and goal-reach. The different groups arrive in a streaming fashion.

Fig. 4: Training curves for our proposed M-Equi approach and the baseline models, with each background color corresponding to one group of tasks. The results demonstrate that M-Equi shows similar performance to M-Equi-gt, which utilizes ground truth group indices, and substantially outperforms the other baseline models.

Fig. 5: The selection of policies at each episode for our proposed M-Equi method, with each background color denoting a distinct group of tasks. Generally, following the initial stage, the policy indices allocated by M-Equi remain in alignment with the ground truth group indices.

## B. Ablation Study

Here we analyze the effects of multiple policies and of the invariant feature extractor through ablation studies.

1) The effect of group symmetric information: As shown in Table I, methods without the invariant feature extractor, such as S-CNN, have lower episodic rewards, as well as lower success rates, than methods using the invariant feature extractor (M-Equi, M-Equi-gt, S-Equi). Hence, we can conclude that the group symmetric information extracted by the invariant feature extractor dramatically improves performance.
The invariant feature extractor maps all tasks in the same group into an abstracted task. Thus the RL agent can optimize policies under the abstracted MDP.

2) The effect of the dynamic policy assignment module: By comparing the performance of M-Equi, M-Equi-gt, and S-Equi, we can conclude that using multiple policies in the continual learning environment helps improve performance. As shown in Figure 4, S-Equi's performance does not differ much from M-Equi's and M-Equi-gt's at the early stage of training. However, when the environment switches to different groups, S-Equi's performance drops quickly due to the forgetting problem, since it cannot recall parameters from seen groups.

Fig. 6: The simulated Sawyer setup consists of four different tasks: goal-reach, button-press, drawer-close, and plate-slide. The goal point marked in the figure is only disclosed to the agent in goal-reach-related tasks.

Fig. 7: The real Kinova GEN3 setup consists of four different tasks: goal-reach, button-press, drawer-close, and plate-slide. The goal point marked in the figure is only disclosed to the agent in goal-reach-related tasks.

## VII. CONCLUSION

We propose a novel continual RL framework that leverages group symmetries to facilitate quick generalization to unseen but equivalent tasks under group operations. Our proposed algorithm detects group boundaries in an unsupervised manner based on invariant state features and grows a policy for each group of equivalent tasks instead of for each individual task. Implementing our algorithm in robotic manipulation scenarios, we show that it assigns tasks to different groups with high accuracy and exhibits strong generalization capability, outperforming baselines by a large margin. Finally, we transferred the trained policy from the simulator to a real-world robotic arm without fine-tuning. The results indicate that the agent is capable of adeptly solving different tasks.

Fig. 8: To address the sim-to-real gap, we preprocess both real and simulated observation images before feeding them into the network, as illustrated in the figure.

TABLE I: Quantitative evaluation of episodic reward of policies at convergence.
| Methods | Plate-slide | Button-press | Drawer-close | Goal-reach |
| --- | --- | --- | --- | --- |
| M-Equi (Ours) | 263.24 | 271.26 | 411.83 | 478.56 |
| M-Equi-GT | 72.81 | 284.47 | 389.46 | 443.26 |
| S-Equi | 58.78 | 163.92 | 304.61 | 292.39 |
| S-CNN | 25.60 | 85.35 | 121.23 | 102.21 |

TABLE II: Quantitative evaluation of episodic success rate of policies at convergence.

| Methods | Plate-slide | Button-press | Drawer-close | Goal-reach |
| --- | --- | --- | --- | --- |
| M-Equi (Ours) | 0.83 | 0.89 | 0.96 | 0.99 |
| M-Equi-GT | 0.12 | 0.31 | 0.97 | 0.99 |
| S-Equi | 0.17 | 0.29 | 0.69 | 0.70 |
| S-CNN | 0.02 | 0.15 | 0.13 | 0.12 |
+ +## ACKNOWLEDGMENT + +The authors gratefully acknowledge the support from the unrestricted research grant from Toyota Motor North America. The ideas, opinions, and conclusions presented in this paper are solely those of the authors. + +[1] Davide Boscaini, Jonathan Masci, Simone Melzi, Michael M Bronstein, Umberto Castellani, and Pierre Vandergheynst. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. In Computer graphics forum, volume 34, pages 13-23. Wiley Online Library, 2015. + +[2] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017. + +[3] Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021. + +[4] Gabriele Cesa, Leon Lang, and Maurice Weiler. A program to build e(n)-equivariant steerable CNNs. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=WE4qe9xlnQw. + +[5] Zhiyuan Chen and Bing Liu. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3):1-207, 2018. + +[6] Joan Bruna Estrach, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and deep locally connected networks on graphs. In 2nd International conference on learning representations, ICLR, volume 2014, 2014. + +[7] Matthias Fey, Jan Eric Lenssen, Frank Weichert, and Heinrich Müller. Splineenn: Fast geometric deep learning with continuous b-spline kernels. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 869-877, 2018. + +[8] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135. PMLR, 2017. + +[9] Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. Se (3)-transformers: $3\mathrm{\;d}$ roto-translation equiv-ariant attention networks. Advances in Neural Information Processing Systems, 33:1970-1981, 2020. + +[10] Michael J Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, and Hyunjik Kim. Lietransformer: Equivariant self-attention for lie groups. In International Conference on Machine Learning, pages 4533-4543. PMLR, 2021. + +[11] Christos Kaplanis, Murray Shanahan, and Claudia Clopath. Continual reinforcement learning with complex synapses. In International Conference on Machine Learning, pages 2497-2506. PMLR, 2018. + +[12] Khimya Khetarpal, Matthew Riemer, Irina Rish, and Doina Precup. Towards continual reinforcement learning: A review and perspectives. arXiv preprint arXiv:2012.13490, 2020. + +[13] Jonathan Masci, Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural + +networks on riemannian manifolds. In Proceedings of the IEEE international conference on computer vision + +workshops, pages 37-45, 2015. + +[14] Anusha Nagabandi, Ignasi Clavera, Simin Liu, Ronald S Fearing, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. arXiv preprint arXiv:1803.11347, 2018. + +[15] Anusha Nagabandi, Chelsea Finn, and Sergey Levine. Deep online learning via meta-learning: Continual adaptation for model-based rl. arXiv preprint arXiv:1812.07671, 2018. + +[16] Adrien Poulenard and Maks Ovsjanikov. 
Multidirectional geodesic neural networks via equivariant convolution. ACM Transactions on Graphics (TOG), 37(6):1-14, 2018.

[17] Hang Ren, Aivar Sootla, Taher Jafferjee, Junxiao Shen, Jun Wang, and Haitham Bou-Ammar. Reinforcement learning in presence of discrete Markovian context evolution. arXiv preprint arXiv:2202.06557, 2022.

[18] Steindór Sæmundsson, Katja Hofmann, and Marc Peter Deisenroth. Meta reinforcement learning with latent variable Gaussian processes. arXiv preprint arXiv:1803.07551, 2018.

[19] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

[20] Fumihide Tanaka and Masayuki Yamamura. An approach to lifelong reinforcement learning through multiple environments. In 6th European Workshop on Learning Robots, pages 93-99, 1997.

[21] Sebastian Thrun and Tom M Mitchell. Lifelong robot learning. Robotics and Autonomous Systems, 15(1-2):25-46, 1995.

[22] Elise van der Pol, Daniel Worrall, Herke van Hoof, Frans Oliehoek, and Max Welling. MDP homomorphic networks: Group symmetries in reinforcement learning. Advances in Neural Information Processing Systems, 33:4199-4210, 2020.

[23] Elise van der Pol, Herke van Hoof, Frans A Oliehoek, and Max Welling. Multi-agent MDP homomorphic networks. arXiv preprint arXiv:2110.04495, 2021.

[24] Dian Wang, Robin Walters, and Robert Platt. SO(2)-equivariant reinforcement learning. In International Conference on Learning Representations (ICLR), 2022.

[25] Maurice Weiler and Gabriele Cesa. General E(2)-equivariant steerable CNNs. Advances in Neural Information Processing Systems, 32, 2019.

[26] Annie Xie, James Harrison, and Chelsea Finn. Deep reinforcement learning amidst continual structured non-stationarity. In International Conference on Machine Learning, pages 11393-11403. PMLR, 2021.

[27] Mengdi Xu, Wenhao Ding, Jiacheng Zhu, Zuxin Liu, Baiming Chen, and Ding Zhao. Task-agnostic online reinforcement learning with an infinite mixture of Gaussian processes. Advances in Neural Information Processing Systems, 33:6429-6440, 2020.

[28] Mengdi Xu, Zuxin Liu, Peide Huang, Wenhao Ding, Zhepeng Cen, Bo Li, and Ding Zhao. Trustworthy reinforcement learning against intrinsic vulnerabilities: Robustness, safety, and generalizability. arXiv preprint arXiv:2209.08025, 2022.

[29] Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning, pages 1094-1100. PMLR, 2020.

[30] Wenhao Yu, Jie Tan, C Karen Liu, and Greg Turk. Preparing for the unknown: Learning a universal policy with online system identification. arXiv preprint arXiv:1702.02453, 2017.

[31] Wenshuai Zhao, Jorge Peña Queralta, and Tomi Westerlund. Sim-to-real transfer in deep reinforcement learning for robotics: a survey. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pages 737-744. IEEE, 2020. 
\ No newline at end of file diff --git a/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/yY5avw0u6G/Initial_manuscript_tex/Initial_manuscript.tex b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/yY5avw0u6G/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..bc0798bb01867a342834a3ee31d3acc8f9fa889e --- /dev/null +++ b/RSS/RSS 2023/RSS 2023 Workshop/RSS 2023 Workshop Symmetry/yY5avw0u6G/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,341 @@ +§ CONTINUAL REINFORCEMENT LEARNING WITH GROUP SYMMETRIES

Shiqi Liu*, Mengdi Xu*, Peide Huang, Xilun Zhang, Yongkang Liu ${}^{ \dagger }$ , Kentaro Oguchi ${}^{ \dagger }$ and Ding Zhao

Department of Mechanical Engineering, Carnegie Mellon University

${}^{ \dagger }$ Toyota Motor North America R&D

Abstract-Continual reinforcement learning aims to sequentially learn a variety of tasks, retaining the ability to perform previously encountered tasks while simultaneously developing new policies for novel tasks. However, current continual RL approaches overlook the fact that certain tasks are identical under basic group operations like rotations or translations, especially with visual inputs. As a result, they unnecessarily create and train a new policy for each similar task, leading to poor sample efficiency and weak generalization capability. To address this, we introduce a unique Continual Reinforcement Learning method that recognizes Group Symmetries, cultivating a policy for each group of equivalent tasks rather than for each individual task. We introduce a PPO-based RL algorithm with an invariant feature extractor and a novel task grouping mechanism that relies on invariant features. We evaluate our algorithm on sequences of robotic manipulation tasks, where each group is associated with different objects. We show that our algorithm assigns tasks to different groups with high accuracy and outperforms baselines in terms of generalization capability by a large margin. Furthermore, we transferred the policy, initially trained in a simulated environment, to a real-world robotic arm without fine-tuning. The results indicate that the agent is capable of adeptly solving different tasks.

§ I. INTRODUCTION

Quickly adapting to unseen tasks has been a key desideratum in realistic reinforcement learning (RL) settings [8, 15, 14]. RL algorithms are generally trained in simulated environments and then deployed in the real world. However, pre-trained RL agents will likely encounter new tasks during their deployment. Blindly reusing policies obtained at training time could cause significant performance drops and even lead to catastrophic failures in safety-critical applications [31]. Motivated by humans' strong learning capability, in this work, we aim to enhance RL algorithms' quick adaptation capability in the continual learning setting.

Continual RL (also known as lifelong RL) focuses on learning a stream of tasks sequentially while generating task-specific policies for the present task and retaining the capability to solve seen tasks [14, 12, 17, 26]. Continual RL assumes non-stationary environments with changing transitions or reward functions, which result in changing tasks. The task delineations are provided by the environment or automatically detected via unsupervised learning algorithms [27]. However, the task taxonomies in existing works ignore the geometric information embedded in task representations, which naturally emerges in practical applications. 
Consider an example from the field of robotic manipulation, as shown in Figure 1. Here, a robotic arm is tasked to press a button located at two different locations. The robot utilizes bird's-eye view images as inputs. In this scenario, the controller may receive reflection-symmetric bird's-eye view images as inputs.

<graphics>

Fig. 1: This example illustrates how group symmetric information enhances adaptability. The robotic arm is instructed to press a button situated at two distinct locations, using bird's-eye view images as inputs. In this scenario, considering the symmetry of the button's location around the robot's position, the optimal control policies should be equivalent but mirrored.

However, with bird's-eye view images as inputs, prior continual RL works would treat each rotated configuration as a new task and learn the task from scratch. Such a learning process impedes positive interference among tasks and limits the agent's adaptivity. Hence, we aim to leverage the geometric similarity among tasks in the continual RL setting to adapt quickly to unseen but equivalent tasks.

In this work, we propose a continual RL algorithm that encodes group symmetries in the state space to achieve strong generalization capability and solves equivalent tasks in a zero-shot manner. We assume that there exist tasks that are equivalent under group operations, and that all tasks equivalent under the same group of operations form a task group. Furthermore, we adopt a realistic setting where the task and group delineations are unknown ahead of time. We state our main contributions as follows:

1) We propose the Continual Reinforcement Learning with Group Symmetries framework, which aims to improve continual RL's adaptivity by modeling task similarity based on group symmetries.

2) We introduce a novel PPO-based algorithm with a rotation-invariant feature extractor. Such a flexible model architecture facilitates strong generalization capability to tasks with equivalent state spaces.

3) We propose a new unsupervised task grouping mechanism, which automatically detects group boundaries based on invariant features. It automatically grows a new policy when detecting a new group to handle streaming groups in the continual RL setting.

4) We test our proposed algorithm on robotic manipulation tasks with environments built on the Meta-World benchmark [29]. We show that (a) the group symmetric information from the equivariant feature extractor promotes the algorithm's adaptivity by maximizing the positive interference within each group, and (b) the task grouping mechanism recovers the ground truth group indexes, which helps minimize the negative interference among different groups.

§ II. RELATED WORK

Geometric deep learning. Geometric deep learning aims to transfer the high performance of traditional deep learning to irregular input data such as graphs and meshes [2, 3, 16, 13, 6, 1]. For mesh inputs, spline-based Convolutional Neural Networks (CNNs) deal with geometric inputs by aggregating features purely in the spatial domain [7]. For image inputs, E(2)-equivariant convolution [25] introduces CNNs that are equivariant to rotation and reflection for planar images, which we use as the feature extractor in our neural network architecture design. Recent improvements to the Transformer architecture also focus on equivariance to Lie group transformations [9, 10].

MDP homomorphism. 
MDP homomorphic networks [22] are neural networks that are equivariant under symmetries in the state-action space of an MDP. By exploiting prior knowledge of symmetries, MDP homomorphic networks impose an equivariance constraint on the policy and value networks. As a result, the solution space is reduced and the RL agent is able to achieve higher sample efficiency. This single-agent MDP homomorphic network was then extended to the multi-agent RL domain by factorizing the global symmetries into local symmetries in the presence of multiple cooperative agents [23]. SO(2)-Equivariant RL [24] extends the discrete symmetry group to the group of continuous planar rotations, $\mathrm{{SO}}\left( 2\right)$, to boost performance in a series of robotic manipulation tasks. In our work, we also exploit the symmetric properties of the states to improve learning efficiency and adaptability.

Continual RL in non-stationary environments. Continual RL, a.k.a. lifelong RL, aims to train RL agents that are adaptive to non-stationary environments with varying transition dynamics or reward functions [21, 20, 5, 28]. In hierarchical model-based RL, [18] infers relationships between tasks automatically from the collected data and maintains a hierarchical latent variable model with a Gaussian process to model the dynamics. [27] proposes a Dirichlet-Process-Gaussian-Process model to maintain a hierarchical structure of different dynamics models. Similarly, [17] uses the Hierarchical Dirichlet Process to model abrupt changes in dynamics during an episode. Instead of keeping an explicit structure like the aforementioned works, [14] uses model-agnostic meta-learning to train a dynamics model prior such that it can adapt quickly to different environments. In model-free RL, Lifelong Latent Actor-Critic [26] leverages latent representation learning of environments using data from the replay buffer, which is then used in off-policy learning. Online System Identification (OSI) [30] is used to predict the dynamics model parameters, which are then fed into the controller along with the system states to adapt to new environments. Different from the aforementioned works focusing on algorithms, Benna-Fusi-RL [11] equips the RL agent with a synaptic model that reduces catastrophic forgetting. In our work, instead of modeling dynamics or latent representations, we develop an ensemble of policy networks to handle non-stationary and unseen scenarios.

§ III. PRELIMINARY

Markov decision process. We consider a Markov decision process (MDP) as a 5-tuple $\left( {\mathcal{S},\mathcal{A},T,R,\gamma }\right)$ , where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a set of actions, $T : \mathcal{S} \times \mathcal{A} \rightarrow \Delta \left( \mathcal{S}\right)$ is the transition probability, $R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function, and $\gamma$ is the discount factor.

The goal of RL is to find an optimal policy ${\pi }_{\theta } : \mathcal{S} \times \mathcal{A} \rightarrow \left\lbrack {0,1}\right\rbrack$ parameterized by $\theta$ that maximizes the expected return ${\mathbb{E}}_{\tau \sim {\pi }_{\theta }}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{{H - 1}}{\gamma }^{t}r\left( {{s}_{t},{a}_{t}}\right) }\right\rbrack$ , where $H$ is the episode length.
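As a small worked example of the objective above, here is a minimal Python sketch (ours, not from the paper) that evaluates the discounted return of a single episode:

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Compute sum_{t=0}^{H-1} gamma^t * r_t for one episode."""
    discounts = gamma ** np.arange(len(rewards))
    return float(np.dot(discounts, rewards))

# A three-step episode with rewards 1, 0, 1 yields 1 + 0.99^2 = 1.9801.
print(discounted_return([1.0, 0.0, 1.0]))
```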
Invariance and equivariance. Let $G$ be a mathematical group and let $f : \mathcal{X} \rightarrow \mathcal{Y}$ be a mapping. If a transformation ${L}_{g} : \mathcal{X} \rightarrow \mathcal{X}$ satisfies

$$
f\left( x\right) = f\left( {{L}_{g}\left\lbrack x\right\rbrack }\right) ,\forall g \in G,x \in \mathcal{X}, \tag{1}
$$

then we say $f$ is invariant or symmetric to ${L}_{g}$ . Equivariance is closely related to invariance. If we can find a second transformation ${K}_{g} : \mathcal{Y} \rightarrow \mathcal{Y}$ that fulfills

$$
{K}_{g}\left\lbrack {f\left( x\right) }\right\rbrack = f\left( {{L}_{g}\left\lbrack x\right\rbrack }\right) ,\forall g \in G,x \in \mathcal{X}, \tag{2}
$$

then we say $f$ is equivariant to the transformation ${L}_{g}$ . It is worth noting that invariance is a special case of equivariance.

MDP with group symmetries. Symmetries can be identified between MDPs. For the example in Figure 1, such symmetries can be identified between the two MDPs. In an MDP with symmetries, we can identify at least one mathematical group $G$ of transformations ${L}_{g} : \mathcal{S} \rightarrow \mathcal{S}$ and a state-dependent action transformation ${K}_{g}^{s} : \mathcal{A} \rightarrow \mathcal{A}$ such that Equations (3) and (4) hold for all $g \in G,s,{s}^{\prime } \in \mathcal{S},a \in \mathcal{A}$ . Formally,

$$
R\left( {s,a}\right) = R\left( {{L}_{g}\left\lbrack s\right\rbrack ,{K}_{g}^{s}\left\lbrack a\right\rbrack }\right) , \tag{3}
$$

$$
T\left( {s,a,{s}^{\prime }}\right) = T\left( {{L}_{g}\left\lbrack s\right\rbrack ,{K}_{g}^{s}\left\lbrack a\right\rbrack ,{L}_{g}\left\lbrack {s}^{\prime }\right\rbrack }\right) . \tag{4}
$$

Group-structured MDP homomorphisms. For an MDP with group symmetries, every state-action pair $(s, a)$ can be mapped to an equivalence class $\left( {\sigma \left( s\right) ,{\alpha }_{s}\left( a\right) }\right)$ . This means that all transformations of a state-action pair $(s, a)$ are mapped to the same abstract state-action pair. Formally, for all $g \in G,s \in \mathcal{S},a \in \mathcal{A}$ ,

$$
\left( {\sigma \left( s\right) ,{\alpha }_{s}\left( a\right) }\right) = \left( {\sigma \left( {{L}_{g}\left\lbrack s\right\rbrack }\right) ,{\alpha }_{{L}_{g}\left\lbrack s\right\rbrack }\left( {{K}_{g}^{s}\left\lbrack a\right\rbrack }\right) }\right) . \tag{5}
$$

The state-action pairs that are mapped to the same abstract state share the same optimal Q-value and optimal value function. Thus we can optimize the policy $\bar{\pi }\left( {\bar{a} \mid \sigma \left( s\right) }\right)$ under the abstracted MDP and then map the policy back to the original MDP using a procedure called lifting [23]:

$$
{\pi }^{ \uparrow }\left( {a \mid s}\right) \triangleq \frac{\bar{\pi }\left( {\bar{a} \mid \sigma \left( s\right) }\right) }{\left| \left\{ a \in {\alpha }_{s}^{-1}\left( \bar{a}\right) \right\} \right| },\;\forall s \in \mathcal{S},a \in {\alpha }_{s}^{-1}\left( \bar{a}\right) . \tag{6}
$$

Rotation-equivariant convolutional layer. Let $G$ be a rotation group. A rotation-equivariant convolutional layer is a function $f : {\rho }_{\text{ in }} \rightarrow {\rho }_{\text{ out }}$ that is equivariant to the rotation transformations in $G$ . Here we use the most general form of an equivariant convolutional layer that preserves rotational equivariance [25]. The layer consists of $G$-steerable kernels $k : {\mathbb{R}}^{2} \rightarrow {\mathbb{R}}^{{c}_{\text{ in }} \times {c}_{\text{ out }}}$ that satisfy Equation (7),

$$
k\left( {gx}\right) = {\rho }_{\text{ out }}\left( g\right) k\left( x\right) {\rho }_{\text{ in }}\left( {g}^{-1}\right) \;\forall g \in G,x \in {\mathbb{R}}^{2}. \tag{7}
$$

A rotation-equivariant convolutional neural network can be constructed by stacking multiple rotation-equivariant convolutional layers, since the composition of equivariant maps is also equivariant.
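To make these definitions concrete, here is a minimal NumPy sketch (our illustration, not the paper's implementation) for the discrete rotation group $C_4$ acting on images: features are computed by correlating the input with every rotated copy of a kernel, and group pooling followed by spatial averaging yields a quantity that is invariant to rotations of the input, exactly as in Equation (1):

```python
import numpy as np
from scipy.signal import correlate2d

def equivariant_features(x, kernel):
    # One feature channel per element of C4: correlate the input with
    # each of the four 90-degree rotations of the kernel.
    return np.stack([correlate2d(x, np.rot90(kernel, k), mode="valid")
                     for k in range(4)])

def invariant_summary(x, kernel):
    # Group pooling (max over the group axis) removes the dependence on
    # which rotated kernel fired; spatial averaging then removes the
    # dependence on where, giving a rotation-invariant scalar.
    return equivariant_features(x, kernel).max(axis=0).mean()

rng = np.random.default_rng(0)
x, kernel = rng.random((8, 8)), rng.random((3, 3))
values = [invariant_summary(np.rot90(x, k), kernel) for k in range(4)]
assert np.allclose(values, values[0])  # same value for every rotated input
```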
§ IV. CONTINUAL RL WITH GROUP SYMMETRIES

In this section, we propose an online continual RL method with group symmetries. We follow the continual RL setting, assuming that a new task group may emerge at any timestep. We further adopt a realistic setting where the total number of distinct groups is unknown. In the following subsections, we first introduce our proposed algorithm in Section IV-A. We then introduce two key components of our proposed algorithm: the equivariant policy network architecture in Section IV-B and the dynamic policy assignment mechanism in Section IV-D.

§ A. ALGORITHM OVERVIEW

Our algorithm learns a collection of policies $\Pi$ where each policy $\pi \in \Pi$ handles one group of tasks independently. The algorithm begins by rolling out $n$ episodes from the environment under the current policy. Based on the roll-out data, the algorithm proceeds to either (a) create a new policy for an unseen group and add it to the policy collection, or (b) recall an existing policy from the collection if the group has been previously encountered. To accomplish this, we select $k$ observations at the beginning of each episode and put them into a set denoted $\mathcal{O}$ . These initial frames are chosen since they are less impacted by the policy and therefore more valuable for group identification.

The $i$ th policy in the collection $\Pi$ , denoted ${\pi }_{i}$ , is associated with a state buffer ${\mathcal{B}}_{i}$ which stores the initial frames of the group it governs. The policy is also capable of extracting invariant latent features from observations using its feature extractor ${h}_{i}$ . Hence, we can obtain the latent features of the roll-out data, denoted ${q}_{i,\mathcal{O}} = {h}_{i}\left( \mathcal{O}\right)$ , and the features of its observation buffer, denoted ${q}_{i,{\mathcal{B}}_{i}} = {h}_{i}\left( {\mathcal{B}}_{i}\right)$ . Subsequently, we can compute the Wasserstein distance between these two distributions, ${d}_{i} = W\left( {{q}_{i,\mathcal{O}},{q}_{i,{\mathcal{B}}_{i}}}\right)$ . A smaller ${d}_{i}$ value suggests that the observations in $\mathcal{O}$ are more likely associated with policy ${\pi }_{i}$ . By applying the same process to each policy in $\Pi$ , we can compile a list of distances ${d}_{1},{d}_{2},\ldots ,{d}_{\left| \Pi \right| }$ . If all distances surpass a predefined threshold ${d}_{\epsilon }$ , a new policy is created and appended to the policy collection $\Pi$ . Otherwise, we designate as the current policy $\pi$ the one corresponding to the smallest ${d}_{i}$ . Finally, we update the observation buffer $\mathcal{B}$ of the current policy: $\mathcal{B} \leftarrow \mathcal{B} \cup \mathcal{O}$ .
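Algorithm 1 below formalizes the full loop. As a complement, the following Python sketch illustrates only the assignment step; `extract` stands in for the invariant feature extractor $h_i$ and `wasserstein` for the distance $W$, both hypothetical names of our own:

```python
import numpy as np

def assign_policy(policies, buffers, obs, d_eps, wasserstein):
    """Return the index of the closest policy, or None if a new policy
    (and buffer) should be created for an unseen task group."""
    if not policies:
        return None
    dists = [wasserstein(pi.extract(obs), pi.extract(buf))  # W(h_i(O), h_i(B_i))
             for pi, buf in zip(policies, buffers)]
    j = int(np.argmin(dists))
    return None if dists[j] > d_eps else j
```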
Algorithm 1 Continual RL with Group Symmetries

Input: Distance threshold ${d}_{\epsilon }$ , initial frame number $k$

Output: collection of policies $\Pi$

Initialization: $\Pi \leftarrow \varnothing$

 while the task stream is not finished do

 $\mathcal{O} \leftarrow \varnothing$

 for $i = 1,2,\ldots ,N$ do

 $j \leftarrow 0$

 while the episode is not finished do

 ${a}_{t} \sim \pi \left( {{a}_{t} \mid {s}_{t}}\right)$

 ${s}_{t + 1} \sim p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right)$

 if $j < k$ then

 $\mathcal{O} \leftarrow \mathcal{O} \cup \left\{ {s}_{t}\right\}$

 end if

 $j \leftarrow j + 1$

 end while

 end for

 $\mathcal{D} \leftarrow \varnothing$

 for all ${\pi }_{i} \in \Pi$ do

 ${q}_{i,\mathcal{O}} = {h}_{i}\left( \mathcal{O}\right)$

 ${q}_{i,{\mathcal{B}}_{i}} = {h}_{i}\left( {\mathcal{B}}_{i}\right)$

 ${d}_{i} = W\left( {{q}_{i,\mathcal{O}},{q}_{i,{\mathcal{B}}_{i}}}\right)$

 $\mathcal{D} \leftarrow \mathcal{D} \cup \left\{ {d}_{i}\right\}$

 end for

 $j \leftarrow \operatorname{argmin}\mathcal{D}$

 if ${d}_{j} > {d}_{\epsilon }$ then

 Initialize a new policy ${\pi }_{\theta }$ with a new observation buffer $\mathcal{B}$

 $\pi \leftarrow {\pi }_{\theta }$

 $\Pi \leftarrow \Pi \cup \left\{ {\pi }_{\theta }\right\}$

 else

 $\pi \leftarrow {\pi }_{j}$

 $\mathcal{B} \leftarrow {\mathcal{B}}_{j}$

 end if

 Update $\pi$ to maximize expected return

 $\mathcal{B} \leftarrow \mathcal{B} \cup \mathcal{O}$

 end while

§ B. POLICY NETWORK ARCHITECTURE

We adopt Proximal Policy Optimization (PPO) [19] as the RL algorithm for each policy in our collection. PPO comprises two networks: a policy network that produces actions, and a value network that approximates values.

For an MDP with group symmetries, the optimal value function is group invariant, and the optimal policy is group equivariant [24]. To achieve this, the policy and the value networks employ a shared equivariant feature extractor to extract equivariant features from observations. As illustrated in Figure 2, in the value network, these extracted features are transformed into invariant features through a group pooling layer and then fed into a fully connected network to generate the output values. In the policy network, on the other hand, these features are input into an equivariant network composed of multiple equivariant MLP layers.

<graphics>

Fig. 2: Equivariant policy network architecture; ReLU nonlinearities are omitted in the figure. A layer with a suffix of R indicates that the layer output is in the regular representation; a suffix of T indicates that the layer output is in the trivial representation; a suffix of 'mix' indicates that the layer output combines different representations.

§ C. EQUIVARIANT FEATURE EXTRACTOR

The equivariant feature extractor comprises two parts: an equivariant convolutional network that accepts images as inputs and an equivariant fully connected network that accepts vectors. The outputs of these two networks are concatenated, resulting in the final equivariant features.

The equivariant convolutional network consists of five equivariant convolutional layers [25], which convert an image from a trivial representation into a regular representation. The equivariant fully connected network consists of a single equivariant fully connected layer [4] that transforms a vector, which combines different representations, into a regular representation.
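The following sketch (ours, under the simplifying assumption that the group acts by permuting the channels of a regular representation) illustrates why the group-pooled value head is invariant while a shared-weight policy head is equivariant:

```python
import numpy as np

def group_pool(feats):
    # feats: (|G|, d) features in the regular representation; pooling over
    # the group axis removes the group action, yielding invariant features.
    return feats.max(axis=0)

def value_head(feats, w_v):
    # Invariant features -> value: unchanged when the group permutes rows.
    return group_pool(feats) @ w_v

def policy_head(feats, w_pi):
    # Applying the same linear map to every group channel commutes with
    # the permutation action, so the output is equivariant.
    return feats @ w_pi

G, d, a = 4, 16, 6
rng = np.random.default_rng(0)
feats, w_v, w_pi = rng.random((G, d)), rng.random(d), rng.random((d, a))
perm = np.roll(np.arange(G), 1)  # one cyclic group element acting on channels
assert np.isclose(value_head(feats, w_v), value_head(feats[perm], w_v))
assert np.allclose(policy_head(feats, w_pi)[perm], policy_head(feats[perm], w_pi))
```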
§ D. UNSUPERVISED DYNAMIC POLICY ASSIGNMENT

Detecting task boundaries and developing skills to solve each task are still open problems. Existing methods detect task boundaries based on model prediction errors in model-based RL settings [27], task performance drops in model-free RL settings, or policy reconstruction errors in imitation learning. In this work, we propose to detect different groups of tasks instead, to facilitate knowledge transfer between tasks in each group and discourage negative policy interference across different groups.

Assignments based on invariant features. In contrast to prior works that calculate distances in the state space, our method determines policy assignments based on distances in the invariant feature space. Specifically, we compute the Wasserstein distance between the invariant features derived from $\mathcal{O}$ and the buffer $\mathcal{B}$ of each policy in the collection. The invariant features are obtained from the equivariant feature extractor via a group pooling operation.

Let $\mathbf{X}$ be a matrix constructed from the invariant features extracted from the state buffer $\mathcal{B}$ of size $n$ :

$$
\mathbf{X} = {\left( {X}_{1},{X}_{2},\ldots ,{X}_{n}\right) }^{\mathrm{T}}, \tag{8}
$$

$$
{X}_{i} = h\left( {s}_{i}\right) ,i \in \left\lbrack n\right\rbrack ,{s}_{i} \in \mathcal{B}\text{ . }
$$

We use the Earth Mover's distance (EMD) to measure the distance between two empirical distributions $\mathbf{X}$ and $\mathbf{Y}$ with $n$ and $m$ features, respectively. More specifically,

$$
{EMD}\left( {\mathbf{X},\mathbf{Y}}\right) = \mathop{\min }\limits_{\gamma }\;\langle \gamma ,\mathbf{M}{\rangle }_{F} \tag{9}
$$

$$
\text{ s.t. }\gamma \mathbf{1} = \mathbf{a},{\gamma }^{T}\mathbf{1} = \mathbf{b},\gamma \geq 0
$$

where $\mathbf{M}$ is the metric cost matrix with ${\mathbf{M}}_{i,j} = {\begin{Vmatrix}{X}_{i} - {Y}_{j}\end{Vmatrix}}_{2}$ , $\mathbf{a} = \left\lbrack {1/n,1/n,\ldots ,1/n}\right\rbrack$ , and $\mathbf{b} = \left\lbrack {1/m,1/m,\ldots ,1/m}\right\rbrack$ .
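A minimal sketch of Equation (9) using only NumPy and SciPy (our illustration; a dedicated optimal-transport library would be preferable in practice):

```python
import numpy as np
from scipy.optimize import linprog

def emd(X, Y):
    """Earth Mover's distance between empirical distributions X (n, d) and
    Y (m, d) with uniform weights, solved as the linear program in Eq. (9)."""
    n, m = len(X), len(Y)
    M = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # cost matrix
    # Variables: the transport plan gamma, flattened row-major, gamma >= 0.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1      # row sums equal a_i = 1/n
    for j in range(m):
        A_eq[n + j, j::m] = 1               # column sums equal b_j = 1/m
    b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])
    res = linprog(M.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

rng = np.random.default_rng(0)
print(emd(rng.random((5, 3)), rng.random((7, 3))))
```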
§ V. EXPERIMENTAL SETUP

In this section, we test the proposed method on the Meta-World benchmark [29] and evaluate its performance. We want to investigate whether our method can (i) assign correct task group indices and expand the policy collection when needed, (ii) recall the corresponding policy when facing a previously seen environment and automatically initialize a new policy when encountering an unseen one, and (iii) achieve equivalent or better performance compared to baseline methods.

§ A. BASELINES

We compare our method, termed M-Equi, with the following baselines:

1) Multiple policies with ground truth policy indexes (M-Equi-gt): M-Equi-gt uses ground truth group labels to assign policies to different groups. With this additional group index information, M-Equi-gt is expected to achieve oracle performance. M-Equi-gt helps ablate the performance of our proposed policy assignment mechanism.

2) Multiple policies with a CNN-based feature extractor (M-CNN): M-CNN uses the same continual learning mechanism but with a vanilla CNN block as the feature extractor. The vanilla CNN shares a similar network structure with our proposed invariant feature extractor in Figure 2; the only difference is switching the rotation-equivariant convolutional layers to vanilla convolutional layers. M-CNN helps reveal the benefit of using invariant feature extractors.

3) Single policies (S-Equi and S-CNN): these baselines train one policy for all groups, without dynamic policy assignment. S-Equi uses our equivariant feature extractor, while S-CNN is its counterpart with a CNN-based feature extractor.

§ B. ENVIRONMENT

Simulation Setup. Our robot manipulation scenarios consist of four groups of tasks, each containing four tasks; rotation and reflection symmetries can be found between any two tasks within the same group. The robot is required to complete a sequence of groups, adhering to the standard RL setting. We utilized the Meta-World benchmark [29] as our simulation platform. Meta-World features a variety of table-top manipulation tasks that require interaction with diverse objects using a Sawyer robot. We chose four groups of tasks, shown in Figure 6, from the Meta-World benchmark for our study: 'Reach', 'Button Press', 'Drawer Close', and 'Plate Slide'.

1) Reaching. In the 'Reaching' group, the robot is required to move its gripper towards a selected target above the table. The goal locations are symmetrically arranged around the center of the table.

2) Button Press. In the 'Button Press' group, the robot is required to press a button located on the table. The button locations are symmetrically arranged around the center of the table.

3) Drawer Close. In the 'Drawer Close' group, the robot is required to close a drawer using its gripper. The drawer locations are symmetrically arranged around the center of the table.

4) Plate Slide. In the 'Plate Slide' group, the robot is required to move a plate to the target location using its gripper. The goal locations are symmetrically arranged around the center of the table.

Real-World Setup. Our real-world experimental setup uses a Kinova GEN3 robotic arm equipped with a Robotiq 2F-85 gripper. The top-down RGB image is captured with an Intel RealSense D435f. Information on the gripper's coordinates and opening angle is obtained through the robot's internal sensors. The real robot setups are shown in Figure 7.

§ C. STATES AND ACTIONS

The agent receives two types of observations: an RGB image captured from a top-down camera centered over the table, and the gripper's 3D coordinates and opening angle. Notably, the target location is only revealed to the agent during the 'Reaching' task. To close the gap between simulation and real-world scenarios (Sim2Real), we preprocessed the RGB images before inputting them into the network. A comparison of the original and processed images can be found in Figure 8.

The action is a four-dimensional vector that controls the gripper's movement along three axes and its opening angle. This type of action was chosen to ensure the transferability of our algorithm between different robot models, as we utilized two distinct robots in the simulation and real-world scenarios. This choice also makes our approach more independent of the specific robot arm model and broadly applicable to various scenarios.

§ VI. RESULTS AND ABLATIONS

§ A. RESULTS

In this section, we analyze the performance of our method against the baselines during training and evaluate their converged performance in each group.

1) Training performance: As shown in Figure 5, when the environment switches to a new group, our method quickly detects the change and initializes a different policy for each group. Our method also recalls the corresponding policy from the collection when facing the same group again. In general, our method's dynamic policy assignment results match the ground truth group labels. 
We noticed that some assignment results did not match the ground truth. This may be because the feature extractor of each sub-policy is not yet able to detect representative features of each group at the early stage of training. The misclassification rate drops significantly as the number of training episodes grows.

In Figure 4, our method's training curve is similar to that of M-Equi-gt, which assigns policies according to the ground truth group index. The training curves show that the performance drop due to misclassification is minor and acceptable. Compared with the training curves of the baselines using a vanilla CNN, the equivariant networks dramatically improve sample efficiency. This is because the equivariant network only needs to optimize the policy for the abstracted MDP of each group, so it does not need to handle each task in the group independently. Additionally, the training curves of our method and S-Equi do not differ much at the early stage of training. Later, however, S-Equi's training curve cannot keep up with our method, since S-Equi suffers from the forgetting problem due to not retaining parameters of previously seen groups.

2) Converged performance: Tables I and II show the converged performance of our method compared with the baseline methods. The performance of our method is similar to that of M-Equi-gt, which uses ground truth group policy indexes as extra prior knowledge. Apart from M-Equi-gt, our method achieves a much higher episode reward and a much higher success rate than the other baselines. The results show that our method achieves high performance across all groups.

<graphics>

Fig. 3: The continual learning environment, in which there are four groups of tasks based on task configurations: drawer-close, button-press, plate-slide, and goal-reach. Different groups arrive in a streaming fashion.

<graphics>

Fig. 4: Training curves for our proposed M-Equi approach and the baseline models, with each background color corresponding to one group of tasks. The results demonstrate that M-Equi shows similar performance to M-Equi-gt, which utilizes ground truth group indices, and substantially outperforms the other baseline models.

<graphics>

Fig. 5: The policy selected at each episode for our proposed M-Equi method and the baseline models, with each background color denoting a distinct group of tasks. Generally, following the initial stage, the policy indices allocated by M-Equi remain in alignment with the ground truth group indices.

§ B. ABLATION STUDY

Here we analyze the effects of the multiple-policy design and of the invariant feature extractor through ablation studies.

1) The effect of group symmetric information: As shown in Tables I and II, methods without the invariant feature extractor, such as S-CNN, have lower episodic rewards, as well as lower success rates, than methods using the invariant feature extractor (M-Equi, M-Equi-gt, S-Equi). Hence, we can conclude that the group symmetric information extracted by the invariant feature extractor dramatically improves performance. The invariant feature extractor maps all tasks in the same group onto an abstracted task, so the RL agent can optimize policies under the abstracted MDP. 
2) The effect of the dynamic policy assignment module: By comparing the performance of M-Equi, M-Equi-gt, and S-Equi, we can conclude that using multiple policies in the continual learning environment helps improve performance. As shown in Figure 4, S-Equi's performance does not differ much from M-Equi and M-Equi-gt at the early stage of training. However, when the environment switches to different groups, S-Equi's performance drops quickly due to the forgetting problem, since it cannot recall parameters from previously seen groups.

<graphics>

Fig. 6: The simulated Sawyer setup consists of four different tasks: goal-reach, button-press, drawer-close, and plate-slide. The goal point marked in the figure is only disclosed to the agent in goal-reach-related tasks.

<graphics>

Fig. 7: The real Kinova GEN3 setup consists of four different tasks: goal-reach, button-press, drawer-close, and plate-slide. The goal point marked in the figure is only disclosed to the agent in goal-reach-related tasks.

§ VII. CONCLUSION

We propose a novel continual RL framework that leverages group symmetries to facilitate quick generalization to unseen but equivalent tasks under group operations. Our proposed algorithm detects group boundaries in an unsupervised manner based on invariant state features and grows a policy for each group of equivalent tasks instead of for each single task. By implementing our algorithm in robotic manipulation scenarios, we show that it assigns tasks to different groups with high accuracy and has a strong generalization capability, outperforming baselines by a large margin. Finally, we transferred the trained policy from the simulator to a real-world robotic arm without fine-tuning. The results indicate that the agent is capable of adeptly solving different tasks.

<graphics>

Fig. 8: To address the sim-to-real gap, we incorporate preprocessing steps for both real and simulated observation images. These steps are performed before feeding the images into the network, as illustrated in the figure.

TABLE I: Quantitative evaluation of the episodic reward of policies at convergence.

Methods | Plate-slide | Button-press | Drawer-close | Goal-reach
M-Equi (Ours) | 263.24 | 271.26 | 411.83 | 478.56
M-Equi-GT | 72.81 | 284.47 | 389.46 | 443.26
S-Equi | 58.78 | 163.92 | 304.61 | 292.39
S-CNN | 25.60 | 85.35 | 121.23 | 102.21

TABLE II: Quantitative evaluation of the episodic success rate of policies at convergence.

Methods | Plate-slide | Button-press | Drawer-close | Goal-reach
M-Equi (Ours) | 0.83 | 0.89 | 0.96 | 0.99
M-Equi-GT | 0.12 | 0.31 | 0.97 | 0.99
S-Equi | 0.17 | 0.29 | 0.69 | 0.70
S-CNN | 0.02 | 0.15 | 0.13 | 0.12

§ ACKNOWLEDGMENT

The authors gratefully acknowledge the support from the unrestricted research grant from Toyota Motor North America. The ideas, opinions, and conclusions presented in this paper are solely those of the authors. 
\ No newline at end of file diff --git a/TimeMachine/TimeMachine RFC/5M3mFneb6de/Initial_manuscript_md/Initial_manuscript.md b/TimeMachine/TimeMachine RFC/5M3mFneb6de/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..4db1bae362c233b37eddb60eeb3dc2b9e481b9bf --- /dev/null +++ b/TimeMachine/TimeMachine RFC/5M3mFneb6de/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,81 @@ +# RFC on RFCs Time Machine RFC-0000

Frédéric Kaplan Kevin Baumer Mike Kestemont

Daniel Jeller

## Motivation

Reaching consensus on the technology options to pursue in a programme as large as Time Machine is a complex issue. To ensure the open development and evaluation of work, a process inspired by the Request for Comments (RFC) that was used for the development of the Internet protocol ${}^{1}$ is being adapted to the needs of Time Machine. Time Machine Requests for Comments are freely accessible publications, identified with a unique ID, that constitute the main process for establishing rules, recommendations and core architectural choices for Time Machine components.

## Approach

The Time Machine RFCs are based on the following principles:

1. Accessibility. RFCs are freely accessible, at no cost.

2. Openness. Anybody can write an RFC.

3. Identification. Each RFC, once published, has a unique ID and version number. It can nevertheless be revised over time as a living document, being republished with the same ID and a different version number.

4. Incrementalism. Each RFC should be useful in its own right and act as a building block for others. Each RFC must be intended as a contribution, extension or revision of the Time Machine Infrastructure.

5. Standardisation. RFCs should aim to make use of standardised terms to improve the clarity of their recommendations.

6. Scope. RFCs are design contributions and implementation solutions for solving practical problems. RFCs are not research papers and may not necessarily contain experimental evidence. RFCs cover not only the technical infrastructure but also the data standards, legal frameworks, and values and principles of Time Machine.

---

${}^{1}$ https://tools.ietf.org/html/rfc791

---

7. Self-defining process. As used for the development of the Internet, RFCs are the main process for establishing Time Machine Infrastructure and Processes, and also the processes and roles for managing RFCs themselves.

## RFC Publication Process

![01963a40-21da-7702-8ad4-798d372cb028_1_396_599_991_663_0.jpg](images/01963a40-21da-7702-8ad4-798d372cb028_1_396_599_991_663_0.jpg)

Figure 1: The RFC publication process.

The RFC Editorial Committee organises the publication process of the RFCs, maintains the consistency of the RFC System, appoints RFC teams to organise new RFCs and to improve existing RFCs, keeps track of RFC versioning, ensures the timely and regular publication of RFCs, and is responsible for the public announcement of the open review process. The governance and organisation of the RFC Editorial Committee is defined in RFC-0004.

The publication process is the following:

1. The RFC Editorial Committee appoints authors to write the RFCs planned in the RFC tree (RFC-0002). Alternatively, authors may contact the RFC Editorial Committee to submit their candidature to write an RFC (planned in the RFC tree or not).

2. 
The authors produce an RFC draft which is reviewed, first by the RFC Editorial Committee for coherence with the rest of the RFC corpus and then by a larger community. The RFC is revised and possibly sent for review again.

3. Once accepted by the RFC Editorial Committee, an RFC receives an official identifier and is officially published as a peer-reviewed publication with proper scholarly credits assigned to the original author(s).

4. The RFC tree is adapted to include the published RFC and any possible sub-RFCs planned during the writing of the RFC.

## RFC Format

The RFC Format and Guidelines are established iteratively by the RFC Editorial Committee. The most up-to-date version can be found in RFC-0000.

Current Format

1. Motivation section

2. Series of sections describing the Approach and Solution

3. Question and Answers section

4. Linked RFCs section

## Question and Answers

## What are the main differences between Time Machine RFCs and Internet Society RFCs?

The Time Machine RFCs are being developed over 50 years after the RFCs that shaped the Internet. The main differences are the following:

1. Time Machine RFCs are exclusively used to describe motivated solutions and not general communication.

2. Time Machine RFCs can be revised and are redefined iteratively, whereas a significant improvement on an Internet Society RFC led to the creation of a new RFC.

## Linked RFCs

- The RFC Tree is kept up to date in RFC-0002.

- The details of the RFC platform are defined in RFC-0003.

- The governance and function of the RFC Editorial Committee is defined in RFC-0004. \ No newline at end of file diff --git a/TimeMachine/TimeMachine RFC/5M3mFneb6de/Initial_manuscript_tex/Initial_manuscript.tex b/TimeMachine/TimeMachine RFC/5M3mFneb6de/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..a417a2af000fe0a9e70e4df3ae2481c9d7ef6142 --- /dev/null +++ b/TimeMachine/TimeMachine RFC/5M3mFneb6de/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,77 @@ +§ RFC ON RFCS TIME MACHINE RFC-0000

Frédéric Kaplan Kevin Baumer Mike Kestemont

Daniel Jeller

§ MOTIVATION

Reaching consensus on the technology options to pursue in a programme as large as Time Machine is a complex issue. To ensure the open development and evaluation of work, a process inspired by the Request for Comments (RFC) that was used for the development of the Internet protocol ${}^{1}$ is being adapted to the needs of Time Machine. Time Machine Requests for Comments are freely accessible publications, identified with a unique ID, that constitute the main process for establishing rules, recommendations and core architectural choices for Time Machine components.

§ APPROACH

The Time Machine RFCs are based on the following principles:

1. Accessibility. RFCs are freely accessible, at no cost.

2. Openness. Anybody can write an RFC.

3. Identification. Each RFC, once published, has a unique ID and version number. It can nevertheless be revised over time as a living document, being republished with the same ID and a different version number.

4. Incrementalism. Each RFC should be useful in its own right and act as a building block for others. Each RFC must be intended as a contribution, extension or revision of the Time Machine Infrastructure.

5. Standardisation. RFCs should aim to make use of standardised terms to improve the clarity of their recommendations.

6. Scope. 
RFCs are design contributions and implementation solutions for solving practical problems. RFCs are not research papers and may not necessarily contain experimental evidence. RFCs cover not only the technical infrastructure but also the data standards, legal frameworks, and values and principles of Time Machine.

${}^{1}$ https://tools.ietf.org/html/rfc791

7. Self-defining process. As used for the development of the Internet, RFCs are the main process for establishing Time Machine Infrastructure and Processes, and also the processes and roles for managing RFCs themselves.

§ RFC PUBLICATION PROCESS

<graphics>

Figure 1: The RFC publication process.

The RFC Editorial Committee organises the publication process of the RFCs, maintains the consistency of the RFC System, appoints RFC teams to organise new RFCs and to improve existing RFCs, keeps track of RFC versioning, ensures the timely and regular publication of RFCs, and is responsible for the public announcement of the open review process. The governance and organisation of the RFC Editorial Committee is defined in RFC-0004.

The publication process is the following:

1. The RFC Editorial Committee appoints authors to write the RFCs planned in the RFC tree (RFC-0002). Alternatively, authors may contact the RFC Editorial Committee to submit their candidature to write an RFC (planned in the RFC tree or not).

2. The authors produce an RFC draft which is reviewed, first by the RFC Editorial Committee for coherence with the rest of the RFC corpus and then by a larger community. The RFC is revised and possibly sent for review again.

3. Once accepted by the RFC Editorial Committee, an RFC receives an official identifier and is officially published as a peer-reviewed publication with proper scholarly credits assigned to the original author(s).

4. The RFC tree is adapted to include the published RFC and any possible sub-RFCs planned during the writing of the RFC.

§ RFC FORMAT

The RFC Format and Guidelines are established iteratively by the RFC Editorial Committee. The most up-to-date version can be found in RFC-0000.

Current Format

1. Motivation section

2. Series of sections describing the Approach and Solution

3. Question and Answers section

4. Linked RFCs section

§ QUESTION AND ANSWERS

§ WHAT ARE THE MAIN DIFFERENCES BETWEEN TIME MACHINE RFCS AND INTERNET SOCIETY RFCS?

The Time Machine RFCs are being developed over 50 years after the RFCs that shaped the Internet. The main differences are the following:

1. Time Machine RFCs are exclusively used to describe motivated solutions and not general communication.

2. Time Machine RFCs can be revised and are redefined iteratively, whereas a significant improvement on an Internet Society RFC led to the creation of a new RFC.

§ LINKED RFCS

 * The RFC Tree is kept up to date in RFC-0002.

 * The details of the RFC platform are defined in RFC-0003.

 * The governance and function of the RFC Editorial Committee is defined in RFC-0004. 
\ No newline at end of file diff --git a/UAI/UAI 2022/UAI 2022 Conference/B0gGoUIiqx9/Initial_manuscript_tex/Initial_manuscript.tex b/UAI/UAI 2022/UAI 2022 Conference/B0gGoUIiqx9/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..4c6dc91a0d9fa0768700b9007125dcbb57c37ea5 --- /dev/null +++ b/UAI/UAI 2022/UAI 2022 Conference/B0gGoUIiqx9/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,478 @@ +Multi-winner Approval Voting Goes Epistemic

§ ABSTRACT

Epistemic voting interprets votes as noisy signals about a ground truth. We consider contexts where the truth consists of a set of objective winners, knowing a lower and upper bound on its cardinality. A prototypical problem for this setting is the aggregation of multi-label annotations with prior knowledge on the size of the ground truth. We posit noise models, for which we define rules that output an optimal set of winners. We report on experiments on multi-label annotations (which we collected).

§ 1 INTRODUCTION

The epistemic view of voting assumes the existence of a ground truth which, usually, is either an alternative or a ranking over alternatives. Votes reflect opinions or beliefs about this ground truth; the goal is to aggregate these votes so as to identify it. Usual methods define a noise model specifying the probability of each voting profile given the ground truth, and output the alternative that is the most likely state of the world, or the ranking that is most likely to be the true ranking.

Now, there are contexts where the ground truth consists neither of a single alternative nor of a ranking, but of a set of alternatives. Typical examples are multi-label crowdsourcing (find the items in a set that satisfy some property, e.g., the sports teams appearing in a picture) or finding the objectively $k$ best candidates (best papers at a conference, best performance in artistic sports, $k$ patients with highest probabilities of survival if being assigned a scarce medical resource).

The alternatives that are truly in the ground truth are called 'winning' alternatives. Depending on the context, the number of winning alternatives can be fixed, unconstrained, or, more generally, constrained to be in a given interval. This constraint expresses some prior knowledge on the cardinality of the ground truth. This prior knowledge is held by the central authority that aggregates the votes, and not necessarily by the voters themselves. Here are some examples:

 * Picture annotation via crowdsourcing: participants are shown a picture taken from a soccer match and have to identify the team(s) appearing in it. The ground truth is known to contain one or two teams.

 * Guitar chord transcription: voters are base classifier algorithms Nguyen et al. [2020] which, for a given chord, select the set of notes that constitute it. The true set of notes can contain three to six alternatives.

 * Jury: participants are members of a jury which has to give an award to three papers presented at a conference: the number of objective winners is fixed to three. (In a variant, the number of awards would be at most three.)

 * Resource allocation: participants are doctors and alternatives are Covid-19 patients in urgent need of intensive care; there is a limited number $k$ of intensive care units. The ground truth consists of those patients who most deserve to be cured (for example, those with the $k$ highest probabilities of survival if cured). 
We assume that voters provide a simple form of information: approval ballots, indicating which alternatives they consider plausible winners. These approval ballots are not subject to any cardinality constraint: a voter may approve any number of alternatives, even one that does not lie in the interval constraining the output. This is typically the case for totally ignorant voters, who may plausibly approve all alternatives.

Sometimes, the aggregating mechanism has some prior information about the likelihood of alternatives and the reliability of voters. We first study a simple case where this information is specified in the input: in the noise model, each voter has a probability ${p}_{i}$ (resp. ${q}_{i}$ ) of approving a winning (resp. non-winning) alternative, and each alternative has a prior probability of being winning. This departs from classical voting, where voters are usually treated equally (anonymity), and similarly for alternatives (neutrality).

This simple case serves as a building block for the more complex case where these parameters are not known beforehand but estimated from the votes: votes allow us to infer information about plausibly winning alternatives, from which we infer information about voter reliabilities, which leads to revised information about winning alternatives, and so on until the process converges. Here we move back to an anonymous and neutral setting, since all alternatives (resp. voters) are treated equally before votes are known.

After discussing related work (Section 2), we introduce the model (Section 3) and give an estimation algorithm (Section 4), first in the case where the parameters are known, and then in the case where they are estimated from the votes. In Section 5 we present a data gathering task and analyse the results of the experiments. Section 6 concludes.

§ 2 RELATED WORK

Epistemic social choice Epistemic social choice consists in recovering an objective ground truth from votes seen as noisy reports about the ground truth, using maximum likelihood estimation. It dates back to Condorcet's jury theorem [Condorcet, 1785]: $n$ independent, equally reliable voters vote on two alternatives that are a priori equally likely; if every vote is correct with probability $p > \frac{1}{2}$ , then the majority outputs the correct alternative with a probability increasing in $n$ and tending to 1 as $n$ grows to infinity.

There are several extensions of Condorcet's jury theorem: Young [1988] for an arbitrary number of alternatives; Shapley and Grofman [1984] and Drissi-Bakhkhat and Truchon [2004] for voters with various competence degrees; Ben-Yashar and Nitzan [1997] and Ben-Yashar and Paroush [2001] for nonuniform priors over alternatives; Pivato [2013] and Pivato [2017] for dependent voters. Conitzer and Sandholm [2005] and Conitzer et al. [2009] characterize various voting rules as maximum likelihood estimators, each associated with a particular noise model. See Nitzan and Paroush [2017] and Elkind and Slinko [2016] for surveys of recent developments.

Multi-winner voting Multi-winner voting rules map voting profiles into sets of alternatives. A voting profile can be either a collection of subsets of alternatives (approval ballots) or a collection of rankings over alternatives (ordinal ballots). The output is often constrained to have a fixed cardinality, but not always: see Kilgour [2016], Faliszewski et al. [2020]. There have been many recent developments in the field: see the recent surveys by Faliszewski et al. 
[2017] and Lackner and Skowron [2020]. They, however, deal only with the classical (non-epistemic) view of social choice, where votes express preferences.

Multi-winner epistemic voting Multi-winner epistemic voting has received little attention so far. Procaccia et al. [2012] assume a ground truth ranking over alternatives, and identify rules that output the $k$ alternatives maximizing the likelihood of containing the best alternative, or of coinciding with the top- $k$ alternatives. The last section of [Xia and Conitzer, 2011] defines a noise model where the ground truth is a set of $k$ alternatives (and the reported votes are partial orders). The only work we know of where the noise models produce random approval votes from a ground truth consisting of a set of alternatives is [Caragiannis et al., 2020]. They define a family of distance-based noise models, whose prototypical instance generates approval votes selecting an alternative in the ground truth (resp. not in the ground truth) with probability $p$ (resp. $1 - p$ ); as we will see further, this is a specific case of our noise model. Generalizing multi-winner voting, Xia et al. [2010] study epistemic voting on combinatorial (or multi-attribute) domains.

Epistemic approval voting Epistemic voting with approval ballots has scarcely been considered. Procaccia and Shah [2015] assume that the ground truth is a ranking over alternatives, and identify noise models for which approval voting is optimal given $k$ -approval votes, in the sense that the objectively best alternative gets elected. Allouche et al. [2022] continue this line of research but assume instead that the ground truth consists of a single alternative. They define various noise models and show that those that work best on real datasets are those that give a higher confidence to voters who approve few alternatives. Caragiannis and Micha [2017] study the number of samples needed to recover the ground truth ranking over alternatives with high enough probability from approval ballots; they show that it is exponential if ballots are required to approve $k$ candidates, but polynomial if the size of the ballots is randomized.

Crowdsourcing and social choice A social choice-theoretic study of collective annotation tasks was done by Kruger et al. [2014] and Qing et al. [2014]. Mechanisms for incentive-compatible elicitation with approval ballots in crowdsourcing applications have been designed by Shah and Zhou [2020]. Meir et al. [2019] define a method to aggregate votes weighted according to their average proximity to the other votes as an estimation of their reliability.

Prelec et al. [2017] introduce the Bayesian truth serum approach: eliciting, in addition to the voters' answers, their prediction of the distribution of answers, gives much better results. This approach was generalized by Hosseini et al. [2021] to contexts where the ground truth is a ranking.

Beyond social choice, collective multi-label annotation was first addressed by Nowak and Rüger [2010], who study the agreement between experts and non-experts in some multi-labelling tasks, and by Deng et al. [2014], who solve the multi-label estimation problem with a scalable aggregation method.

§ 3 THE MODEL

Let $\mathcal{N} = \{ 1,\ldots ,n\}$ be a set of voters, and $\mathcal{A} =$ $\left\{ {{a}_{1},\ldots ,{a}_{m}}\right\}$ a set of alternatives (possible objects in images, notes in chords, papers, patients...). 
Consider a set of $L$ instances: an instance $z$ consists of an approval profile $A^z = (A_1^z,\ldots,A_n^z)$ where $A_i^z \subseteq \mathcal{A}$ is an approval ballot for every $i \in \mathcal{N}$. For example, in a crowdsourcing context, a task usually contains multiple questions, and an instance comprises the voters' answers to one of these questions.

For each instance $z$, there exists an unknown ground truth $S_z^*$ belonging to $\mathcal{S} = 2^{\mathcal{A}}$, which is the set of objectively correct alternatives in instance $z$. The central authority (but not necessarily the voters) knows a priori that the number of alternatives in each ground truth lies in the interval $[l,u]$: $S_z^* \in \mathcal{S}_{l,u} = \{S \in \mathcal{S} : l \leq |S| \leq u\}$, for given bounds $0 \leq l \leq u \leq m$.

Our goal is to unveil the ground truth for each of these instances using the votes and the prior knowledge on the number of winning alternatives. We define a noise model consisting of two parametric distributions, namely, a conditional distribution of the approval ballots given the ground truth, and a prior distribution on the ground truth. Here we depart from classical noise models in epistemic social choice, as we suppose that the parameters of these distributions may be unknown and thus need to be estimated.

For each voter $i \in \mathcal{N}$, we suppose that there exist two unknown parameters $(p_i, q_i)$ in $(0,1)$ such that the approval ballot $A_i^z$ on an instance $z$ is drawn according to the following distribution: for each $a \in \mathcal{A}$,

$$
P\left(a \in A_i^z \mid S_z^* = S\right) = \begin{cases} p_i & \text{if } a \in S \\ q_i & \text{if } a \notin S \end{cases}
$$

where $p_i$ (resp. $q_i$) is the (unknown) probability that voter $i$ approves a correct (resp. incorrect) alternative. Then we make the following assumptions:

(1) A voter's approvals of alternatives are mutually independent given the ground truth and the parameters $(p_i, q_i)_{i \in \mathcal{N}}$.

(2) Voters' ballots are mutually independent given the ground truth.

(3) Instances are independent given the parameters $(p_i, q_i)_{i \in \mathcal{N}}$ and the ground truths.

To model the prior probability of any set $S$ to be the ground truth $S^*$, we define parameters $t_j = P(a_j \in S^*)$. The parameter $t_j$ can be understood as the prior probability of $a_j$ being in the ground truth set $S^*$ before the cardinality constraints are taken into account. These parameters, together with an independence assumption on the events $\{a_j \in S^*\}$, give $P(S^* = S) = \prod_{a_j \in S} t_j \prod_{a_j \notin S} (1 - t_j)$. Note that the choice of the parameters $t_j$ is not crucial when running the algorithm for estimating the ground truth: we will see in Section 4.3 that it converges whatever their values.
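To make the generative model concrete, here is a minimal Python sketch (the function names are ours, not from the paper) that draws a ground truth from the cardinality-constrained prior defined in the next paragraphs by rejection sampling, and then draws each voter's ballot according to $(p_i, q_i)$:

```python
import random

def sample_ground_truth(t, l, u):
    """Draw S* with P(a_j in S*) = t[j] independently, conditioned on
    l <= |S*| <= u (rejection sampling from the constrained prior)."""
    m = len(t)
    while True:
        S = {j for j in range(m) if random.random() < t[j]}
        if l <= len(S) <= u:
            return S

def sample_ballot(S, p_i, q_i, m):
    """Voter i approves each correct alternative with probability p_i
    and each incorrect one with probability q_i (independently)."""
    return {j for j in range(m)
            if random.random() < (p_i if j in S else q_i)}

# A toy instance: 5 alternatives, a ground truth of size 1 or 2, three voters.
t = [0.5] * 5
S_star = sample_ground_truth(t, l=1, u=2)
ballots = [sample_ballot(S_star, p, q, m=5)
           for (p, q) in [(0.7, 0.4), (0.9, 0.1), (0.5, 0.5)]]
```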
The distribution conditional on the prior knowledge of the size of the ground truth can be seen as a projection on the constraints followed by a normalization:

$$
\widetilde{P}(S) = P\left(S^* = S \mid l \leq |S^*| \leq u\right) = \frac{P\left(S^* = S,\; |S^*| \in [l,u]\right)}{P\left(|S^*| \in [l,u]\right)}
$$

It follows:

$$
\widetilde{P}(S) = \begin{cases} \frac{1}{\beta(l,u,t)} \prod_{a_j \in S} t_j \prod_{a_j \notin S} (1 - t_j) & \text{if } S \in \mathcal{S}_{l,u} \\ 0 & \text{if } S \notin \mathcal{S}_{l,u} \end{cases}
$$

where $\beta(l,u,t) = \sum_{S \in \mathcal{S}_{l,u}} \prod_{a_j \in S} t_j \prod_{a_j \notin S} (1 - t_j)$.

The ground truths associated with different instances are assumed to be mutually independent given the parameters.

Two particular cases are worth discussing. First, when $(l,u) = (0,m)$, the problem is unconstrained and we have $\beta(0,m,t) = P(|S^*| \in [0,m]) = 1$, so $\widetilde{P}(S) = P(S^* = S)$. In this case the problem degenerates into a series of independent binary label-wise estimations (see Subsection 4.1).

Second, in the single-winner case $(l,u) = (1,1)$, we have $\widetilde{P}(\{a_j\}) = \frac{t_j \prod_{h \neq j} (1 - t_h)}{\beta(1,1,t)}$; therefore, for any approval profile $A$, $P\left(S^* = \{a_j\} \mid A, |S^*| = 1\right) \propto \frac{t_j}{1 - t_j} P\left(A \mid S^* = \{a_j\}\right)$. We recover the same estimation problem if we simply introduce $\alpha_j = P(S^* = \{a_j\})$ with $\sum_j \alpha_j = 1$ as in Ben-Yashar and Paroush [2001], in which case we have $P\left(S^* = \{a_j\} \mid A, |S^*| = 1\right) \propto \alpha_j P\left(A \mid S^* = \{a_j\}\right)$.

§ 4 ESTIMATING THE GROUND TRUTH

Our aim is the intertwined estimation of the ground truth and the parameters via maximizing the total likelihood of the instances:

$$
\mathcal{L}(A,S,p,q,t) = \prod_{z=1}^{L} \widetilde{P}(S_z) \prod_{i=1}^{n} P(A_i^z \mid S_z)
$$

where:

$$
P(A_i^z \mid S_z) = p_i^{|A_i^z \cap S_z|}\, q_i^{|A_i^z \cap \overline{S_z}|}\, (1 - p_i)^{|\overline{A_i^z} \cap S_z|}\, (1 - q_i)^{|\overline{A_i^z} \cap \overline{S_z}|}
$$

To this end, we introduce an iterative algorithm whose two main steps are presented in turn in the next subsections, before the full algorithm is formally defined and its convergence shown. These two steps are:

 * Estimating the ground truths given the parameters.

 * Estimating the parameters given the ground truths.
Simply put, the algorithm consists in iterating these two steps until it converges to a fixed point.

§ 4.1 ESTIMATING THE GROUND TRUTH GIVEN THE VOTES AND THE PARAMETERS

Since instances are independent given the parameters, we focus here on one instance with ground truth $S^*$ and profile $A = (A_1,\ldots,A_n)$. Before diving into maximum likelihood estimation (MLE), we introduce some definitions. In this subsection, we suppose that the parameters $(p_i,q_i)_{i \in \mathcal{N}}$ and $(t_j)_{j \in \mathcal{A}}$ are known (later on, these parameters will be replaced by their estimations at each iteration of the algorithm). Thus, all in all, input and output are as follows:

 * Input: approval profile $A$; parameters $(p_i,q_i)_{i \in \mathcal{N}}$ and $(t_j)_{j \in \mathcal{A}}$.

 * Output: MLE of the ground truth $S^*$.

Definition 1 (weighted approval score). Given an approval profile $(A_1,\ldots,A_n)$, noise parameters $(p_i,q_i)_{1 \leq i \leq n}$ and prior parameters $(t_j)_{1 \leq j \leq m}$, define:

$$
app_w(a_j) = \ln\left(\frac{t_j}{1 - t_j}\right) + \sum_{i :\, a_j \in A_i} \ln\left(\frac{p_i(1 - q_i)}{q_i(1 - p_i)}\right)
$$

The scores $app_w(a_j)$ can be interpreted as weighted approval scores for an $(n+m)$-voter profile where:

 * for each voter $1 \leq i \leq n$: voter $i$ has a weight $w_i = \ln\left(\frac{p_i(1-q_i)}{q_i(1-p_i)}\right)$ and casts approval ballot $A_i$.

 * for each $1 \leq j \leq m$: there is a virtual voter with weight $w_j = \ln\left(\frac{t_j}{1-t_j}\right)$ who casts approval ballot $A_j = \{a_j\}$.

While the weight of each voter $i \in \mathcal{N}$ depends on her reliability, the prior information on each alternative plays the role of a virtual voter who approves only that alternative, with a weight that increases as the prior parameter increases.

From now on, we suppose without loss of generality that the alternatives are ranked according to their score:

$$
app_w(a_1) \geq app_w(a_2) \geq \cdots \geq app_w(a_m)
$$

Definition 2 (threshold and partition). Define the threshold:

$$
\tau_n = \sum_{i=1}^{n} \ln\left(\frac{1 - q_i}{1 - p_i}\right)
$$

and the partition of the set of alternatives into three sets:

$$
\begin{cases} S_{max}^{\tau_n} &= \{a \in \mathcal{A} : app_w(a) > \tau_n\} \\ S_{tie}^{\tau_n} &= \{a \in \mathcal{A} : app_w(a) = \tau_n\} \\ S_{min}^{\tau_n} &= \mathcal{A} \setminus (S_{max}^{\tau_n} \cup S_{tie}^{\tau_n}) \end{cases}
$$

and let $k_{max}^{\tau_n} = |S_{max}^{\tau_n}|$, $k_{tie}^{\tau_n} = |S_{tie}^{\tau_n}|$, $k_{min}^{\tau_n} = |S_{min}^{\tau_n}|$.

The next result characterizes the sets in $\mathcal{S}$ that are MLEs of the ground truth given the parameters.

Theorem 1.
$\widetilde{S} \in \arg\max_{S \in \mathcal{S}} \mathcal{L}(A,S,p,q,t)$ if and only if there exists $k \in [l,u]$ such that $\widetilde{S}$ is the set of the $k$ alternatives with the highest values of $app_w$ and:

$$
\begin{cases} |\widetilde{S} \cap S_{max}^{\tau_n}| &= \min(u, k_{max}^{\tau_n}) \\ |\widetilde{S} \cap S_{min}^{\tau_n}| &= \max(0, l - k_{tie}^{\tau_n} - k_{max}^{\tau_n}) \end{cases} \tag{1}
$$

So the estimator $\widetilde{S}$ is made of some top-$k$ alternatives, where the possible values of $k$ are determined by Eq. (1). The first equation imposes that $\widetilde{S}$ includes as many elements as possible from $S_{max}^{\tau_n}$ (without exceeding the upper bound $u$), whereas the second one imposes that $\widetilde{S}$ includes as few elements as possible from $S_{min}^{\tau_n}$ (without getting below the lower bound $l$). An example is included in the appendix.

Proof. Since $\widetilde{P}(S) > 0 \Leftrightarrow S \in \mathcal{S}_{l,u}$, we have that $\arg\max_{S \in \mathcal{S}} L(S) = \arg\max_{S \in \mathcal{S}_{l,u}} L(S)$. Moreover, we have that for any $S \in \mathcal{S}_{l,u}$:

$$
\begin{aligned}
L(S) &= \widetilde{P}(S) \prod_{i=1}^{n} p_i^{|A_i \cap S|}\, q_i^{|A_i \cap \bar{S}|}\, (1-p_i)^{|\overline{A_i} \cap S|}\, (1-q_i)^{|\overline{A_i} \cap \bar{S}|} \\
&= \widetilde{P}(S) \prod_{i=1}^{n} p_i^{|A_i \cap S|}\, q_i^{|A_i| - |A_i \cap S|}\, (1-p_i)^{|S| - |A_i \cap S|}\, (1-q_i)^{|\overline{A_i}| - |S| + |A_i \cap S|} \\
&\propto \widetilde{P}(S) \prod_{i=1}^{n} \left[\frac{1-p_i}{1-q_i}\right]^{|S|} \left[\frac{p_i(1-q_i)}{q_i(1-p_i)}\right]^{|A_i \cap S|} \\
&\propto \frac{1}{\beta} \prod_{a_j \in S} t_j \prod_{a_j \notin S} (1-t_j) \prod_{i=1}^{n} \left[\frac{1-p_i}{1-q_i}\right]^{|S|} \left[\frac{p_i(1-q_i)}{q_i(1-p_i)}\right]^{|A_i \cap S|} \\
&\propto \prod_{a_j \in S} \frac{t_j}{1-t_j} \prod_{i=1}^{n} \left[\frac{1-p_i}{1-q_i}\right]^{|S|} \left[\frac{p_i(1-q_i)}{q_i(1-p_i)}\right]^{|A_i \cap S|}
\end{aligned}
$$

Thus the log-likelihood reads:

$$
l(S) = \sum_{a_j \in S} \ln\frac{t_j}{1-t_j} + \sum_{i=1}^{n} \left[|S| \ln\frac{1-p_i}{1-q_i} + |A_i \cap S| \ln\frac{p_i(1-q_i)}{q_i(1-p_i)}\right]
$$
$$
= \sum_{a_j \in S} \Bigg[\underbrace{\ln\frac{t_j}{1-t_j} + \sum_{i :\, a_j \in A_i} \ln\frac{p_i(1-q_i)}{q_i(1-p_i)}}_{app_w(a_j)} - \underbrace{\sum_{i=1}^{n} \ln\frac{1-q_i}{1-p_i}}_{\tau_n}\Bigg]
$$

Writing $l(a_j) = app_w(a_j) - \tau_n$ for the contribution of alternative $a_j$, this means that $a \in S_{max}^{\tau_n}$ if and only if $l(a) > 0$, $a \in S_{min}^{\tau_n}$ if and only if $l(a) < 0$, and $a \in S_{tie}^{\tau_n}$ if and only if $l(a) = 0$. Now, let $S_M$ be a maximizer of the likelihood. Since $l(a_j) \geq l(a_h) \Leftrightarrow app_w(a_j) \geq app_w(a_h)$, we have that $S_M$, which maximizes $\sum_{a_j \in S} l(a_j)$, is made of top-$k$ alternatives for some $k \in [l,u]$.

Furthermore, $|S_M \cap S_{min}^{\tau_n}| = \max(0, l - k_{tie}^{\tau_n} - k_{max}^{\tau_n})$. Start by noticing that $|S_M \cap S_{min}^{\tau_n}| \geq \max(0, l - k_{tie}^{\tau_n} - k_{max}^{\tau_n})$, since $|S_M \cap S_{min}^{\tau_n}| \geq l - |S_M \cap S_{max}^{\tau_n}| - |S_M \cap S_{tie}^{\tau_n}| \geq l - k_{max}^{\tau_n} - k_{tie}^{\tau_n}$. Suppose that $|S_M \cap S_{min}^{\tau_n}| > \max(0, l - k_{tie}^{\tau_n} - k_{max}^{\tau_n})$. Then we have that $|S_M| > l$ because otherwise, if $|S_M| = l$, then $|S_M \cap S_{max}^{\tau_n}| + |S_M \cap S_{tie}^{\tau_n}| = l - |S_M \cap S_{min}^{\tau_n}| < k_{max}^{\tau_n} + k_{tie}^{\tau_n}$, which would mean that there are elements of $S_{tie}^{\tau_n}$ and $S_{max}^{\tau_n}$ which are not in $S_M$: a contradiction, since $|S_M \cap S_{min}^{\tau_n}| > 0$ and $S_M$ is a top-$k$ set. Now consider $a \in S_M \cap S_{min}^{\tau_n}$: we have that $|S_M \setminus \{a\}| \geq l$ and $l(S_M) = l(S_M \setminus \{a\}) + l(a) < l(S_M \setminus \{a\})$, which is a contradiction.

With the same idea we can prove that $|S_M \cap S_{max}^{\tau_n}| = \min(u, k_{max}^{\tau_n})$.

Conversely, consider an admissible set $S$ of top-$k$ alternatives that verifies the constraints (1). Let $S_M$ be an MLE which, by the first part of the proof, is a top-$k'$ set that also satisfies the same constraints (1). Thus we have that $|S_M \cap S_{max}^{\tau_n}| = |S \cap S_{max}^{\tau_n}| = \min(u, k_{max}^{\tau_n})$, and since $S$ and $S_M$ are top-$k$ and top-$k'$ sets, we have that $S \cap S_{max}^{\tau_n} = S_M \cap S_{max}^{\tau_n}$. Similarly, we have that $S \cap S_{min}^{\tau_n} = S_M \cap S_{min}^{\tau_n}$.
This suffices to prove that $l(S) = l(S_M)$ is maximal.

Notice that when $(l,u) = (0,m)$, the problem degenerates into a collection of label-wise problems, one for each alternative: $a_j$ is selected if $a_j \in S_{max}^{\tau_n}$, rejected if $a_j \in S_{min}^{\tau_n}$, and those that are on the fence (in $S_{tie}^{\tau_n}$) can arbitrarily be selected or not.

Example 1. Consider 5 alternatives $\mathcal{A} = \{a,b,c,d,e\}$ and 10 voters $\mathcal{N}$ all sharing the same parameters $(p,q) = (0.7, 0.4)$. We thus have that all voters share the same weight $w = \ln\left(\frac{p(1-q)}{q(1-p)}\right) = 1.25$ and $\tau_n = \sum_{i=1}^{n} \ln\left(\frac{1-q}{1-p}\right) = 6.93$. We consider the constraints $(l,u) = (1,4)$.

First, suppose that $t_d = 0.6$ and that $t_j = 0.5$ for all the remaining alternatives. Consider also the approval counts (and weighted approval scores) in the table below.

| Alternative | $a$ | $b$ | $c$ | $d$ | $e$ |
|---|---|---|---|---|---|
| Approval count | 9 | 8 | 7 | 5 | 5 |
| $app_w$ | 11.25 | 10 | 8.75 | 6.65 | 6.25 |

We can easily check, by Theorem 1, that $\widetilde{S} = \arg\max_{S \in \mathcal{S}} P(S = S^* \mid A) = \{a,b,c\}$. We have that $S_{max}^{\tau_n} = \{a,b,c\}$, $S_{tie}^{\tau_n} = \varnothing$ and $S_{min}^{\tau_n} = \{d,e\}$. We know that there exists some $k \in [1,4]$ such that $\widetilde{S}$ consists of the top $k$ alternatives. We also have that:

$$
\begin{cases} |\widetilde{S} \cap S_{max}^{\tau_n}| &= \min(u, k_{max}^{\tau_n}) = 3 \Rightarrow \{a,b,c\} \subseteq \widetilde{S} \\ |\widetilde{S} \cap S_{min}^{\tau_n}| &= \max(0, l - k_{tie}^{\tau_n} - k_{max}^{\tau_n}) = 0 \Rightarrow d,e \notin \widetilde{S} \end{cases}
$$

So the only possibility is $\widetilde{S} = \{a,b,c\}$.

§ 4.2 ESTIMATING THE PARAMETERS GIVEN THE GROUND TRUTH

§ 4.2.1 ESTIMATING THE PRIOR PARAMETERS OVER ALTERNATIVES

Once the ground truths are estimated at one iteration of the algorithm, the next step consists in estimating the prior parameters $(t_j)_{j \in \mathcal{A}}$, the ground truths being given (in Subsection 4.3 the ground truth will be replaced by its estimation at each iteration). The next proposition gives the closed-form expression of the MLE of the prior parameter of each alternative, given the ground truth $S_z^*$ of each instance, once the prior parameters of all other alternatives are fixed.

 * Input: Approval profile $(A_1,\ldots,A_n)$, ground truths $S_z^*$, and all prior parameters but one, $(t_h)_{h \neq j}$.

 * Output: MLE of $t_j$.

Proposition 2.
For every $a_j \in \mathcal{A}$:

$$
\underset{t \in (0,1)}{\arg\max}\, \mathcal{L}(A,S,p,q,t,t_{-j}) = \frac{occ(j)\,\bar{\alpha}_j}{(L - occ(j))\,\underline{\alpha}_j + occ(j)\,\bar{\alpha}_j}
$$

$$
\text{where: } \begin{cases} \bar{\alpha}_j &= \beta\big((l-1)^+,\, u-1,\, t_{-j}\big) \\ \underline{\alpha}_j &= \beta\big(l,\, u,\, t_{-j}\big) \\ occ(j) &= \big|\{z \in \{1,\ldots,L\} : a_j \in S_z\}\big| \end{cases}
$$

with $t_{-j} = (t_h)_{h \neq j}$ and $\beta$ computed as in Section 3 over the alternatives other than $a_j$. Notice that $\bar{\alpha}_j = P(l \leq |S^*| \leq u \mid a_j \in S^*)$ and $\underline{\alpha}_j = P(l \leq |S^*| \leq u \mid a_j \notin S^*)$, so $\beta = \bar{\alpha}_j t_j + \underline{\alpha}_j (1 - t_j)$. $occ(j)$ is the number of instances whose ground truth contains $a_j$. The proof is deferred to the Appendix.

We will see later that the algorithm applies Proposition 2 sequentially to estimate the alternatives' parameters one by one (see Example 2).

§ 4.2.2 ESTIMATING THE VOTER PARAMETERS

Once the ground truths are known (or estimated), we can estimate the voters' parameters $(p,q)$.

 * Input: Instances $(A^1,\ldots,A^L)$, ground truths $(S_1^*,\ldots,S_L^*)$.

 * Output: MLE of voter reliabilities $(p,q)$.

The next result simply states that the maximum likelihood estimator of $p_i$ for some voter is the fraction of ground-truth alternatives (over all instances) that the voter approves; the estimation of $q_i$ is similar. See Example 2.

Proposition 3. Fix sets $S_z \in \mathcal{S}_{l,u}$ and prior parameters $t_j$. Then:

$$
\underset{(p,q) \in (0,1)^{2n}}{\arg\max}\, \mathcal{L}(A,S,p,q,t) = (\widehat{p}, \widehat{q})
$$

where: $\widehat{p}_i = \frac{\sum_{z \in L} |A_i^z \cap S_z|}{\sum_{z \in L} |S_z|}$, $\widehat{q}_i = \frac{\sum_{z \in L} |A_i^z \cap \overline{S_z}|}{\sum_{z \in L} |\overline{S_z}|}$

The (simple) proof is omitted.

§ 4.3 ALTERNATING MAXIMUM LIKELIHOOD ESTIMATION

Now the estimation of the ground truths and that of the parameters are intertwined to maximize the overall likelihood $\mathcal{L}(A,S,p,q,t)$ by the Alternating Maximum Likelihood Estimation (AMLE) algorithm. AMLE is an iterative procedure similar to the Expectation-Maximization-like procedure introduced in Baharad et al. [2011], but with a coordinate-steepest-ascent-like iteration, whose aim is to jointly estimate the voter reliabilities, the alternatives' prior parameters and the instances' ground truths.
The idea behind this estimation consists in alternating an MLE of the ground truths given the current estimate of the parameters, and an update of these parameters via an MLE based on the current estimate of the ground truths.${}^{1}$ Each of these steps has been discussed in the previous subsections; they are now incorporated into Algo. 1.

Algorithm 1 AMLE procedure

Input: Approval ballots $(A_i^z)_{1 \leq z \leq L,\, i \in \mathcal{N}}$

 Initial parameters $\widehat{\theta}^{(0)}$, Bounds $(l,u)$, Tolerance $\varepsilon$

Output: Estimations $(\widehat{S}_z)$, $(\widehat{p}_i, \widehat{q}_i)$, $(\widehat{t}_j)$

 repeat

 for $z = 1 \ldots L$ do

 Compute $\widehat{S}_z^{(v+1)} = \{a_1,\ldots,a_k\}$ (the top-$k$ alternatives by $app_w$ in instance $z$, cf. Theorem 1) with $k \in [l,u]$ and:

$$
\begin{cases} |\widehat{S}_z^{(v+1)} \cap S_{max,z}^{(v)}| &= \min(u, k_{max,z}^{(v)}) \\ |\widehat{S}_z^{(v+1)} \cap S_{min,z}^{(v)}| &= \max(0, l - k_{tie,z}^{(v)} - k_{max,z}^{(v)}) \end{cases}
$$

 end for

 for $i = 1 \ldots n$ do

 Update the parameters $(p_i, q_i)$ given $\widehat{S}^{(v+1)}$:

$$
\widehat{p}_i^{(v+1)} = \frac{\sum_{z \in L} |A_i^z \cap \widehat{S}_z^{(v+1)}|}{\sum_{z \in L} |\widehat{S}_z^{(v+1)}|}, \qquad \widehat{q}_i^{(v+1)} = \frac{\sum_{z \in L} |A_i^z \cap \overline{\widehat{S}_z^{(v+1)}}|}{\sum_{z \in L} |\overline{\widehat{S}_z^{(v+1)}}|}
$$

 end for

 for $j = 1 \ldots m$ do

 Update $\widehat{t}_j^{(v+1)}$ by:

$$
\widehat{t}_j^{(v+1)} = \frac{occ^{(v+1)}(j)\, \bar{\alpha}_j^{(v+1)}}{occ^{(v+1)}(j)\, \bar{\alpha}_j^{(v+1)} + (L - occ^{(v+1)}(j))\, \underline{\alpha}_j^{(v+1)}}
$$

 where:

$$
\begin{cases} occ^{(v+1)}(j) &= \sum_{z=1}^{L} \mathbb{1}\{a_j \in \widehat{S}_z^{(v+1)}\} \\ \bar{\alpha}_j^{(v+1)} &= \beta\big((l-1)^+,\, u-1,\, \widehat{t}_{<j}^{(v+1)}, \widehat{t}_{>j}^{(v)}\big) \\ \underline{\alpha}_j^{(v+1)} &= \beta\big(l,\, u,\, \widehat{t}_{<j}^{(v+1)}, \widehat{t}_{>j}^{(v)}\big) \end{cases}
$$

 end for

 until $\|\widehat{\theta}^{(v+1)} - \widehat{\theta}^{(v)}\| \leq \varepsilon$

The algorithm runs until a convergence criterion is met, namely a bound on the norm of the change in the parameters' estimations.
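To complement the pseudocode, the following is a minimal, self-contained Python sketch of one possible implementation of AMLE. It is ours, not the authors' code: the function names are made up, the exact enumeration inside `beta` is exponential in $m$ and only suited to small instances, estimates are clipped away from $\{0,1\}$ to keep the log-weights finite, and score ties are broken by alternative index (the paper only assumes some tie-breaking priority, cf. footnote 1).

```python
import math
from itertools import combinations

def app_w(j, ballots, p, q, t):
    """Weighted approval score of alternative j (Definition 1)."""
    s = math.log(t[j] / (1 - t[j]))
    for i, A in enumerate(ballots):
        if j in A:
            s += math.log(p[i] * (1 - q[i]) / (q[i] * (1 - p[i])))
    return s

def estimate_ground_truth(ballots, p, q, t, l, u, m):
    """MLE of one instance's ground truth (Theorem 1): take every alternative
    whose score exceeds the threshold tau_n, capped at u and topped up to l."""
    tau = sum(math.log((1 - q[i]) / (1 - p[i])) for i in range(len(ballots)))
    scores = [app_w(j, ballots, p, q, t) for j in range(m)]
    ranked = sorted(range(m), key=lambda j: (-scores[j], j))
    k_pos = sum(1 for s in scores if s > tau)
    k = min(max(k_pos, l), u)
    return set(ranked[:k])

def update_voters(instances, truths, m, eps=1e-6):
    """Proposition 3: p_i / q_i are the voter's true/false positive rates over
    all instances (assumes at least one positive and one negative label)."""
    n = len(instances[0])
    pos = sum(len(S) for S in truths)
    neg = sum(m - len(S) for S in truths)
    p = [min(max(sum(len(z[i] & S) for z, S in zip(instances, truths)) / pos,
                 eps), 1 - eps) for i in range(n)]
    q = [min(max(sum(len(z[i] - S) for z, S in zip(instances, truths)) / neg,
                 eps), 1 - eps) for i in range(n)]
    return p, q

def beta(l, u, ts):
    """beta(l, u, t): probability that a set drawn with independent inclusion
    probabilities ts has size in [l, u] (exact, exponential in len(ts))."""
    return sum(
        math.prod(ts[h] if h in set(S) else 1 - ts[h] for h in range(len(ts)))
        for k in range(max(l, 0), u + 1)
        for S in combinations(range(len(ts)), k))

def update_prior(truths, t, j, l, u, eps=1e-6):
    """Proposition 2: coordinate update of t_j, the other t_h being fixed."""
    t_minus = t[:j] + t[j + 1:]
    a_bar = beta(max(l - 1, 0), u - 1, t_minus)  # P(size ok | a_j in S*)
    a_low = beta(l, u, t_minus)                  # P(size ok | a_j not in S*)
    L, occ = len(truths), sum(1 for S in truths if j in S)
    val = occ * a_bar / ((L - occ) * a_low + occ * a_bar)
    return min(max(val, eps), 1 - eps)

def amle(instances, m, l, u, p, q, t, eps=1e-5, max_iter=100):
    """Alternate the Theorem 1 / Prop. 3 / Prop. 2 steps until the parameter
    vector moves by less than eps in the sup norm."""
    for _ in range(max_iter):
        truths = [estimate_ground_truth(b, p, q, t, l, u, m) for b in instances]
        p_new, q_new = update_voters(instances, truths, m)
        t_new = list(t)
        for j in range(m):                       # sequential updates, as in Algo. 1
            t_new[j] = update_prior(truths, t_new, j, l, u)
        delta = max(abs(a - b) for a, b in
                    zip(p + q + t, p_new + q_new + t_new))
        p, q, t = p_new, q_new, t_new
        if delta <= eps:
            break
    return truths, p, q, t
```

On the profile of Example 2 below (alternatives indexed from 0), the call would be as follows; with this index-based tie-breaking it reproduces the first-iteration ground-truth estimates $\widehat{S}^{(1)}$ of the worked example, though other tie-breaking rules may yield a slightly different trajectory:

```python
instances = [
    [{0, 3}, {1}, {1, 2, 3}],   # A^1: ballots of voters 1..3
    [{0}, {4}, {1, 2, 4}],      # A^2
    [{2}, {3}, {1, 2}],         # A^3
    [{0}, {0}, {2}],            # A^4
]
truths, p, q, t = amle(instances, m=5, l=1, u=2,
                       p=[0.5] * 3, q=[0.44, 0.41, 0.32], t=[0.5] * 5)
```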
In practice we chose $\ell_\infty$, but any other norm could be used in Algorithm 1 since, in finite dimensions, all norms are equivalent (a sequence that converges according to one norm converges according to any other).

We define the vector of parameters $\widehat{\theta}^{(v)} = (\widehat{p}^{(v)}, \widehat{q}^{(v)}, \widehat{t}^{(v)})$ containing the voters' estimated noise parameters as well as the estimated prior parameters at iteration $v$. In particular, $\widehat{\theta}^{(0)}$ is the vector of initial values given as input. The choice of the exact initial values depends on the application at hand.

Note that at convergence, only local optimality is guaranteed, as is classical in optimization.

Theorem 4. For any initial values $\widehat{\theta}^{(0)}$, AMLE converges to a fixed point after a finite number of iterations.

We only provide a sketch of the proof and defer the full proof to the Appendix.

Proof. First we have by Theorem 1 that:

$$
\mathcal{L}\big(A, \widehat{S}^{(v+1)}, \widehat{\theta}^{(v)}\big) \geq \mathcal{L}\big(A, \widehat{S}^{(v)}, \widehat{\theta}^{(v)}\big)
$$

By Proposition 2 and Proposition 3, we deduce that:

$$
\mathcal{L}\big(A, \widehat{S}^{(v+1)}, \widehat{\theta}^{(v+1)}\big) \geq \mathcal{L}\big(A, \widehat{S}^{(v+1)}, \widehat{\theta}^{(v)}\big)
$$

Hence, the likelihood increases at every step. Since there is a finite number of possible values for the ground truths (namely $2^{mL}$), the convergence of the algorithm is guaranteed.

Because $\mathcal{L}\big(A, \widehat{S}^{(v+1)}, \widehat{\theta}^{(v+1)}\big) \geq \mathcal{L}\big(A, \widehat{S}^{(v+1)}, \widehat{\theta}^{(v)}\big) \geq \mathcal{L}\big(A, \widehat{S}^{(v)}, \widehat{\theta}^{(v)}\big)$, the likelihood increases at each step of the algorithm. This guarantees that whenever the execution stops, the likelihood is closer to the maximum than it initially was. Therefore the algorithm can not only be run until convergence, but also as an anytime algorithm.

Example 2. Take $n = 3$, $m = 5$, $l = 1$, $u = 2$, $L = 4$, and the following profile and initial parameters:

$$
\left\{ \begin{array}{lll} \widehat{p}_1^{(0)} = 0.5 & \widehat{p}_2^{(0)} = 0.5 & \widehat{p}_3^{(0)} = 0.5 \\ \widehat{q}_1^{(0)} = 0.44 & \widehat{q}_2^{(0)} = 0.41 & \widehat{q}_3^{(0)} = 0.32 \\ \widehat{t}_1^{(0)} = \cdots = \widehat{t}_5^{(0)} = 0.5 & & \end{array} \right.
$$
|  | $A^1$ | $A^2$ | $A^3$ | $A^4$ |
|---|---|---|---|---|
| Voter 1 | $\{a_1, a_4\}$ | $\{a_1\}$ | $\{a_3\}$ | $\{a_1\}$ |
| Voter 2 | $\{a_2\}$ | $\{a_5\}$ | $\{a_4\}$ | $\{a_1\}$ |
| Voter 3 | $\{a_2, a_3, a_4\}$ | $\{a_2, a_3, a_5\}$ | $\{a_2, a_3\}$ | $\{a_3\}$ |

Estimating the ground truth: The first step is the application of Theorem 1 to estimate the ground truth of the instances given the initial parameters, yielding $\widehat{S}_1^{(1)} = \{a_2, a_4\}$, $\widehat{S}_2^{(1)} = \{a_2, a_5\}$, $\widehat{S}_3^{(1)} = \{a_2, a_3\}$, $\widehat{S}_4^{(1)} = \{a_1, a_3\}$.

Estimating the voter reliabilities: In the next step we use these estimates of the ground truths to compute the MLEs of the voter reliabilities. For instance, voter 1 has 2 false positive labels out of a total of 12 negative labels, so $\widehat{q}_1^{(1)} = \frac{2}{12} = 0.17$, and she has 3 true positive labels out of 8 positive ones, so $\widehat{p}_1^{(1)} = \frac{3}{8} = 0.38$. In the end, we get:

$$
\left\{ \begin{array}{lll} \widehat{p}_1^{(1)} = 0.38 & \widehat{p}_2^{(1)} = 0.38 & \widehat{p}_3^{(1)} = 0.88 \\ \widehat{q}_1^{(1)} = 0.17 & \widehat{q}_2^{(1)} = 0.08 & \widehat{q}_3^{(1)} = 0.17 \end{array} \right.
$$

Estimating the prior parameters: The final step of this iteration consists in updating the estimations of the prior parameters by applying Proposition 2 sequentially. First we estimate $\widehat{t}_1^{(1)}$ given $\widehat{S}^{(1)}$ and $\widehat{t}_2^{(0)}, \ldots, \widehat{t}_5^{(0)}$ by maximum likelihood estimation. We first compute $\bar{\alpha}_1 = \beta(0, 1, t_2, \ldots, t_5) = 0.3125$, $\underline{\alpha}_1 = \beta(1, 2, t_2, \ldots, t_5) = 1$ and $occ(a_1) = 1$. Then the MLE of $t_1$ is:

$$
\widehat{t}_1 = \frac{occ(a_1)\, \bar{\alpha}_1}{(L - occ(a_1))\, \underline{\alpha}_1 + occ(a_1)\, \bar{\alpha}_1} = 0.09
$$

The next steps are to estimate $\widehat{t}_2^{(1)}$ given $\widehat{t}_1^{(1)}, \widehat{t}_3^{(0)}, \widehat{t}_4^{(0)}, \widehat{t}_5^{(0)}$, and so on. Finally, we get:

$$
\widehat{t}_1^{(1)} = 0.09,\ \widehat{t}_2^{(1)} = 0.56,\ \widehat{t}_3^{(1)} = 0.28,\ \widehat{t}_4^{(1)} = 0.14,\ \widehat{t}_5^{(1)} = 0.20
$$

Fix $\varepsilon = 10^{-5}$. We repeat all steps until convergence (according to $\ell_\infty$), which is reached after 5 full iterations.

${}^{1}$ In case of ties between subsets when estimating the ground truth, a tie-breaking priority over subsets is used. No ties occurred in our experiments.
In the fixed point, the estimations of the ground truths are:

$$
\widehat{S}_1 = \{a_2, a_3\},\ \widehat{S}_2 = \{a_2, a_3\},\ \widehat{S}_3 = \{a_2, a_3\},\ \widehat{S}_4 = \{a_3\}
$$

§ 5 EXPERIMENTS

§ 5.1 EXPERIMENT DESIGN AND DATA COLLECTION

We designed an image annotation task as a football quiz.${}^{2}$ We selected 15 pictures taken during different matches between two of the following teams: Real Madrid, Inter Milan, Bayern Munich, Barcelona, Paris Saint-Germain. In each picture, players from both teams may appear, or players from only one team; therefore $l = 1$ and $u = 2$. Each participant is shown the instances one by one, and is each time asked to select all the teams she can spot (see Figure 1). We designed a simple incentive for participants, consisting in ranking them according to the following principle:

 * Participants get one point whenever their answer contains all correct alternatives for a picture. They are then ranked according to their cumulated points.

 * To break ties, the participant who selected a smaller number of alternatives overall is ranked first.

Figure 1: Example of Annotation Task

We gathered the answers of 76 participants: only two of them spammed by simply selecting all the alternatives. Figure 2 shows that voters responded well to the incentives by mostly selecting one or two alternatives.

Figure 2: Histogram of answers' size

§ 5.2 ANNA KARENINA'S INITIALIZATION

Inspired by the Anna Karenina Principle in Meir et al. [2019], we assign more weight to voters who are closer to the others on average, initializing the precision parameters $(p_i, q_i)$ accordingly. This suits our context, where voter competence is highly polarized: some voters are experts and cast similar answers close to the ground truth, while the others are less reliable and their answers are dispersed among all combinations. We use the following heuristic (see Algorithm 2) for the initialization:

Algorithm 2 Initializing $(p_i, q_i)_i$

Input: Approval ballots $(A_i^z)_{z,i}$

Output: Initialization $(\widehat{p}_i^{(0)}, \widehat{q}_i^{(0)})$

 - Compute $w_{max} = \frac{n}{1+n}$, $w_{min} = \frac{1}{1+n}$

 - Compute $d_i = \sum_{j \neq i} d_{Jacc}(A_i, A_j)$ (Jaccard distance)

 - Compute $d_{max} = \max_i d_i$, $d_{min} = \min_i d_i$

 - Compute $w_i = (w_{max} - w_{min}) \left(\frac{\frac{1}{d_i} - \frac{1}{d_{max}}}{\frac{1}{d_{min}} - \frac{1}{d_{max}}}\right) + w_{min}$

 - Fix $\widehat{p}_i^{(0)} = \frac{1}{2}$ and $\widehat{q}_i^{(0)} = \frac{1 - \frac{e^{w_i} - 1}{e^{w_i} + 1}}{2}$

Algorithm 2 guarantees that the parameters $(\widehat{p}_i^{(0)}, \widehat{q}_i^{(0)})$ of a voter are such that her initial weight equals $w_i$, and that $\frac{w_{max}}{w_{min}} = n$: therefore, initially, the voter closest on average to the other voters counts $n$ times more than the voter with the largest average distance.

${}^{2}$ The dataset and code are in the supplementary material.
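As an illustration, here is a small Python sketch of this initialization under our reading of Algorithm 2 (the function names are ours; we take $d_{Jacc}(A_i, A_j)$ to be the Jaccard distance summed over instances, and we assume distinct, nonzero average distances):

```python
import math

def jaccard(A, B):
    """Jaccard distance between two approval ballots (sets)."""
    union = A | B
    return 1 - len(A & B) / len(union) if union else 0.0

def init_parameters(ballots):
    """Algorithm 2 heuristic: ballots[i][z] is voter i's ballot on instance z.
    Voters closer to the others on average get a larger initial weight w_i;
    with p_i = 1/2, choosing q_i = (1 - tanh(w_i / 2)) / 2 makes the induced
    voter weight ln(p_i (1 - q_i) / (q_i (1 - p_i))) equal to w_i."""
    n = len(ballots)
    w_max, w_min = n / (1 + n), 1 / (1 + n)
    d = [sum(jaccard(a, b)
             for j in range(n) if j != i
             for a, b in zip(ballots[i], ballots[j])) for i in range(n)]
    d_max, d_min = max(d), min(d)
    p0, q0 = [], []
    for d_i in d:
        # Inverse-distance rescaling of d_i into a weight in [w_min, w_max].
        w_i = (w_max - w_min) * ((1/d_i - 1/d_max) / (1/d_min - 1/d_max)) + w_min
        p0.append(0.5)
        q0.append((1 - math.tanh(w_i / 2)) / 2)
    return p0, q0
```

The resulting `(p0, q0)` can then serve as the initial values $\widehat{p}^{(0)}, \widehat{q}^{(0)}$ passed to Algorithm 1.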
Figure 3: Accuracies of different aggregation methods

In the Appendix we give an example illustrating this initialization, and an empirical comparison with other classical initializations.

§ 5.3 RESULTS

To assess the importance of prior information on the size of the ground truth, we tested the AMLE algorithm with free bounds $(l,u) = (0,m)$ (referred to as $\mathrm{AMLE}_f$) and with the true bounds $(l,u) = (1,2)$ (referred to as $\mathrm{AMLE}_c$). We also apply the modal rule of Caragiannis et al. [2020], which outputs the subset of alternatives that most frequently appears as an approval ballot, $\arg\max_{S \in \mathcal{S}} |\{i \in \mathcal{N} : S = A_i\}|$, and a variant of the label-wise majority rule, which outputs the subset of alternatives $S$ such that $a \in S \Leftrightarrow |\{i \in \mathcal{N} : a \in A_i\}| > \frac{n}{2}$. If this subset is empty, it is replaced by the alternative with the highest approval count, and if it has more than two alternatives, we only keep the top-2 alternatives.

We took 20 batches of $n = 10$ to $n = 74$ randomly drawn voters and applied the four methods to all of them (see Figures 3a and 3b). As classically done in the literature [Nguyen et al., 2020], we use the Hamming accuracy $\frac{1}{mL} \sum_{z=1}^{L} \left(|S_z^* \cap \widehat{S}^z| + |\overline{S_z^*} \cap \overline{\widehat{S}^z}|\right)$ and the 0/1 accuracy $\frac{1}{L} \sum_{z=1}^{L} \mathbb{1}\{S_z^* = \widehat{S}^z\}$ as metrics, and report their 95% confidence intervals.

We notice that the majority and the modal rules are outperformed by AMLE, which can be explained by the fact that they do not take into account the voters' reliabilities. Comparing the performances of $\mathrm{AMLE}_c$ and $\mathrm{AMLE}_f$ emphasizes the importance of prior knowledge on the committee size for improving the quality of the estimation.

We also compared the execution times of $\mathrm{AMLE}_c$ and $\mathrm{AMLE}_f$ (see Figure 4), when run on an Intel Core i7-10610U CPU @ 1.80GHz (4 cores, 8 threads) with 32GB RAM. Unsurprisingly, $\mathrm{AMLE}_c$ needs more running time, especially for more than 40 voters.

Figure 4: Execution time

§ 6 CONCLUSION

We study multi-winner approval voting from an epistemic point of view. The specificity of our work is threefold: (a) the ground truth consists of a set of alternatives; (b) the input consists of approval votes; (c) the competence of the various voters is not known a priori but learnt from the input. We proposed a noise model that incorporates the prior belief about the size of the ground truth. Then we derived an iterative algorithm to jointly estimate the ground truth labels, the voter noise parameters and the prior belief parameters, and we proved its convergence. Our algorithm is based on a simplification of Expectation-Maximization (EM), and its simple steps are more easily explainable to voters than EM and other similar statistical learning approaches.
Although we mainly considered a general multi-instance task that fits the collective annotation framework, where each voter answers several questions on the same set of alternatives, we can nonetheless apply the same algorithm to single-instance problems (such as the allocation of scarce medical resources) where only one question is answered. In this case, the prior parameters cannot be updated, and it suffices to fix them once and for all and to alternate between the estimation of the ground truth and that of the voter parameters.

In some contexts (e.g., patients in a hospital), alternatives and votes are not observed all at once but streamed. To cope with this online setup, we consider extending our AMLE algorithm in the spirit of Cappé and Moulines [2009].