ACL-OCL / Base_JSON /prefixE /json /eacl /2021.eacl-demos.37.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:44:28.998640Z"
},
"title": "PunKtuator: A Multilingual Punctuation Restoration System for Spoken and Written Text",
"authors": [
{
"first": "Varnith",
"middle": [],
"last": "Chordia",
"suffix": "",
"affiliation": {},
"email": "vchordia@parc.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Text transcripts without punctuation or sentence boundaries are hard to comprehend for both humans and machines. Punctuation marks play a vital role by providing meaning to the sentence and incorrect use or placement of punctuation marks can often alter it. This can impact downstream tasks such as language translation and understanding, pronoun resolution, text summarization, etc. for humans and machines. An automated punctuation restoration (APR) system with minimal human intervention can improve comprehension of text and help users write better. In this paper we describe a multitask modeling approach as a system to restore punctuation in multiple high resource-Germanic (English and German), Romanic (French)-and low resource languages-Indo-Aryan (Hindi) Dravidian (Tamil)-that does not require extensive knowledge of grammar or syntax of a given language for both spoken and written form of text. For German language and the given Indic based languages this is the first towards restoring punctuation and can serve as a baseline for future work.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Text transcripts without punctuation or sentence boundaries are hard to comprehend for both humans and machines. Punctuation marks play a vital role by providing meaning to the sentence and incorrect use or placement of punctuation marks can often alter it. This can impact downstream tasks such as language translation and understanding, pronoun resolution, text summarization, etc. for humans and machines. An automated punctuation restoration (APR) system with minimal human intervention can improve comprehension of text and help users write better. In this paper we describe a multitask modeling approach as a system to restore punctuation in multiple high resource-Germanic (English and German), Romanic (French)-and low resource languages-Indo-Aryan (Hindi) Dravidian (Tamil)-that does not require extensive knowledge of grammar or syntax of a given language for both spoken and written form of text. For German language and the given Indic based languages this is the first towards restoring punctuation and can serve as a baseline for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic speech recognition (ASR) has become ubiquitous these days and has wide applications in business and personal life. One of the drawbacks of ASR is it produces an unpunctuated stream of text. Restoring punctuation manually is a timeconsuming task. Apart from spoken text a large amount of written text online -blogs, articles, social media,etc. -sometimes lack the appropriate punctuation marks due to human inconsistencies, which can alter the meaning of text. An APR system designed with an understanding of ASR and written forms of text can help resolve these issues. Transcriptions passed to an APR system, can improve the following machine learning tasks such as machine translation, conversational agents, coreference resolution, etc. Further it can be used as an unsupervised auxiliary or pretext task, for training large scale transformer language models, as it would require understanding about global structure of the text. Prior punctuation restoration methods have mostly been solved using lexical features, prosodic features or combination of both. Due to large availability of text data, majority of the methods have focused on using lexical features. Early methods (Christensen et al., 2001) used Hidden Markov Models (HMM) to model punctuation using acoustic features such as pause duration, pitch and intensity. Though the acoustic based models perform well on ASR system, they can perform better when combined with textual data. Liu et al. (2006) ; Batista et al. (2007) ; Kol\u00e1\u0159 and Lamel (2012) proposed various methods that combined lexical features along with prosodic information thereby improving APR tasks. Alum\u00e4e (2015, 2016) proposed unidirectional and bidirectional Long Short Term Memory (Bi-LSTM) based punctuation prediction model which did not require extensive feature engineering. Though the above method considered the long distant token dependencies, it ignored label dependencies. 
To address label dependencies (Klejch et al., 2017 ) made use of recurrent neural networks for sequence to sequence mapping using an encoder-decoder architecture. Recently the use of transformer based approaches combination of speech and pre-trained word embeddings have achieved state of art performance on IWSLT datasets (spoken transcripts from TED talks for ASR tasks, but often used as benchmark for comparison of punctuation restoration models). Yi et al. (2020) used pretrained BERT (Devlin et al., 2018) that is used to perform adversarial multi-task learning to restore punctuation. Alam et al. (2020) and used an augmentation strategy to make models more robust to ASR errors. Though most approaches have shown considerable improvement in overcoming some of the challenges faced in terms of modeling and achieving the state of performance in spoken language transcripts in English, there are the following limitations:",
"cite_spans": [
{
"start": 1188,
"end": 1214,
"text": "(Christensen et al., 2001)",
"ref_id": "BIBREF5"
},
{
"start": 1455,
"end": 1472,
"text": "Liu et al. (2006)",
"ref_id": "BIBREF12"
},
{
"start": 1475,
"end": 1496,
"text": "Batista et al. (2007)",
"ref_id": "BIBREF1"
},
{
"start": 1499,
"end": 1521,
"text": "Kol\u00e1\u0159 and Lamel (2012)",
"ref_id": "BIBREF9"
},
{
"start": 1639,
"end": 1658,
"text": "Alum\u00e4e (2015, 2016)",
"ref_id": null
},
{
"start": 1955,
"end": 1975,
"text": "(Klejch et al., 2017",
"ref_id": "BIBREF8"
},
{
"start": 2377,
"end": 2393,
"text": "Yi et al. (2020)",
"ref_id": null
},
{
"start": 2415,
"end": 2436,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 2517,
"end": 2535,
"text": "Alam et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Restoring punctuation varies in spoken and written text due to differences in rules of writing and speaking. The frequent use of personal pronouns, colloquial words and usage of direct speech often results in more varied use of punctuation in spoken text as compared to written text. This often affects readability for humans and machines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Though there has been some research (Tilk and Alum\u00e4e, 2016; Kol\u00e1\u0159 and Lamel, 2012; Alam et al., 2020) that has focused on developing non-english APR system, extensive research and baseline results have not been studied for other languages.",
"cite_spans": [
{
"start": 38,
"end": 61,
"text": "(Tilk and Alum\u00e4e, 2016;",
"ref_id": "BIBREF16"
},
{
"start": 62,
"end": 84,
"text": "Kol\u00e1\u0159 and Lamel, 2012;",
"ref_id": "BIBREF9"
},
{
"start": 85,
"end": 103,
"text": "Alam et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To overcome some of the challenges, we make the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We implemented a multi-task multilingual punctuation restoration model. Our technique implements punctuation restoration task as sequence labeling task, which is jointly trained with language classifiers and text mode classification ('Spoken' and 'Written'). We use the proposed technique to build two multilingual models for high resource and low resource languages, thereby reducing the dependency of multiple monolingual language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We developed a web browser extension that can help multilingual spoken and written users to punctuate transcripts as a post-processing step. We have made a demo of the web extension available online. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We prepared training and test datasets and evaluated the performance of our proposed model. Further to evaluate the generalization of the model we evaluated across the benchmark IWSLT reference dataset. The code and models have been made publicly available. 2 2 Punctuation restoration system",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to varying set of language data, we segregated the data sources according to the languages, which we gathered for spoken and written text. For Written text we considered data from news web sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Gathering",
"sec_num": "2.1"
},
{
"text": "For high resource European languages, we considered a parallel sentence corpus known as the 'EU-ROPARL' corpus (Vanmassenhove and Hardmeier, 2018) for spoken text. This corpus is a collection of speeches made in the proceedings of European parliament from 1996 to 2012, transcribed as text.",
"cite_spans": [
{
"start": 111,
"end": 146,
"text": "(Vanmassenhove and Hardmeier, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "High Resource Languages",
"sec_num": "2.1.1"
},
{
"text": "To gather written text we used news articles from Alexa's top-25 ranked news sources. These were publicly available 3 for every language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High Resource Languages",
"sec_num": "2.1.1"
},
{
"text": "1 https://youtu.be/9FdkuENPhuY 2 https://github.com/VarnithChordia/ Multlingual_Punctuation_restoration 3 https://webhose.io/ ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High Resource Languages",
"sec_num": "2.1.1"
},
{
"text": "Due to lack of language resources available for indic languages for APR, we gather publicly released datasets. For Spoken text we used the Indian Prime Minister's address to the nation. These corpora manually translated into several Indian languages. Written text was obtained from Siripragada et al. (2020) who crawled articles articles released from the Press Information Bureau (PIB), an Indian government agency that provides information to news media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low Resource Languages",
"sec_num": "2.1.2"
},
{
"text": "Due to lack of readily available annotated datasets and large size corpora, we used an automated approach to label the data. We analyzed languages and selected the three most common punctuation -'PERIOD', 'COMMA' and 'QUESTION MARK' -that occurred across the languages for training our model. This was done to improve the readability of text so that could be easily understood by users, one of the goals of the system. Since we treat our task as a sequence labeling task, we annotated every word in the sequence according to the punctuation following it. We achieved this by tokenizing the input text into a stream of word tokens and punctuation tokens. We converted this into a set of pairs of (token, punctuation) where punctuation is the null punctuation ('O'), if there was no punctuation mark following in the text. To make our data set more diverse and training more robust, we ended sentences (10%) a few tokens before the 'PERIOD' tag and labeled the final token as 'EOS' (end of sentence). Further we converted all our text to lowercase to remove any signal while training the language model. The distribution of the labels can be seen in table 1. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "2.2"
},
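The labeling scheme described above can be sketched as follows. This is a minimal illustration under our own assumptions: the tokenizer, the `annotate` name, and the label set mapping are ours, not the authors' implementation (which also injects 'EOS' truncations the sketch omits).

```python
import re

# Map punctuation characters to the label set used for training.
PUNCT2LABEL = {",": "COMMA", ".": "PERIOD", "?": "QUESTION"}

def annotate(text):
    """Convert raw text into (token, label) pairs, where each word is
    labeled with the punctuation mark that follows it ('O' if none)."""
    # Lowercase to drop the casing signal, as the paper does.
    tokens = re.findall(r"\w+|[,.?]", text.lower())
    pairs = []
    for i, tok in enumerate(tokens):
        if tok in PUNCT2LABEL:
            continue  # punctuation becomes the previous word's label
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        pairs.append((tok, PUNCT2LABEL.get(nxt, "O")))
    return pairs

print(annotate("Hello, how are you? I am fine."))
# [('hello', 'COMMA'), ('how', 'O'), ('are', 'O'), ('you', 'QUESTION'),
#  ('i', 'O'), ('am', 'O'), ('fine', 'PERIOD')]
```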
{
"text": "The model consists of four main sub-parts as observed in Figure 2 - model is used to model token dependencies better, from forward and backward directions. NCRF (Yang and Zhang, 2018 ) relies on learning the high level features from the deep neural network and passes this information to a linear CRF layer for inference, which helps manage label dependencies. This architecture sequential in nature, is trained for APR task. The output sequence representation from the BILSTM is passed through a max pooling layer, the result of which passed through linear feed forward layer for language and text mode classification. We jointly trained our sequential language model, along with the classifiers.",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Yang and Zhang, 2018",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 57,
"end": 67,
"text": "Figure 2 -",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Joint Multilingual Model-Architecture",
"sec_num": "2.3"
},
{
"text": "We created a web extension that can be used to punctuate text within the text editors on web pages. It lets users to select text which could range from a few words to large paragraphs to entire documents to punctuate. The text does not have to be non punctuated as the system removes punctuation as a preprocessing step and punctuates again.The steps to punctuate are shown in Fig 1. 3 Experiments",
"cite_spans": [],
"ref_spans": [
{
"start": 377,
"end": 383,
"text": "Fig 1.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Web Extension",
"sec_num": "2.4"
},
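The preprocessing step mentioned above, stripping any existing punctuation before restoration, can be sketched as follows. This is an illustrative sketch under our own assumptions; the function name and the exact character set removed are ours, not taken from the extension's code.

```python
import re

def strip_punctuation(text):
    """Preprocessing step used before restoration: remove any existing
    punctuation so the model always re-punctuates from a clean stream."""
    no_punct = re.sub(r"[,.?!;:]", " ", text)
    # Collapse the runs of whitespace left behind by the removal.
    return re.sub(r"\s+", " ", no_punct).strip()

print(strip_punctuation("Hello, world. How are you?"))
# -> "Hello world How are you"
```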
{
"text": "We used the pretrained transformer model and specific tokenizers available on HuggingFace 4 . The model architecture consists of the 12 hidden layer encoder, which is used to produce the embeddings. We used an optimized weighting technique (Peters et al., 2018) to sum all the hidden layers rather than use a common practice of using one single layer to generate embeddings. This showed an improvement in performance as seen under ablation studies in table 5. The weighting method is as defined:",
"cite_spans": [
{
"start": 240,
"end": 261,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "O i = \u03b3 L\u22121 j=0 S j H j",
"eq_num": "(1)"
}
],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "\u2022 H j is a trainable task weight for the j th layer. \u03b3 is another task trainable task parameter that aids the optimization process",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "\u2022 S j is the normalized embedding output from the j th hidden layer of the transformer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "\u2022 O j is the output vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "\u2022 L is the number of hidden layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
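The scalar mix of Eq. (1) can be sketched in numpy. This is an illustrative sketch, not the authors' code: we assume the per-layer scalars are softmax-normalized (as in Peters et al., 2018), and all names are ours.

```python
import numpy as np

def weighted_layer_sum(hidden_states, layer_weights, gamma=1.0):
    """Eq. (1): O = gamma * sum_j S_j * H_j -- a trainable weighted sum of
    all transformer hidden layers instead of just the top layer.
    hidden_states: list of L arrays, each of shape (seq_len, dim)."""
    # Softmax-normalize the per-layer scalars so they sum to 1.
    w = np.exp(layer_weights) / np.exp(layer_weights).sum()
    return gamma * sum(w_j * h for w_j, h in zip(w, hidden_states))

L, seq_len, dim = 12, 5, 768
layers = [np.random.randn(seq_len, dim) for _ in range(L)]
# Zero initial weights give equal contributions, i.e. the layer mean.
out = weighted_layer_sum(layers, np.zeros(L))
```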
{
"text": "To train the proposed model, we used a maximum sequence length of 505. We use a subword tokenization technique -sentence piece model (Kudo and Richardson, 2018) -which might result in token length exceeding the maximum sequence length, in such cases we exclude the tokens and start a new paragraph. For sequences less than the specified max sequence length, we pad the sequences to the maximum sequence length and mask the padded sequence to avoid performing attention on it. We used a batch size of 32, grouping similar sequence length prior to padding that enhances the speed while training the model. We do not fine tune the transformer model, but use it to embed the input text. A BILSTM stacked on top of the transformer model, is set to a dimension of 512, the layers are initialized with a uniform distribution in the range of (-.003, .003). A Neural CRF layer is trained with a maximum log-likelihood loss. Viterbi algorithm is used to search for the label sequence with the highest probability during decoding. The entire model was trained with an Adam optimization algorithm with a learning rate close to 1e-4 over 10 epochs. The proposed multitask network was trained via a dynamically weighted averaging (DWA) technique to balance each task. Thereby not allowing one task to dominate over the other or negatively impact the performance of the other. This approach was proposed and utilized for training a multi-task computer vision network (Liu et al., 2019) , we followed a similar approach and implemented this on language processing task to show overall improvement in performance. Similar to Gradnorm which learns to average tasks over time, the DWA method does not use the gradients of network rather uses numeric task loss. The weighting \u03bb j for task j is defined as:",
"cite_spans": [
{
"start": 133,
"end": 160,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 1452,
"end": 1470,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
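The padding-and-masking step described above can be sketched as follows. This is a minimal illustration; `pad_batch` and the toy sequences are ours, not from the paper's code.

```python
def pad_batch(batch, pad_id=0, max_len=505):
    """Pad token-id sequences to the longest length in the batch (capped
    at max_len) and build an attention mask so no attention is performed
    on the padded positions."""
    longest = min(max(len(seq) for seq in batch), max_len)
    padded, mask = [], []
    for seq in batch:
        seq = seq[:longest]
        n_pad = longest - len(seq)
        padded.append(seq + [pad_id] * n_pad)
        mask.append([1] * len(seq) + [0] * n_pad)
    return padded, mask

# Grouping sequences of similar length before batching reduces wasted padding.
seqs = sorted([[5, 6], [1, 2, 3, 4], [7]], key=len)
padded, mask = pad_batch(seqs)
```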
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb j = K exp(w j (n \u2212 1)/T ) i exp(w i (n \u2212 1)/T )",
"eq_num": "(2)"
}
],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w j (n \u2212 1) = L j (n \u2212 1) L j (n \u2212 2)",
"eq_num": "(3)"
}
],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "L j is the loss function of each task j, so w j is the ratio of loss function over the last two epochs. T represents the temperature, which is used to represent the softness of task weighting. A higher value of T represents a more even distribution between the tasks, when T is high enough, the value of \u03bb j equals 1. K is the total number of the tasks that we are training for. The overall loss is the sum of the individual task loss averaged over each iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
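Eqs. (2) and (3) can be sketched directly. This is an illustrative sketch of Dynamic Weight Averaging (Liu et al., 2019) under our own naming; the argument values are toy numbers, not from the paper.

```python
import math

def dwa_weights(loss_prev, loss_prev2, T=2.0):
    """Dynamic Weight Averaging: each task's weight is a temperature-T
    softmax over the ratio of its last two epoch losses, scaled by the
    number of tasks K, so the weights sum to K."""
    K = len(loss_prev)
    w = [l1 / l2 for l1, l2 in zip(loss_prev, loss_prev2)]   # Eq. (3)
    exps = [math.exp(x / T) for x in w]
    return [K * e / sum(exps) for e in exps]                 # Eq. (2)

# The task whose loss fell fastest (smallest ratio) gets the smallest weight,
# so slower-improving tasks are emphasized.
lams = dwa_weights([0.5, 0.9, 1.0], [1.0, 1.0, 1.0])
```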
{
"text": "L ovrl = \u03bb 1 L pr + \u03bb 2 L lc + \u03bb 3 L tm batchsize (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "where L pr -Maximum Likelihood loss for Punctuation restoration, L lc -Cross Entorpy loss for Language Classification and L tm -Cross entropy loss for text mode classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.1"
},
{
"text": "To evaluate the performance of our joint model, we built different multilingual neural models. We split our dataset into two parts -train set (80%),validation set (10%) and test set (10%). The performance for every model was evaluated on test set, after being trained on the train set. We chose F1-score to evaluate the performance of our model. We established a baseline using BILSTM-CRF and pretrained FastText word embeddings (Bojanowski et al., 2017) as features and trained jointly on language and text mode classification tasks. The Fast-Text word embeddings used as features for training are monolingual. To train multilingual models, we developed cross lingual embeddings by aligning monolingual embeddings of different languages along a single dimension using unsupervised techniques (Chen and Cardie, 2018) . The parameters and training setup of the baseline was similar to the proposed model, except we used FastText based word embeddings as input features. Further we make comparisons using MBERT and XLM-Roberta as pretrained models. Table 2 shows the performance of the various models on high resource European languages along with their F1 scores. To ensure a fairer comparison, we implemented the trained model by Alam et al. (2020) that achieved state of art performance on IWSLT datasets to evaluate on our test set. The Joint-Multilingual BERT NCRF as proposed in section 2.3 outperforms the other models across spoken and written text for all punctuations. We observe German language performs the best across spoken and written text. The performance of the German language can be attributed to a couple of reasons. In German multiple words can be condensed into a single word. This reduces ambiguity and thus there are fewer decision points for the machine to provide inference on. German is an inflected language i.e the word order changes according to the function in the sentence. 
Most word orders are defined in terms of finite verb (V), in combination with Subject (S), and object (O). In German, this can vary according to independent or dependent clauses. In cases of independent clauses, the main verb must be the second element in the sentence (SVO) and the past participle the final element. Under dependent clauses, the object must be the second element in the sentence (SOV). This may provide an additional signal to model and that can impact its performance.",
"cite_spans": [
{
"start": 429,
"end": 454,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 793,
"end": 816,
"text": "(Chen and Cardie, 2018)",
"ref_id": "BIBREF3"
},
{
"start": 1230,
"end": 1248,
"text": "Alam et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1047,
"end": 1054,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Models F1-Score DRNN-LWMA-pre (Kim, 2019) 68.6 Self-Attention (Yi and Tao, 2019) 72.9 BERT-Adversarial (Yi et al., 2020) 77.8",
"cite_spans": [
{
"start": 30,
"end": 41,
"text": "(Kim, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 62,
"end": 80,
"text": "(Yi and Tao, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 103,
"end": 120,
"text": "(Yi et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Joint M-BERT (Our Model) 80.3 XLM-R Augmented (Alam et al., 2020) 82.9 To asses the ability of our model to generalize, we evaluated our best performing model on the reference transcripts of the IWSLT dataset. Even though our model was not trained specifically using these datasets, but was able to outperform on some of the prior state of art models as shown in Table 4 . The metrics shown refer to the average F1-score. The performance of our proposed models was carried out on the low resource languages for spoken and written transcripts, which can be observed in Table 3 . We obtained the best result using the Joint-Multilingual BERT NCRF model. For low resource languages the performance of Question is lower than the Comma and Period, due to lower number of questions in true label set. We experimented with different ablations of the best performing model, as seen in table 5 .",
"cite_spans": [
{
"start": 13,
"end": 24,
"text": "(Our Model)",
"ref_id": null
},
{
"start": 46,
"end": 65,
"text": "(Alam et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 363,
"end": 371,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 569,
"end": 576,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "\u2022 BILSTM-NCRF -We do not consider any embeddings and train a simple BILSTM-NCRF model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "3.3"
},
{
"text": "\u2022 MBERT-NCRF -We removed the BILSTM layer and use only NCRF layer on top of transformers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "3.3"
},
{
"text": "\u2022 MBERT-BILSTM -We remove the NCRF layer and model only the token dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "3.3"
},
{
"text": "\u2022 Without weighted layers -We removed the trainable weighing parameters and considered only the top layer of the transformer as input to the BILSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "3.3"
},
{
"text": "\u2022 Without classification layers -We removed the classification layers and trained the model without any auxillary information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "3.3"
},
{
"text": "In this paper we described and implemented a joint modeling approach for restoring punctuation for High and low resource languages across spoken and written text. Joint language model trained with auxiliary language and text mode classification improved the performance of the APR task. We achieved reasonable performance on the benchmark IWSLT datasets without being trained on it. We also presented a web extension that can help multilingual users improve overall readability and coherence of text. Further we present baseline results on indic languages that can be used for future work. We have shown examples of punctuated text that was output from our system in the Appendix section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Input Text Output Text japan then laid siege to the syrian penalty area for most of the game but rarely breached the syrian defence oleg shatskiku made sure of the win in injury time hitting an unstoppable left foot shot from just outside the area Japan then laid siege to the syrian penalty area for most of the game ,but rarely breached the syrian defence .Oleg shatskiku made sure of the win in injury time ,hitting an unstoppable left foot shot from just outside the area . russia's refusal to support emergency supply cuts would effectively and fatally undermine OPEC+'s ability to play the role of oil price stabilizing swing producer says Rapidan Energy's Bob McNally Russia's refusal to support emergency supply cuts would effectively and fatally undermine OPEC +'s ability to play the role of oil price stabilizing . Swing producer , says Rapidan Energy's Bob McNally . Romeo Romeo wherefore art thou Romeo Romeo , Romeo , wherefore art thou Romeo ? sans pr\u00e9juger de l'efficacit\u00e9 de ce couvre-feu avanc\u00e9 ces donn\u00e9es ne sont toutefois pas si facilement lisibles selon les experts suivant l'\u00e9pid\u00e9mie de Covid-19 Tout d'abord on manque encore de recul Sans pr\u00e9juger de l'efficacit\u00e9 de ce couvre-feu avanc\u00e9, ces donn\u00e9es ne sont toutefois pas si facilement lisibles , selon les experts , suivant l'\u00e9pid\u00e9mie de Covid-19 . Tout d'abord , on manque encore de recul . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://huggingface.co/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Palo Alto Research Center for providing compute resources and Sebastian Safari for providing valuable help in developing the web extension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": "5"
},
{
"text": "We present a few examples of text passed to our system in Table 6 and Figure 3 as seen in the next page. It contains two columns -'Input Text' & 'Output Text'. The 'Input Text' columns consists of unpunctuated examples that was passed to our system, while the 'Output Text' column is the punctuated text that was returned. The highlighted colors of punctuation marks indicate whether the punctuation was replaced correctly or not. Green indicates the correct punctuation restored, red indicates the incorrect punctuation mark and yellow indicates the missed punctuation mark.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 6",
"ref_id": null
},
{
"start": 70,
"end": 78,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Example Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Punctuation restoration using transformer models for high-and low-resource languages",
"authors": [
{
"first": "Tanvirul",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Akib",
"middle": [],
"last": "Khan",
"suffix": ""
},
{
"first": "Firoj",
"middle": [],
"last": "Alam",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)",
"volume": "",
"issue": "",
"pages": "132--142",
"other_ids": {
"DOI": [
"10.18653/v1/2020.wnut-1.18"
]
},
"num": null,
"urls": [],
"raw_text": "Tanvirul Alam, Akib Khan, and Firoj Alam. 2020. Punctuation restoration using transformer models for high-and low-resource languages. In Proceed- ings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 132-142, Online. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Recovering punctuation marks for automatic speech recognition",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Batista",
"suffix": ""
},
{
"first": "Diamantino",
"middle": [],
"last": "Caseiro",
"suffix": ""
},
{
"first": "Nuno",
"middle": [],
"last": "Mamede",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2007,
"venue": "Eighth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Batista, Diamantino Caseiro, Nuno Mamede, and Isabel Trancoso. 2007. Recovering punctuation marks for automatic speech recognition. In Eighth Annual Conference of the International Speech Com- munication Association.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised multilingual word embeddings",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "261--270",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1024"
]
},
"num": null,
"urls": [],
"raw_text": "Xilun Chen and Claire Cardie. 2018. Unsupervised multilingual word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 261-270, Brus- sels, Belgium. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks",
"authors": [
{
"first": "Zhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Badrinarayanan",
"suffix": ""
},
{
"first": "Chen-Yu",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Rabinovich",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "794--803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. 2018. GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International Conference on Machine Learning, pages 794-803. PMLR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Punctuation annotation using statistical prosody models",
"authors": [
{
"first": "Heidi",
"middle": [],
"last": "Christensen",
"suffix": ""
},
{
"first": "Yoshihiko",
"middle": [],
"last": "Gotoh",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heidi Christensen, Yoshihiko Gotoh, and Steve Renals. 2001. Punctuation annotation using statistical prosody models.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep recurrent neural networks with layer-wise multi-head attentions for punctuation restoration",
"authors": [
{
"first": "Seokhwan",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7280--7284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seokhwan Kim. 2019. Deep recurrent neural networks with layer-wise multi-head attentions for punctuation restoration. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7280-7284. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sequence-to-sequence models for punctuated transcription combining lexical and acoustic features",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Klejch",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5700--5704",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Klejch, Peter Bell, and Steve Renals. 2017. Sequence-to-sequence models for punctuated transcription combining lexical and acoustic features. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5700-5704. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Development and evaluation of automatic punctuation for French and English speech-to-text",
"authors": [
{
"first": "J\u00e1chym",
"middle": [],
"last": "Kol\u00e1\u0159",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Lamel",
"suffix": ""
}
],
"year": 2012,
"venue": "Thirteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00e1chym Kol\u00e1\u0159 and Lori Lamel. 2012. Development and evaluation of automatic punctuation for French and English speech-to-text. In Thirteenth Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.06226"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "End-to-end multi-task learning with attention",
"authors": [
{
"first": "Shikun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Johns",
"suffix": ""
},
{
"first": "Andrew J",
"middle": [],
"last": "Davison",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1871--1880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikun Liu, Edward Johns, and Andrew J Davison. 2019. End-to-end multi-task learning with attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1871-1880.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A study in machine learning from imbalanced data for sentence boundary detection in speech",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nitesh",
"middle": [
"V"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"P"
],
"last": "Harper",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2006,
"venue": "Computer Speech & Language",
"volume": "20",
"issue": "4",
"pages": "468--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Nitesh V Chawla, Mary P Harper, Elizabeth Shriberg, and Andreas Stolcke. 2006. A study in machine learning from imbalanced data for sentence boundary detection in speech. Computer Speech & Language, 20(4):468-494.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A multilingual parallel corpora collection effort for Indian languages",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Siripragada",
"suffix": ""
},
{
"first": "Jerin",
"middle": [],
"last": "Philip",
"suffix": ""
},
{
"first": "Vinay",
"middle": [
"P"
],
"last": "Namboodiri",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.07691"
]
},
"num": null,
"urls": [],
"raw_text": "Shashank Siripragada, Jerin Philip, Vinay P Namboodiri, and CV Jawahar. 2020. A multilingual parallel corpora collection effort for Indian languages. arXiv preprint arXiv:2007.07691.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "LSTM for punctuation restoration in speech transcripts",
"authors": [
{
"first": "Ottokar",
"middle": [],
"last": "Tilk",
"suffix": ""
},
{
"first": "Tanel",
"middle": [],
"last": "Alum\u00e4e",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ottokar Tilk and Tanel Alum\u00e4e. 2015. LSTM for punctuation restoration in speech transcripts. In Sixteenth Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bidirectional recurrent neural network with attention mechanism for punctuation restoration",
"authors": [
{
"first": "Ottokar",
"middle": [],
"last": "Tilk",
"suffix": ""
},
{
"first": "Tanel",
"middle": [],
"last": "Alum\u00e4e",
"suffix": ""
}
],
"year": 2016,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "3047--3051",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ottokar Tilk and Tanel Alum\u00e4e. 2016. Bidirectional recurrent neural network with attention mechanism for punctuation restoration. In Interspeech, pages 3047-3051.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Europarl datasets with demographic speaker information",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Vanmassenhove",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Vanmassenhove and Christian Hardmeier. 2018. Europarl datasets with demographic speaker information.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "NCRF++: An open-source neural sequence labeling toolkit",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.05626"
]
},
"num": null,
"urls": [],
"raw_text": "Jie Yang and Yue Zhang. 2018. NCRF++: An open-source neural sequence labeling toolkit. arXiv preprint arXiv:1806.05626.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Self-attention based model for punctuation prediction using word and speech embeddings",
"authors": [
{
"first": "Jiangyan",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tao",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7270--7274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiangyan Yi and J. Tao. 2019. Self-attention based model for punctuation prediction using word and speech embeddings. ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7270-7274.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Adversarial transfer learning for punctuation restoration",
"authors": [
{
"first": "Jiangyan",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Jianhua",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Ye",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Zhengkun",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Cunhang",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.00248"
]
},
"num": null,
"urls": [],
"raw_text": "Jiangyan Yi, Jianhua Tao, Ye Bai, Zhengkun Tian, and Cunhang Fan. 2020. Adversarial transfer learning for punctuation restoration. arXiv preprint arXiv:2004.00248.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "(a) Unpunctuated text transcripts within the editor window. (b) Select the text to be punctuated and right-click to punctuate. (c) Output punctuated text. Figure 1: Example of punctuation via web extension. Source: www.github.com",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Joint punctuation model on Indic languages for spoken and written text. Best viewed when zoomed in.",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "Indic language example",
"num": null
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Distribution of the high resource and low resource language datasets. The top 3 languages in the table are considered high resource, while the bottom 2 are low resource languages."
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": ""
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Results on low resource languages"
},
"TABREF6": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Performance of the Joint Model on the IWSLT Ref dataset in comparison with other models. The table indicates the average F1 scores."
},
"TABREF7": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Ablation study on our dataset. HRL - High Resource Languages, LRL - Low Resource Languages"
},
"TABREF8": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Examples of automatic punctuation restoration of text in our system for European languages."
}
}
}
}