ACL-OCL / Base_JSON /prefixT /json /teachingnlp /2021.teachingnlp-1.2.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:51:39.448237Z"
},
"title": "Teaching a Massive Open Online Course on Natural Language Processing",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Artemova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HSE University",
"location": {
"settlement": "Moscow",
"country": "Russia"
}
},
"email": "elartemova@hse.ru"
},
{
"first": "Murat",
"middle": [],
"last": "Apishev",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HSE University",
"location": {
"settlement": "Moscow",
"country": "Russia"
}
},
"email": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Sarkisyan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HSE University",
"location": {
"settlement": "Moscow",
"country": "Russia"
}
},
"email": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Aksenov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HSE University",
"location": {
"settlement": "Moscow",
"country": "Russia"
}
},
"email": ""
},
{
"first": "Denis",
"middle": [],
"last": "Kirjanov",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Oleg",
"middle": [],
"last": "Serikov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HSE University",
"location": {
"settlement": "Moscow",
"country": "Russia"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a new Massive Open Online Course on Natural Language Processing, targeted at non-English speaking students. The course lasts 12 weeks; every week consists of lectures, practical sessions, and quiz assignments. Three weeks out of 12 are followed by Kaggle-style coding assignments. Our course intends to serve multiple purposes: (i) familiarize students with the core concepts and methods in NLP, such as language modeling or word or sentence representations, (ii) show that recent advances, including pretrained Transformer-based models, are built upon these concepts; (iii) introduce architectures for most demanded real-life applications, (iv) develop practical skills to process texts in multiple languages. The course was prepared and recorded during 2020, launched by the end of the year, and in early 2021 has received positive feedback.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a new Massive Open Online Course on Natural Language Processing, targeted at non-English speaking students. The course lasts 12 weeks; every week consists of lectures, practical sessions, and quiz assignments. Three weeks out of 12 are followed by Kaggle-style coding assignments. Our course intends to serve multiple purposes: (i) familiarize students with the core concepts and methods in NLP, such as language modeling or word or sentence representations, (ii) show that recent advances, including pretrained Transformer-based models, are built upon these concepts; (iii) introduce architectures for most demanded real-life applications, (iv) develop practical skills to process texts in multiple languages. The course was prepared and recorded during 2020, launched by the end of the year, and in early 2021 has received positive feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The vast majority of recently developed online courses on Artificial Intelligence (AI), Natural Language Processing (NLP) included, are oriented towards English-speaking audiences. In non-English speaking countries, such courses' audience is unfortunately quite limited, mainly due to the language barrier. Students, who are not fluent in English, find it difficult to cope with language issues and study simultaneously. Thus the students face serious learning difficulties and lack of motivation to complete the online course. While creating new online courses in languages other than English seems redundant and unprofitable, there are multiple reasons to support it. First, students may find it easier to comprehend new concepts and problems in their native language. Secondly, it may be easier to build a strong online learning community if students can express themselves fluently. Finally, and more specifically to NLP, an NLP course aimed at building practical skills should include languagespecific tools and applications. Knowing how to use tools for English is essential to understand the core principles of the NLP pipeline. However, it is of little use if the students work on real-life applications in the non-English industry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present an overview of an online course aimed at Russian-speaking students. This course was developed and run for the first time in 2020, achieving positive feedback. Our course is a part of the HSE university's online specialization on AI and is built upon previous courses in the specialization, which introduced core concepts in calculus, probability theory, and programming in Python. Outside of the specialization, the course can be used for additional training of students majoring in computer science or software engineering and others who fulfill prerequisites.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present the syllabus of a recent wide-scope massive open online course on NLP, aimed at a broad audience;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We describe methodological choices made for teaching NLP to non-English speaking students;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In this course, we combine recent deep learning trends with other best practices, such as topic modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is organized as follows: Section 2 introduces methodological choices made for the course design. Section 3 presents the course structure and topics in more details. Section 4 lists home works. Section 5 describes the hosting platform and its functionality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The course presented in this paper is split into two main parts, six weeks each, which cover (i) core NLP concepts and approaches and (ii) main applications and more sophisticated problem formulations. The first six weeks' main goal is to present different word and sentence representation methods, starting from bag-of-words and moving to word and sentence embeddings, reaching contextualized word embeddings and pre-trained language models. Simultaneously we introduce basic problem definitions: text classification, sequence labeling, and sequence-to-sequence transformation. The first part of the course roughly follows Yoav Goldberg's textbook (Goldberg, 2017) , albeit we extend it with pre-training approaches and recent Transformerbased architectures.",
"cite_spans": [
{
"start": 649,
"end": 665,
"text": "(Goldberg, 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Course overview",
"sec_num": "2"
},
{
"text": "The second part of the course introduces BERTbased models and such NLP applications as question answering, text summarization, and information extraction. This part adopts some of the explanations from the recent draft of \"Speech and Language Processing\" (Jurafsky and Martin, 2000) . An entire week is devoted to topic modeling, and BigARTM , a tool for topic modeling developed in MIPT, one of the top Russian universities and widely used in real-life applications. Overall practical sessions are aimed at developing text processing skills and practical coding skills.",
"cite_spans": [
{
"start": 255,
"end": 282,
"text": "(Jurafsky and Martin, 2000)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Course overview",
"sec_num": "2"
},
{
"text": "Every week comprises both a lecture and a practical session. Lectures have a \"talking head\" format, so slides and pre-recorded demos are presented, while practical sessions are real-time coding sessions. The instructor writes code snippets in Jupyter notebooks and explains them at the same time. Overall every week, there are 3-5 lecture videos and 2-3 practical session videos. Weeks 3, 5, 9 are extended with coding assignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Course overview",
"sec_num": "2"
},
{
"text": "Weeks 7 and 9 are followed by interviews. In these interviews, one of the instructors' talks to the leading specialist in the area. Tatyana Shavrina, one of the guests interviewed, leads an R&D team in Sber, one of the leading IT companies. The second guest, Konstantin Vorontsov, is a professor from one of the top universities. The guests are asked about their current projects and interests, career paths, what keeps them inspired and motivated, and what kind of advice they can give.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Course overview",
"sec_num": "2"
},
{
"text": "The final mark is calculated according to the formula: # of accepted coding assignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Course overview",
"sec_num": "2"
},
{
"text": "Coding assignments are evaluated on the binary scale (accepted or rejected), and quiz assignments are evaluated on the 10 point scale. To earn a certificate, the student has to earn at least 4 points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "+0.7mean(quiz assignment mark)",
"sec_num": null
},
{
"text": "In practical sessions, we made a special effort to introduce tools developed for processing texts in Russian. The vast majority of examples, utilized in lectures, problems, attempted during practical sessions, and coding assignments, utilized datasets in Russian. The same choice was made by Pavel Braslavski, who was the first to create an NLP course in Russian in 2017 (Braslavski, 2017) . We utilized datasets in English only if Russian lacks the non-commercial and freely available datasets for the same task of high quality.",
"cite_spans": [
{
"start": 371,
"end": 389,
"text": "(Braslavski, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "+0.7mean(quiz assignment mark)",
"sec_num": null
},
{
"text": "Some topics are intentionally not covered in the course. We focus on written texts and do not approach the tasks of text-to-speech and speech-totext transformations. Low-resource languages spoken in Russia are out of the scope, too. Besides, we almost left out potentially controversial topics, such as AI ethics and green AI problems. Although we briefly touch upon potential biases in pre-trained language models, we have to leave out a large body of research in the area, mainly oriented towards the English language and the US or European social problems. Besides, little has been explored in how neural models are affected by those biases and problems in Russia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "+0.7mean(quiz assignment mark)",
"sec_num": null
},
{
"text": "The team of instructors includes specialists from different backgrounds in computer science and theoretical linguists. Three instructors worked on lectures, two instructors taught practical sessions, and three teaching assistants prepared home assignments and conducted question-answering sessions in the course forum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "+0.7mean(quiz assignment mark)",
"sec_num": null
},
{
"text": "Week 1. Introduction. The first introductory lecture consists of two parts. The first part overviews the core tasks and problems in NLP, presents the main industrial applications, such as search engines, Business Intelligence tools, and conversational engines, and draws a comparison between broad-defined linguistics and NLP. To conclude this part, we touch upon recent trends, which can be grasped easily without the need to go deep into details, such as multi-modal applications , cross-lingual methods (Feng et al., 2020; Conneau et al., 2020) and computational humor (Braslavski et al., 2018; West and Horvitz, 2019) . Throughout this part lecture, we try to show NLP systems' duality: those aimed at understanding language (or speech) and those aimed at generating language (or speech). The most complex systems used for machine translation, for example, aim at both. The second part of the lecture introduces such basic concepts as bag-of-words, count-based document vector representation, tf-idf weighting. Finally, we explore bigram association measures, PMI and t-score. We point out that these techniques can be used to conduct an exploratory analysis of a given collection of texts and prepare input for machine learning methods.",
"cite_spans": [
{
"start": 506,
"end": 525,
"text": "(Feng et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 526,
"end": 547,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 572,
"end": 597,
"text": "(Braslavski et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 598,
"end": 621,
"text": "West and Horvitz, 2019)",
"ref_id": "BIBREF88"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
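The tf-idf weighting and bigram PMI described in this lecture can be sketched in a few lines of Python. This is a toy illustration on an invented mini-corpus, not the course's actual materials; the unsmoothed PMI formula below assumes raw counts:

```python
import math
from collections import Counter

# Toy corpus -- hypothetical sentences, not the course's actual data
docs = [
    "natural language processing is fun",
    "language models predict the next word",
    "word vectors represent word meaning",
]
tokenized = [d.split() for d in docs]

# tf-idf: term frequency scaled by inverse document frequency
N = len(tokenized)
df = Counter(w for doc in tokenized for w in set(doc))

def tfidf(word, doc):
    tf = doc.count(word) / len(doc)
    idf = math.log(N / df[word])
    return tf * idf

# PMI for a bigram (x, y): log p(x, y) / (p(x) * p(y))
unigrams = Counter(w for doc in tokenized for w in doc)
bigrams = Counter(b for doc in tokenized for b in zip(doc, doc[1:]))
n_uni = sum(unigrams.values())
n_bi = sum(bigrams.values())

def pmi(x, y):
    p_xy = bigrams[(x, y)] / n_bi
    p_x, p_y = unigrams[x] / n_uni, unigrams[y] / n_uni
    return math.log(p_xy / (p_x * p_y))

# "fun" occurs in one document only, so it is more distinctive for it
assert tfidf("fun", tokenized[0]) > tfidf("language", tokenized[0])
# "natural language" co-occurs more often than chance predicts
assert pmi("natural", "language") > 0
```

Both quantities come straight from counts, which is why the lecture presents them as exploratory tools before any machine learning is introduced.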
{
"text": "Practical session gives an overview of text prepossessing techniques and simple count-based text representation models. We emphasize how prepossessing pipelines can differ for languages such as English and Russian (for example, what is preferable, stemming or lemmatization) and give examples of Python frameworks that are designed to work with the Russian language (pymystem3 (Segalovich) , pymorphy2 (Korobov, 2015) ). We also included an intro to regular expressions because we find this knowledge instrumental both within and outside NLP tasks.",
"cite_spans": [
{
"start": 377,
"end": 389,
"text": "(Segalovich)",
"ref_id": null
},
{
"start": 402,
"end": 417,
"text": "(Korobov, 2015)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
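A minimal preprocessing pipeline of the kind shown in the session can be sketched with the standard library alone. This is a hypothetical English example; the session itself relies on pymystem3/pymorphy2 for Russian lemmatization, which is omitted here:

```python
import re

# A tiny stopword list for illustration; real pipelines use fuller lists
STOPWORDS = {"the", "a", "an", "is", "of"}

def preprocess(text):
    # lowercase, tokenize with a regular expression, drop stopwords
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

assert preprocess("The cat sat on the MAT!") == ["cat", "sat", "on", "mat"]
```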
{
"text": "During the first weeks, most participants are highly motivated, we can afford to give them more practical material, but we still need to end up with some close-to-life clear examples. We use a simple sentiment analysis task on Twitter data to demonstrate that even the first week's knowledge (together with understanding basic machine learning) allows participants to solve real-world problems. At the same time, we illustrate how particular steps of text prepossessing can have a crucial impact on the model's outcome.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "Week 2. Word embeddings. The lecture introduces the concepts of distributional semantics and word vector representations. We familiarize the students with early models, which utilized singular value decomposition (SVD) and move towards more advanced word embedding models, such as word2vec (Mikolov et al., 2013) and fasttext (Bojanowski et al., 2017) . We briefly touch upon the hierarchical softmax and the hashing trick and draw attention to negative sampling techniques. We show ways to compute word distance, including Euclidean and cosine similarity measures.",
"cite_spans": [
{
"start": 290,
"end": 312,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF53"
},
{
"start": 326,
"end": 351,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "We discuss the difference between word2vec and GloVe (Pennington et al., 2014) models and emphasize main issues, such as dealing with outof-vocabulary (OOV) words and disregarding rich morphology. fasttext is then claimed to address these issues. To conclude, we present approaches for intrinsic and extrinsic evaluation of word embeddings. Fig. 1 explains the difference between bag-of-words and bag-of-vectors. In practical session we explore only advanced word embedding models (word2vec, fasttext and GloVe) and we cover three most common scenarios for working with such models: using pre-trained models, training models from scratch and tuning pre-trained models. Giving a few examples, we show that fasttext as a character-level model serves as a better word representation model for Russian and copes better with Russian rich morphology. We also demonstrate some approaches of intrinsic evaluation of models' quality, such as solving analogy tasks (like well known \"king -man + woman = queen\") and evaluating semantic similarity and some useful techniques for visualization of word embeddings space.",
"cite_spans": [
{
"start": 53,
"end": 78,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [
{
"start": 341,
"end": 347,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
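The cosine-similarity and analogy demos from this session can be illustrated with hand-made vectors. The 3-d values below are invented so the arithmetic works out; real word2vec/fasttext/GloVe vectors are learned from corpora:

```python
import math

# Hand-crafted toy "embeddings" -- not real learned vectors
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
    "apple": [0.5, 0.5, 0.5],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# the classic analogy: king - man + woman should be closest to queen
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
candidates = [w for w in emb if w not in {"king", "man", "woman"}]
best = max(candidates, key=lambda w: cosine(target, emb[w]))
assert best == "queen"
```

Real embedding libraries (e.g. gensim's `most_similar`) perform exactly this nearest-neighbor search, only over vocabularies of hundreds of thousands of learned vectors.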
{
"text": "This topic can be fascinating for students when supplemented with illustrative examples. Exploring visualization of words clusters on plots or solving analogies is a memorable part of the \"classic\" NLP part of most students' course.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "Week 3. Text classification. The lecture considers core concepts for supervised learning. We begin by providing examples for text classification applications, such as sentiment classification and spam filtering. Multiple problem statements, such as binary, multi-class, and multi-label classification, are stated. To introduce ML algorithms, we start with logistic regression and move towards neu-ral methods for text classification. To this end, we introduce fasttext as an easy, out-of-the-box solution. We introduce the concept of sentence (paragraph) embedding by presenting doc2vec model (Le and Mikolov, 2014) and show how such embeddings can be used as input to the classification model. Next, we move towards more sophisticated techniques, including convolutional models for sentence classification (Kim, 2014) . We do not discuss backpropagation algorithms but refer to the DL course of the specialization to refresh understanding of neural network training. We show ways to collect annotated data on crowdsourcing platforms and speed up the process using active learning (Esuli and Sebastiani, 2009) . Finally, we conclude with text augmentation techniques, including SMOTE (Chawla et al., 2002) and EDA (Wei and Zou, 2019) .",
"cite_spans": [
{
"start": 593,
"end": 615,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF40"
},
{
"start": 807,
"end": 818,
"text": "(Kim, 2014)",
"ref_id": "BIBREF35"
},
{
"start": 1081,
"end": 1109,
"text": "(Esuli and Sebastiani, 2009)",
"ref_id": "BIBREF23"
},
{
"start": 1178,
"end": 1205,
"text": "SMOTE (Chawla et al., 2002)",
"ref_id": null
},
{
"start": 1214,
"end": 1233,
"text": "(Wei and Zou, 2019)",
"ref_id": "BIBREF87"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "In the practical session we continue working with the text classification on the IMDb movies reviews dataset. We demonstrate several approaches to create classification models with different word embeddings. We compare two different ways to get sentence embedding, based on any word embedding model: by averaging word vectors and using tf-idf weights for a linear combination of word vectors. We showcase fasttext tool for text classification using its built-in classification algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
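The two sentence-embedding schemes compared in the session, plain averaging versus a tf-idf-weighted combination, can be sketched with toy vectors. The 2-d numbers and two-document corpus below are made up for illustration, not taken from the session's real embeddings:

```python
import math
from collections import Counter

# Toy 2-d word vectors and a two-document corpus (invented values)
vec = {"good": [1.0, 0.0], "movie": [0.0, 1.0], "very": [0.2, 0.8]}
docs = [["very", "good", "movie"], ["good", "movie"]]

N = len(docs)
df = Counter(w for d in docs for w in set(d))

def avg_embedding(doc):
    # plain average of the word vectors
    return [sum(col) / len(doc) for col in zip(*(vec[w] for w in doc))]

def tfidf_embedding(doc):
    # linear combination of word vectors weighted by tf-idf
    weights = [(doc.count(w) / len(doc)) * math.log(N / df[w]) for w in doc]
    total = sum(weights) or 1.0
    return [sum(wt * x for wt, x in zip(weights, col)) / total
            for col in zip(*(vec[w] for w in doc))]

# "good" and "movie" occur in every document, so tf-idf weighting
# lets the rarer word "very" dominate the sentence vector
assert avg_embedding(docs[0]) != tfidf_embedding(docs[0])
```

Either vector can then be fed to any off-the-shelf classifier, which is exactly how the session chains word embeddings into sentence-level models.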
{
"text": "Additionally, we consider use GloVe word embedding model to build a simple Convolutional Neural Network for text classification. In this week and all of the following, we use PyTorch 1 as a framework for deep learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "Week 4. Language modeling. The lecture focuses on the concept of language modelling. We start with early count-based models (Song and Croft, 1999) and create a link to Markov chains. We refer to the problem of OOV words and show the add-one smoothing method, avoiding more sophisticated techniques, such as Knesser-Ney smoothing (Kneser and Ney, 1995) , for the sake of time. Next, we introduce neural language models. To this end, we first approach Bengio's language model (Bengio et al., 2003) , which utilizes fully connected layers. Second, we present recurrent neural networks and show how they can be used for language modeling. Again, we remind the students of backpropagation through time and gradient vanishing or explosion, introduced earlier in the DL course. We claim, that LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Chung et al., 2014) cope with these problems. As a brief revision of the LSTM architecture is necessary, we utilize Christopher Olah's tutorial (Olah, 2015) . We pay extra attention to the inner working of the LSTM, following Andrej Karpathy's tutorial (Karpathy, 2015) . To add some research flavor to the lecture, we talk about text generation (Sutskever et al., 2011) , its application, and different decoding strategies (Holtzman et al., 2019) , including beam search and nucleus sampling. Lastly, we introduce the sequence labeling task (Ma and Hovy, 2016) for part-of-speech (POS) tagging and named entity recognition (NER) and show how RNN's can be utilized as sequence models for the tasks.",
"cite_spans": [
{
"start": 124,
"end": 146,
"text": "(Song and Croft, 1999)",
"ref_id": "BIBREF75"
},
{
"start": 329,
"end": 351,
"text": "(Kneser and Ney, 1995)",
"ref_id": "BIBREF37"
},
{
"start": 474,
"end": 495,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF6"
},
{
"start": 834,
"end": 854,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 979,
"end": 991,
"text": "(Olah, 2015)",
"ref_id": "BIBREF56"
},
{
"start": 1088,
"end": 1104,
"text": "(Karpathy, 2015)",
"ref_id": "BIBREF34"
},
{
"start": 1181,
"end": 1205,
"text": "(Sutskever et al., 2011)",
"ref_id": "BIBREF78"
},
{
"start": 1259,
"end": 1282,
"text": "(Holtzman et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 1377,
"end": 1396,
"text": "(Ma and Hovy, 2016)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
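The count-based models and perplexity from this lecture can be condensed into a short add-one-smoothed bigram example. The three-sentence corpus is invented for illustration; the session's dinosaur-name data is not used here:

```python
import math
from collections import Counter

# Toy corpus with sentence boundary markers
corpus = ["<s> a b </s>", "<s> a c </s>", "<s> a b </s>"]
sents = [s.split() for s in corpus]
unigrams = Counter(w for s in sents for w in s)
bigrams = Counter(b for s in sents for b in zip(s, s[1:]))
V = len(unigrams)  # vocabulary size for add-one smoothing

def prob(w, prev):
    # add-one (Laplace) smoothing: unseen bigrams get nonzero probability
    return (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

def perplexity(sent):
    words = sent.split()
    logp = sum(math.log(prob(w, p)) for p, w in zip(words, words[1:]))
    return math.exp(-logp / (len(words) - 1))

# the model prefers continuations it has seen more often ...
assert prob("b", "a") > prob("c", "a")
# ... and assigns lower perplexity to the more typical sentence
assert perplexity("<s> a b </s>") < perplexity("<s> a c </s>")
```

Neural language models replace the count table with a learned network, but the perplexity computation over their predicted probabilities stays the same.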
{
"text": "The practical session in this week is divided into two parts. The first part is dedicated to language models for text generation. We experiment with count-based probabilistic models and RNN's to generate dinosaur names and get familiar with perplexity calculation (the task and the data were introduced in Sequence Models course from DeepLearning.AI 2 ). To bring things together, students are asked to make minor changes in the code and run it to answer some questions in the week's quiz assignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "The second part of the session demonstrates the application of RNN's to named entity recognition. We first introduce the BIO and BIOES annotation schemes and show frameworks with pre-trained NER models for English (Spacy 3 ) and Russian (Natasha 4 ) languages. Further, we move on to CNN-biLSTM-CRF architecture described in the lecture and test it on CoNLL 2003 shared task data (Sang and De Meulder, 2003) .",
"cite_spans": [
{
"start": 390,
"end": 407,
"text": "De Meulder, 2003)",
"ref_id": "BIBREF69"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "Week 5. Machine Translation. This lecture starts with referring to the common experience of using machine translation tools and a historical overview of the area. Next, the idea of encoderdecoder (seq2seq) architecture opens the technical part of the lecture. We start with RNN-based seq2seq models (Sutskever et al., 2014) and introduce the concept of attention (Bahdanau et al., 2015) . We show how attention maps can be used for \"black box\" interpretation. Next, we reveal the core architecture of modern NLP, namely, the Transformer model (Vaswani et al., 2017) and ask the students explicitly to take this part seriously. Following Jay Allamar's tutorial (Alammar, 2015) , we decompose the transformer architecture and go through it step by step. In the last part of the lecture, we return to machine translation and introduce quality measures, such as WER and BLEU (Papineni et al., 2002) , touch upon human evaluation and the fact that BLEU correlates well with human judgments. Finally, we discuss briefly more advanced techniques, such as non-autoregressive models (Gu et al., 2017) and back translation (Hoang et al., 2018). Although we do not expect the student to comprehend these techniques immediately, we want to broaden their horizons so that they can think out of the box of supervised learning and autoregressive decoding.",
"cite_spans": [
{
"start": 299,
"end": 323,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF79"
},
{
"start": 363,
"end": 386,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 543,
"end": 565,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF81"
},
{
"start": 660,
"end": 675,
"text": "(Alammar, 2015)",
"ref_id": "BIBREF1"
},
{
"start": 871,
"end": 894,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF57"
},
{
"start": 1074,
"end": 1091,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
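The BLEU measure introduced in this lecture can be sketched at the sentence level: clipped n-gram precisions combined by a geometric mean and scaled by a brevity penalty. This is a simplified illustration (unigrams and bigrams only, crude floor for zero precisions); real evaluations use corpus-level BLEU with proper smoothing, e.g. via sacrebleu:

```python
import math
from collections import Counter

def ngrams(words, n):
    return Counter(zip(*(words[i:] for i in range(n))))

def bleu(candidate, reference, max_n=2):
    # geometric mean of clipped n-gram precisions times a brevity penalty
    cand, ref = candidate.split(), reference.split()
    log_precs = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        clipped = sum(min(cnt, r[g]) for g, cnt in c.items())
        total = max(len(cand) - n + 1, 1)
        log_precs.append(math.log(max(clipped, 1e-9) / total))
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))  # brevity penalty
    return bp * math.exp(sum(log_precs) / max_n)

assert abs(bleu("the cat sat", "the cat sat") - 1.0) < 1e-9
assert bleu("the cat ran", "the cat sat") < 1.0
```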
{
"text": "In the first part of practical session we solve the following task: given a date in an arbitrary format transform it to the standard format \"dd-mmyyyy\" (for example, \"18 Feb 2018\", \"18.02.2018\", \"18/02/2018\" \u2192 \"18-02-2018\"). We adopt the code from PyTorch machine translation tutorial 5 to our task: we use the same RNN encoder, RNN decoder, and its modification -RNN encoder with attention mechanism -and compare the quality of two decoders. We also demonstrate how to visualize attention weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "The second part is dedicated to the Transformer model and is based on the Harvard NLP tutorial (Klein et al., 2017) that decomposes the article \"Attention is All You Need\" (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 172,
"end": 194,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF81"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "Step by step, like in the lecture, we go through the Transformer code, trying to draw parallels with a simple encoder-decoder model we have seen in the first part. We describe and comment on every layer and pay special attention to implementing the attention layer and masking and the shapes of embeddings and layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
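The scaled dot-product attention at the heart of the Transformer can be written out with plain lists as a didactic sketch (single head, no batching or masking; the query/key/value numbers are invented for the example):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]                      # one query vector
K = [[1.0, 0.0], [0.0, 1.0]]          # two key vectors
V = [[10.0, 0.0], [0.0, 10.0]]        # two value vectors
out = attention(Q, K, V)
# the query matches the first key, so the output leans towards V[0]
assert out[0][0] > out[0][1]
```

The tutorial's PyTorch version computes the same thing with batched matrix multiplications and adds masking for the decoder.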
{
"text": "Week 6. Sesame Street I. The sixth lecture and the next one are the most intense in the course. The paradigm of pre-trained language models is introduced in these two weeks. The first model to discuss in detail is ELMo (Peters et al., 2018) . Next, we move to BERT (Devlin et al., 2019) and introduce the masked language modeling and next sentence prediction objectives. While presenting BERT, we briefly revise the inner working of Trans- former blocks. We showcase three scenarios to fine-tune BERT: (i) text classification by using different pooling strategies ([CLS] , max or mean), (ii) sentence pair classification for paraphrase identification and for natural language inference, (iii) named entity recognition. SQuAD-style questionanswering, at which BERT is aimed too, as avoided here, as we will have another week for QA systems. Next, we move towards GPT-2 (Radford et al.) and elaborate on how high-quality text generation can be potentially harmful. To make the difference between BERT's and GPT-2's objective more clear, we draw parallels with the Transformer architecture for machine translation and show that BERT is an encoder-style model, while GPT-2 is a decoder-style model. We show Allen NLP (Gardner et al., 2018) demos of how GPT-2 generates texts and how attention scores implicitly resolve coreference.",
"cite_spans": [
{
"start": 214,
"end": 240,
"text": "ELMo (Peters et al., 2018)",
"ref_id": null
},
{
"start": 265,
"end": 286,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 564,
"end": 570,
"text": "([CLS]",
"ref_id": null
},
{
"start": 868,
"end": 884,
"text": "(Radford et al.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
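The pooling strategies listed for the fine-tuning scenarios can be sketched over a made-up matrix of final hidden states (one toy 2-d vector per token; a real model would produce 768-dimensional states from BERT's last layer):

```python
# Final hidden states for tokens [CLS], "great", "movie", [SEP]
# (toy values invented for the example)
hidden = [
    [0.0, 1.0],    # [CLS]
    [0.75, 0.25],  # "great"
    [0.5, 0.5],    # "movie"
    [0.25, 0.25],  # [SEP]
]

cls_pool = hidden[0]                                    # take the [CLS] vector
mean_pool = [sum(col) / len(hidden) for col in zip(*hidden)]
max_pool = [max(col) for col in zip(*hidden)]

assert cls_pool == [0.0, 1.0]
assert mean_pool == [0.375, 0.5]
assert max_pool == [0.75, 1.0]
```

Whichever pooled vector is chosen is then fed to a small classification head, which is the part actually trained during fine-tuning.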
{
"text": "In this week, we massively rely on Jay Allamar's (Alammar, 2015) tutorial and adopt some of these brilliant illustrations. One of the main problems, though, rising in this week is the lack of Russian terminology, as the Russian-speaking community has not agreed on the proper ways to translate such terms as \"contextualized encoder\" or \"fine-tuning\". To spice up this week, we were dressed in Sesame Street kigurumis (see Fig. 2 ).",
"cite_spans": [
{
"start": 49,
"end": 64,
"text": "(Alammar, 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 422,
"end": 428,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "The main idea of the practical session is to demonstrate ELMo and BERT models, considered earlier in the lecture. The session is divided into two parts, and in both parts, we consider text classification, using ELMo and BERT models, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "In the first part, we demonstrate how to use ELMo word embeddings for text classification on the IMBdb dataset used in previous sessions. We use pre-trained ELMo embeddings by Al-lenNLP library and implement a simple recurrent neural network with a GRU layer on top for text classification. In the end, we compare the performance of this model with the scores we got in previous sessions on the same dataset and demonstrate that using ELMo embeddings can improve model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "The second part of the session is focused on models based on Transformer architecture. We use huggingface-transformers library (Wolf et al., 2020 ) and a pre-trained BERT model to build a classification algorithm for Google play applications reviews written in English. We implement an entire pipeline of data preparation, using a pretrained model and demonstrating how to fine-tune the downstream task model. Besides, we implement a wrapper for the BERT classification model to get the prediction on new text.",
"cite_spans": [
{
"start": 127,
"end": 145,
"text": "(Wolf et al., 2020",
"ref_id": "BIBREF89"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "Week 7. Sesame Street II. To continue diving into the pre-trained language model paradigm, the lecture first questions, how to evaluate the model. We discuss some methods to interpret the BERT's inner workings, sometimes referred to as BERTology (Rogers et al., 2021) . We introduce a few common ideas: BERT's lower layers account for surface features, lower to middle layers are responsible for morphology, while the upper-middle layers have better syntax representation (Conneau and Kiela, 2018) . We talk about ethical issues (May et al., 2019) , caused by pre-training on raw web texts. We move towards the extrinsic evaluation of pre-trained models and familiarize the students with GLUE-style evaluations (Wang et al., 2019b,a) . The next part of the lecture covers different improvements of BERT-like models. We show how different design choices may affect the model's performance in different tasks and present RoBERTa (Liu et al., 2019) , and ALBERT (Lan et al., 2019) as members of a BERT-based family. We touch upon the computational inefficiency of pre-trained models and introduce lighter models, including DistillBERT (Sanh et al., 2019) . To be solid, we touch upon other techniques to compress pre-trained models, including pruning (Sajjad et al., 2020) and quantization (Zafrir et al., 2019) , but do not expect the students to be able to implement these techniques immediately. We present the concept of language transferring and introduce multilingual Transformers, such as XLM-R (Conneau et al., 2020) . Language transfer becomes more and more crucial for non-English applications, and thus we draw more attention to it. Finally, we cover some of the basic multi-modal models aimed at image captioning and visual question answering, such as the unified Vision-Language Pre-training (VLP) model .",
"cite_spans": [
{
"start": 246,
"end": 267,
"text": "(Rogers et al., 2021)",
"ref_id": "BIBREF67"
},
{
"start": 472,
"end": 497,
"text": "(Conneau and Kiela, 2018)",
"ref_id": "BIBREF18"
},
{
"start": 529,
"end": 547,
"text": "(May et al., 2019)",
"ref_id": "BIBREF50"
},
{
"start": 711,
"end": 733,
"text": "(Wang et al., 2019b,a)",
"ref_id": null
},
{
"start": 927,
"end": 945,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF46"
},
{
"start": 959,
"end": 977,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 1132,
"end": 1151,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF70"
},
{
"start": 1248,
"end": 1269,
"text": "(Sajjad et al., 2020)",
"ref_id": "BIBREF68"
},
{
"start": 1287,
"end": 1308,
"text": "(Zafrir et al., 2019)",
"ref_id": "BIBREF91"
},
{
"start": 1499,
"end": 1521,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "In the practical session we continue discussing BERT-based models, shown in the lectures. The session's main idea is to consider different tasks that may be solved by BERT-based models and to demonstrate different tools and approaches for solving them. So the practical session is divided into two parts. The first part is devoted to named entity recognition. We consider a pre-trained crosslingual BERT-based NER model from the Deep-Pavlov library (Burtsev et al., 2018) and demonstrate how it can be used to extract named entities from Russian and English text. The second part is focused on multilingual zero-shot classification. We consider the pre-trained XLM-based model by HuggingFace, discuss the approach's key ideas, and demonstrate how the model works, classifying short texts in English, Russian, Spanish, and French.",
"cite_spans": [
{
"start": 449,
"end": 471,
"text": "(Burtsev et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "Week 8. Syntax parsing. The lecture is devoted to computational approaches to syntactic parsing and is structured as follows. After a brief introduction about the matter and its possible applications (both as an auxiliary task and an independent one), we consider syntactic frameworks developed in linguistics: dependency grammar (Tesni\u00e8re, 2015) and constituency grammar (Bloomfield, 1936). Then we discuss only algorithms that deal with dependency parsing (mainly because there are no constituency parsers for Russian), so we turn to graph-based (McDonald et al., 2005) and transition-based (Aho and Ullman, 1972) dependency parsers and consider their logics, structure, sorts, advantages, and drawbacks. Afterward, we familiarize students with the practical side of parsing, so we introduce syntactically annotated corpora, Universal Dependencies project (Nivre et al., 2016b) and some parsers which perform for Russian well (UDPipe (Straka and Strakov\u00e1, 2017) , DeepPavlov Project (Burtsev et al., 2018) ). The last part of our lecture represents a brief overview of the problems which were not covered in previous parts: BERTology, some issues of web-texts parsing, latest advances in computational syntax (like enhanced dependencies (Schuster and Manning, 2016) ).",
"cite_spans": [
{
"start": 548,
"end": 571,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF51"
},
{
"start": 858,
"end": 879,
"text": "(Nivre et al., 2016b)",
"ref_id": "BIBREF55"
},
{
"start": 936,
"end": 963,
"text": "(Straka and Strakov\u00e1, 2017)",
"ref_id": "BIBREF77"
},
{
"start": 985,
"end": 1007,
"text": "(Burtsev et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 1239,
"end": 1267,
"text": "(Schuster and Manning, 2016)",
"ref_id": "BIBREF71"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "The practical session starts with a quick overview of CoNLL-U annotation format (Nivre et al., 2016a) : we show how to load, parse and visualize such data on the example from the SynTagRus corpus 6 . Next, we learn to parse data with pretrained UDPipe models (Straka et al., 2016) and Russian-language framework Natasha. To demonstrate some practical usage of syntax parsing, we first understand how to extract subject-verb-object (SVO) triples and then design a simple templatebased text summarization model.",
"cite_spans": [
{
"start": 80,
"end": 101,
"text": "(Nivre et al., 2016a)",
"ref_id": "BIBREF54"
},
{
"start": 259,
"end": 280,
"text": "(Straka et al., 2016)",
"ref_id": "BIBREF76"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
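The SVO-extraction step described above can be sketched as follows. The token dictionaries are a hypothetical stand-in for real parser output (field names follow the CoNLL-U convention), not the actual Natasha or UDPipe API:

```python
# Minimal sketch of SVO-triple extraction from a dependency parse.
# The token dicts stand in for CoNLL-U output of a real parser
# (UDPipe, Natasha); field names follow the CoNLL-U convention.

def extract_svo(tokens):
    """Collect (subject, verb, object) triples from one parsed sentence."""
    by_head = {}
    for tok in tokens:
        by_head.setdefault(tok["head"], []).append(tok)
    triples = []
    for tok in tokens:
        if tok["upos"] != "VERB":
            continue
        deps = by_head.get(tok["id"], [])
        subjects = [d["form"] for d in deps if d["deprel"] == "nsubj"]
        objects = [d["form"] for d in deps if d["deprel"] == "obj"]
        for s in subjects:
            for o in objects:
                triples.append((s, tok["form"], o))
    return triples

# "The cat chased the mouse" with a hand-written parse:
sent = [
    {"id": 1, "form": "The",    "upos": "DET",  "head": 2, "deprel": "det"},
    {"id": 2, "form": "cat",    "upos": "NOUN", "head": 3, "deprel": "nsubj"},
    {"id": 3, "form": "chased", "upos": "VERB", "head": 0, "deprel": "root"},
    {"id": 4, "form": "the",    "upos": "DET",  "head": 5, "deprel": "det"},
    {"id": 5, "form": "mouse",  "upos": "NOUN", "head": 3, "deprel": "obj"},
]
print(extract_svo(sent))  # [('cat', 'chased', 'mouse')]
```

The same head/deprel lookup underlies the template-based summarizer: once triples are available, they can be slotted into sentence templates.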
{
"text": "Week 9. Topic modelling The focus of this lecture is topic modeling. First, we formulate the topic modeling problem and ways it can be used to cluster texts or extract topics. We explain the basic probabilistic latent semantic analysis (PLSA) model (HOFMANN, 1999) , that modifies early approaches, which were based on SVD (Dumais, 2004) . We approach the PLSA problem using the Expectation-Minimization (EM) algorithm and introduce the basic performance metrics, such as perplexity and topic coherence.",
"cite_spans": [
{
"start": 249,
"end": 264,
"text": "(HOFMANN, 1999)",
"ref_id": "BIBREF31"
},
{
"start": 323,
"end": 337,
"text": "(Dumais, 2004)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
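The EM procedure for PLSA can be illustrated with a short NumPy sketch. The toy count matrix and the variable names (phi for p(w|t), theta for p(t|d)) are ours for illustration; this is not the BigARTM implementation:

```python
import numpy as np

# Minimal EM for PLSA on a word-document count matrix (words x docs).
# phi approximates p(w|t), theta approximates p(t|d).
rng = np.random.default_rng(0)
n_dw = np.array([[4, 0, 1],
                 [3, 0, 0],
                 [0, 5, 2],
                 [0, 2, 3]], dtype=float)   # 4 words x 3 docs, toy counts
W, D, T = n_dw.shape[0], n_dw.shape[1], 2

phi = rng.random((W, T)); phi /= phi.sum(0)        # p(w|t), columns sum to 1
theta = rng.random((T, D)); theta /= theta.sum(0)  # p(t|d), columns sum to 1

for _ in range(50):
    p_wd = phi @ theta                       # model's p(w|d), W x D
    n_wt = np.zeros((W, T)); n_td = np.zeros((T, D))
    for t in range(T):
        # E-step: responsibility of topic t for each (word, doc) pair
        r = phi[:, [t]] * theta[[t], :] / np.clip(p_wd, 1e-12, None)
        # M-step counts, weighted by observed frequencies
        n_wt[:, t] = (n_dw * r).sum(1)
        n_td[t, :] = (n_dw * r).sum(0)
    phi = n_wt / n_wt.sum(0)
    theta = n_td / n_td.sum(0)

# Perplexity of the collection under the fitted model
loglik = (n_dw * np.log(np.clip(phi @ theta, 1e-12, None))).sum()
perplexity = np.exp(-loglik / n_dw.sum())
print(round(perplexity, 2))
```

Adding a regularizer term to the M-step counts before normalization turns this loop into the ARTM scheme discussed in the lecture.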
{
"text": "As the PLSA problem is ill-posed, we familiarize students with regularization techniques using Additive Regularization for Topic Modeling (ARTM) model (Vorontsov and Potapenko, 2015) as an example. We describe the general EM algorithm for ARTM and some basic regularizers. Then we move towards the Latent Dirichlet Allocation (LDA) model (Blei et al., 2003) and show that the maximum a posteriori estimation for LDA is the special case of the ARTM model with a smoothing or sparsing regularizer (see Fig. 3 for the explanation snippet). We conclude the lecture with a brief introduction to multi-modal ARTM models and show how to generalize different Bayesian topic models based on LDA. We showcase classification, word translation, and trend detection tasks as multimodal models.",
"cite_spans": [
{
"start": 151,
"end": 182,
"text": "(Vorontsov and Potapenko, 2015)",
"ref_id": "BIBREF84"
},
{
"start": 338,
"end": 357,
"text": "(Blei et al., 2003)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 500,
"end": 506,
"text": "Fig. 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "In practical session we consider the models discussed in the lecture in a slightly different order. First, we take a closer look at Gensim realization 6 https://universaldependencies.org/ treebanks/ru_syntagrus/index.html of the LDA model (\u0158eh\u016f\u0159ek and Sojka, 2010) , pick up the model's optimal parameters in terms of perplexity and topic coherence, and visualize the model with pyLDAvis library. Next, we explore BigARTM library, particularly LDA, PLSA, and multi-modal models, and the impact of different regularizers. For all experiments, we use a corpus of Russian-language news from Lenta.ru 7 which allows us to compare the models to each other.",
"cite_spans": [
{
"start": 239,
"end": 264,
"text": "(\u0158eh\u016f\u0159ek and Sojka, 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "Week 10. In this lecture we discussed monolingual seq2seq problems, text summarization and sentence simplification. We start with extractive summarization techniques. The first approach introduced is TextRank (Mihalcea and Tarau, 2004) . We present each step of this approach and explain that any sentence or keyword embeddings can be used to construct a text graph, as required by the method. Thus we refer the students back to earlier lectures, where sentence embeddings were discussed. Next, we move to abstractive summarization techniques. To this end, we present performance metrics, such as ROUGE (Lin, 2004) and METEOR (Banerjee and Lavie, 2005) and briefly overview pre-Transformer architectures, including Pointer networks (See et al., 2017) . Next, we show recent pre-trained Transformer-based models, which aim at multi-task learning, including summarization. To this end, we discuss pre-training approaches of T5 (Raffel et al., 2020) and BART (Lewis et al., 2020) , and how they help to improve the performance of mono-lingual se2seq tasks. Unfortunately, when this lecture was created, multilingual versions of these models were not available, so they are left out of the scope. Finally, we talk about sentence simplification task (Coster and Kauchak, 2011; Alva-Manchego et al., 2020) and its social impact. We present SARI (Xu et al., 2016) as a metric for sentence simplification performance and state, explain how T5 or BART can be utilized for the task.",
"cite_spans": [
{
"start": 209,
"end": 235,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF52"
},
{
"start": 603,
"end": 614,
"text": "(Lin, 2004)",
"ref_id": "BIBREF44"
},
{
"start": 732,
"end": 750,
"text": "(See et al., 2017)",
"ref_id": "BIBREF72"
},
{
"start": 925,
"end": 946,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF63"
},
{
"start": 956,
"end": 976,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF41"
},
{
"start": 1245,
"end": 1271,
"text": "(Coster and Kauchak, 2011;",
"ref_id": "BIBREF19"
},
{
"start": 1272,
"end": 1299,
"text": "Alva-Manchego et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 1339,
"end": 1356,
"text": "(Xu et al., 2016)",
"ref_id": "BIBREF90"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "The practical session is devoted to extractive summarization and TextRank algorithm. We are urged to stick to extractive summarization, as Russian lacks annotated datasets, but, at the same time, the task is demanded by in industry-extractive summarization compromises than between the need for summarization techniques and the absence of training datasets. Nevertheless, we used annotated English datasets to show how performance metrics can be used for the task. The CNN/DailyMail articles are used as an example of a dataset for the summarization task. As there is no standard benchmark for text summarization in Russian, we have to use English to measure different models' performance. We implement the TextRank algorithm and compare it with the algorithm from the Net-workX library (Hagberg et al., 2008) . Also, we demonstrate how to estimate the performance of the summarization by calculating the ROUGE metric for the resulting algorithm using the PyRouge library 8 . This practical session allows us to refer back the students to sentence embedding models and showcase another application of sentence vectors.",
"cite_spans": [
{
"start": 787,
"end": 809,
"text": "(Hagberg et al., 2008)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
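The TextRank procedure implemented in this session can be sketched as follows. The toy vectors stand in for real sentence embeddings, and the power-iteration PageRank is our simplified illustration, not the NetworkX implementation:

```python
import numpy as np

# TextRank sketch: rank sentences by PageRank over a similarity graph.
# The toy 3-d vectors stand in for sentence embeddings from any of the
# models covered in earlier lectures.
def textrank(vectors, damping=0.85, iters=100):
    v = np.asarray(vectors, dtype=float)
    # cosine similarities as edge weights, no self-loops
    unit = v / np.linalg.norm(v, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)
    # row-normalise into a transition matrix
    m = sim / np.clip(sim.sum(1, keepdims=True), 1e-12, None)
    n = len(v)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * m.T @ scores
    return scores

vecs = [[1.0, 0.1, 0.0],   # sentences 0-2 are mutually similar
        [0.9, 0.2, 0.1],
        [0.8, 0.0, 0.3],
        [0.0, 0.1, 1.0]]   # sentence 3 is an outlier
scores = textrank(vecs)
print(scores)  # the highest-scoring sentences form the extractive summary
```

The outlier sentence receives the lowest centrality score, so it would be dropped from the summary first.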
{
"text": "Week 11. The penultimate lecture approaches Question-Answering (QA) systems and chat-bot technologies. We present multiple real-life industrial applications, where chat-bots and QA technologies are used, ranging from simple taskoriented chat-bots for food ordering to help desk or hotline automation. Next, we formulate the core problems of task-oriented chat-bots, which are intent classification and slot-filling (Liu and Lane, 2016) and revise methods, to approach them. After that, we introduce the concept of a dialog scenario graph and show how such a graph can guide users to complete their requests. Without going deep into technical details, we show how readymade solutions, such as Google Dialogflow 9 , can be used to create task-oriented chat-bots. Next, we move towards QA models, of which we pay more attention to information retrieval-based (IRbased) approaches and SQuAD-style (Rajpurkar et al., 2016) approaches. Since natural language generation models are not mature enough (at least for Russian) to be used in free dialog, we explain how IR-based techniques imitate a conversation with a user. Finally, we show how BERT can be used to tackle the SQuAD problem. The lecture is concluded by comparing industrial dialog assistants created by Russian companies, such as Yandex.Alisa or Mail.ru Marusya.",
"cite_spans": [
{
"start": 415,
"end": 435,
"text": "(Liu and Lane, 2016)",
"ref_id": "BIBREF45"
},
{
"start": 893,
"end": 917,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "In the practical session we demonstrate several examples of using Transformer-based models for QA task. Firstly, we try to finetune Electra model (Clark et al., 2020) on COVID-19 questions dataset 10 and BERT on SQuAD 2.0 (Rajpurkar et al., 2018) (we use code from hugginface tutorial 11 for the latter). Next, we show an example of usage of pretrained model for Russian-language data from DeepPavlov project. Finally, we explore how to use BERT for joint intent classification and slot filling task .",
"cite_spans": [
{
"start": 146,
"end": 166,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "Week 12. The last lecture wraps up the course by discussing knowledge graphs (KG) and some of their applications for QA systems. We revise core information extraction problems, such as NER and relation detection, and show how they can be used to extract a knowledge graph from unstructured texts (Paulheim, 2017) . We touch upon the entity linking problem but do not go deep into details. To propose to students an alternative view to information extraction, we present machine reading comprehension approaches for NER (Li et al., 2019a) and relation detection (Li et al., 2019b) , referring to the previous lecture. Finally, we close the course by revising all topics covered. We recite the evolution of text representation models from bag-of-words to BERT. We show that all the problems discussed throughout the course fall into one of three categories: (i) text classification or sentence pair classification, (ii) sequence tagging, (iii) sequence-to-sequence transformation. We draw attention to the fact that the most recent models can tackle all of the problem categories. Last but not least we revise, how all of these problem statements are utilized in real-life applications.",
"cite_spans": [
{
"start": 296,
"end": 312,
"text": "(Paulheim, 2017)",
"ref_id": "BIBREF58"
},
{
"start": 519,
"end": 537,
"text": "(Li et al., 2019a)",
"ref_id": "BIBREF42"
},
{
"start": 561,
"end": 579,
"text": "(Li et al., 2019b)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "The practical session in this week is dedicated to information extraction tasks with Stanford CoreNLP library . The session's main idea is to demonstrate using the tool for constructing knowledge graphs based on natural text. We consider different ways of using the library and experimented with using the library to solve different NLP tasks that were already considered in the course: tokenization, lemmatization, POS-tagging, and dependency parsing. The library includes models for 53 languages, so we consider examples of solving these tasks for English and Russian texts. Besides, relation extraction is considered using the Open Information Extraction (Ope-nIE) module from the CoreNLP library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syllabus",
"sec_num": "3"
},
{
"text": "The course consists of multiple ungraded quiz assignments, 11 graded quiz assignments, three graded coding assignments. Grading is performed automatically in a Kaggle-like fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Home works",
"sec_num": "4"
},
{
"text": "Every video lecture is followed by an ungraded quiz, consisting of 1-2 questions. A typical question address the core concepts introduced:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "\u2022 What kind of vectors are more common for word embedding models? A1: dense (true), A2: sparse (false)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "\u2022 What kind of layers are essential for GPT-2 model? A1: transformer stacks (true), A2: recurrent layers (false), A3: convolutional layers (false), A4: dense layers (false)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "A graded test is conducted every week, except the very last one. It consists of 12-15 questions, which we tried to split into three parts, being more or less of the same complexity. First part questions about main concepts and ideas introduced during the week. These questions are a bit more complicated than after video ones:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "\u2022 What part of an encoder-decoder model solves the language modeling problem, i.e., the next word prediction? A1: encoder (false), A2: decoder (true)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "\u2022 What are the BPE algorithm units? A1: syllables (false), A2: morphemes (false), A3: n\u2212grams (true), A4: words (false)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
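The BPE quiz question above rests on the merge loop that grows character n-gram units from single characters. A minimal sketch (the toy word list is ours for illustration):

```python
from collections import Counter

# Minimal BPE merge loop: units start as characters and grow into
# frequent character n-grams, as in the quiz question above.
def bpe_merges(words, num_merges):
    # each word is represented as a tuple of current subword units
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1]); i += 2
                else:
                    merged.append(word[i]); i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

m = bpe_merges(["lower", "lowest", "low", "low"], 2)
print(m)  # two merges produce the subword unit "low"
```

After two merges the frequent stem "low" has become a single unit, while rare suffixes stay as characters, which is why "character n-grams" is the correct quiz answer.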
{
"text": "Second part of the quiz asks the students to conduct simple computations by hand:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "\u2022 Given a detailed description of an neural architecture, compute the number of parameters;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "\u2022 Given a gold-standard NER annotation and a system output, compute token-based and span-based micro F 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
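The token-based variant of this micro F1 computation can be sketched as follows; the gold and predicted tag sequences are made-up examples, and counts are pooled over all tokens before computing precision and recall:

```python
# Token-level micro-averaged F1 for NER: pool true/false positives and
# false negatives over all tokens, treating "O" as the non-entity tag.
def micro_f1(gold, pred, outside="O"):
    tp = sum(g == p != outside for g, p in zip(gold, pred))
    fp = sum(p != outside and g != p for g, p in zip(gold, pred))
    fn = sum(g != outside and g != p for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = ["B-PER", "I-PER", "O", "B-LOC", "O"]
pred = ["B-PER", "O",     "O", "B-LOC", "B-ORG"]
print(round(micro_f1(gold, pred), 3))  # 0.667
```

The span-based variant asked for in the quiz differs only in that whole entity spans, not individual tokens, are counted as units.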
{
"text": "The third part of the quiz asks to complete a simple programming assignment or asks about the code presented in practical sessions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "\u2022 Given a pre-trained language model, compute perplexity of a test sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
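The perplexity computation behind this assignment can be sketched with a hand-built bigram model; the probabilities below are invented for illustration, not taken from a trained model:

```python
import math

# Perplexity of a test sentence under a toy bigram language model.
# The probability table is made up for illustration.
bigram_p = {
    ("<s>", "the"): 0.5, ("the", "cat"): 0.2,
    ("cat", "sleeps"): 0.4, ("sleeps", "</s>"): 0.9,
}

def perplexity(tokens, probs):
    # pad with sentence boundaries and score each adjacent pair
    pairs = list(zip(["<s>"] + tokens, tokens + ["</s>"]))
    log_p = sum(math.log(probs[pair]) for pair in pairs)
    # perplexity = exp of the negative average log-probability
    return math.exp(-log_p / len(pairs))

print(round(perplexity(["the", "cat", "sleeps"], bigram_p), 3))
```

Lower perplexity means the model finds the sentence less surprising, which is exactly what the quiz question probes.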
{
"text": "\u2022 Does DeepPavlov cross-lingual NER model require to announce the language of the input text?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "For convenience and to avoid format ambiguity, all questions are in multiple-choice format. For questions, which require a numerical answer, we provided answer options in the form of intervals, with one of the endpoints excluded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "Each quiz is estimated on a 10 point scale. All questions have equal weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "The final week is followed by a comprehensive quiz covering all topics studied. This quiz is obligatory for those students who desire to earn a certificate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quiz Assignments",
"sec_num": "4.1"
},
{
"text": "There are three coding assignments concerning the following topics: (i) text classification, (ii) sequence labeling, (iii) topic modeling. Assignments grading is binary. Text classification and sequence labeling assignments require students to beat the score of the provided baseline submission. Topic modeling assignment is evaluated differently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coding assignments",
"sec_num": "4.2"
},
{
"text": "All the coding tasks provide students with the starter code and sample submission bundles. The number of student's submissions is limited. Sample submission bundles illustrate the required submission format and could serve as the random baseline for each task. Submissions are evaluated using the Moodle 12 (Dougiamas and Taylor, 2003) CodeRunner 13 (Lobb and Harlow, 2016) plugin.",
"cite_spans": [
{
"start": 307,
"end": 335,
"text": "(Dougiamas and Taylor, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 350,
"end": 373,
"text": "(Lobb and Harlow, 2016)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coding assignments",
"sec_num": "4.2"
},
{
"text": "labeling coding assignments Text classification assignment is based on the Harry Potter and the Action Prediction Challenge from Natural Language dataset (Vilares and G\u00f3mez-Rodr\u00edguez, 2019) , which uses fiction fantasy texts. Here, the task is the following: given some text preceding a spell occurrence in the text, predict this spell name. Students are provided with starter code in Jupyter notebooks (P\u00e9rez and Granger, 2007) . Starter code implements all the needed data pre-processing, shows how to implement the baseline Logistic Regression model, and provides code needed to generate the submission.",
"cite_spans": [
{
"start": 154,
"end": 189,
"text": "(Vilares and G\u00f3mez-Rodr\u00edguez, 2019)",
"ref_id": "BIBREF82"
},
{
"start": 403,
"end": 428,
"text": "(P\u00e9rez and Granger, 2007)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification and sequence",
"sec_num": "4.2.1"
},
{
"text": "Students' goal is to build three different models performing better than the baseline. The first one should differ from the baseline model by only hyperparameter values. The second one should be a Gradient Boosting model. The third model to build is a CNN model. All the three models' predictions on the provided testing dataset should be then submitted to the scoring system. Submissions, where all the models beat the baseline models classification F1-score, are graded positively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification and sequence",
"sec_num": "4.2.1"
},
{
"text": "Sequence labeling Sequence labeling assignment is based on the LitBank data (Bamman et al., 2019) . Here, the task is to given fiction texts, perform a NER labeling. Students are provided with a starter code for data pre-processing and submission packaging. Starter code also illustrates building a recurrent neural model using the PyTorch framework, showing how to compose a single-layer unidirectional RNN model. Students' goal is to build a bidirectional LSTM model that would outperform the baseline. Submissions are based on the held-out testing subset provided by the course team.",
"cite_spans": [
{
"start": 76,
"end": 97,
"text": "(Bamman et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification and sequence",
"sec_num": "4.2.1"
},
{
"text": "Topic modeling assignment motivation is to give students practical experience with LDA (Blei et al., 2003) algorithm. The assignment is organized as follows: first, students have to download and preprocess Wikipedia texts.",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "LDA (Blei et al., 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic modeling assignment",
"sec_num": "4.2.2"
},
{
"text": "Then, the following experiment should be conducted. The experiment consists of training and exploring an LDA model for the given collection of texts. The task is to build several LDA models for the given data: models differ only in the configured number of topics. Students are asked to explore the obtained models using the pyLDAvis (Sievert and Shirley, 2014) tool. This stage is not evaluated. Finally, students are asked to submit the topic labels that LDA models assign to words provided by the course team. Such a prediction should be performed for each of the obtained models.",
"cite_spans": [
{
"start": 334,
"end": 361,
"text": "(Sievert and Shirley, 2014)",
"ref_id": "BIBREF74"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic modeling assignment",
"sec_num": "4.2.2"
},
{
"text": "The course is hosted on OpenEdu 14 -an educational platform created by the Association \"National Platform for Open Education\", established by leading Russian universities. Our course and all courses on the platform are available free of charge so that everyone can access all materials (including videos, practical Jupyter notebooks, tests, and coding assessments). The platform also provides a forum where course participants can ask questions or discuss the material with each other and lecturers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Platform description",
"sec_num": "5"
},
{
"text": "First of all, we expect the students to understand basic formulations of the NLP tasks, such as text classification, sentence pair modeling, sequence tagging, and sequence-to-sequence transformation. We expect the students to be able to recall core terminology and use it fluently. In some weeks, we provide links to extra materials, mainly in English, so that the students can learn more about the topic themselves. We hope that after completing the course, the students become able to read those materials. Secondly, we anticipate that after completing the course, the students are comfortable using popular Python tools to process texts in Russian and English and utilize pre-trained models. Thirdly, we hope that the students can state and approach their tasks related to NLP, using the knowledge acquired, conducting experiments, and evaluating the results correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expected outcomes",
"sec_num": "6"
},
{
"text": "The early feedback we have received so far is positive. Although the course has only been advertised so far to a broader audience, we know that there are two groups interested in the course. First, some students come to study at their own will. Secondly, selected topics were used in offline courses in an inverse classroom format or as additional materials. The students note that our course is a good starting point for studying NLP and helps navigate a broad range of topics and learn the terminology. Some of the students note that it was easy for them to learn in Russian, and now, as they feel more comfortable with the core concepts, they can turn to read detailed and more recent sources. Unfortunately, programming assignments turn out to be our weak spot, as there are challenging to complete, and little feedback on them can be provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "7"
},
{
"text": "We ask all participants to fill in a short survey after they enroll in the course. So far, we have received about 100 responses. According to the results, most students (78%) have previously taken online courses, but only 24% of them have experience with courses from foreign universities. The average age of course participants is 32 years; most of them already have or are getting a higher education (see Fig. 4 for more details). Almost half of the students are occupied in Computer Science area, 20% have a background in Humanities, followed by Engineering Science (16%).",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 413,
"text": "Fig. 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Feedback",
"sec_num": "7"
},
{
"text": "We also ask students about their motivation in the form of a multiple-choice question: almost half of them (46%) stated that they want to improve their qualification either to improve at their current job (33%) or to change their occupation (13%), and 20% answered they enrolled the course for research and academic purposes. For the vast majority of the student, the reputation of HSE university was the key factor to select this course among other available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "7"
},
{
"text": "This paper introduced and described a new massive open online course on Natural Language Processing targeted at Russian-speaking students. This twelve-week course was designed and recorded during 2020 and launched by the end of the year. In the lectures and practical session, we managed to document a paradigm shift caused by the discovery and widespread use of pre-trained Transformerbased language models. We inherited the best of two worlds, showing how to utilize both static word embeddings in a more traditional machine learning setup and contextualized word embeddings in the most recent fashion. The course's theoretical outcome is understanding and knowing core concepts and problem formulations, while the practical outcome covers knowing how to use tools to process text in Russian and English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Early feedback we got from the students is positive. As every week was devoted to a new topic, they did not find it difficult to keep being engaged. The ways we introduce the core problem formulations and showcase different tools to process texts in Russian earned approval. What is more, the presented course is used now as supplementary material in a few off-line educational programs to the best of our knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Further improvements and adjustments, which could be made for the course, include new home works related to machine translation or monolingual sequence-to-sequence tasks and the development of additional materials in written form to support mathematical calculations, avoided in the video lecture for the sake of time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "https://pytorch.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.coursera.org/learn/ nlp-sequence-models 3 https://spacy.io 4 https://natasha.github.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pytorch.org/tutorials/ intermediate/seq2seq_translation_ tutorial.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/yutkin/Lenta. Ru-News-Dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "urlhttps://github.com/andersjo/pyrouge 9 https://cloud.google.com/dialogflow",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/xhlulu/covid-qa 11 https://huggingface.co/ transformers/custom_datasets.html# question-answering-with-squad-2-0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://moodle.org/ 13 https://coderunner.org.nz/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://npoed.ru/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The theory of parsing, translation, and compiling",
"authors": [
{
"first": "Alfred",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfred V Aho and Jeffrey D Ullman. 1972. The theory of parsing, translation, and compiling.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The illustrated transformer. Jay Alammar blog",
"authors": [
{
"first": "Jay",
"middle": [],
"last": "Alammar",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay Alammar. 2015. The illustrated transformer. Jay Alammar blog.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Data-driven sentence simplification: Survey and benchmark",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Alva-Manchego",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2020,
"venue": "Computational Linguistics",
"volume": "46",
"issue": "1",
"pages": "135--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Alva-Manchego, Carolina Scarton, and Lu- cia Specia. 2020. Data-driven sentence simplifica- tion: Survey and benchmark. Computational Lin- guistics, 46(1):135-187.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyung",
"middle": [
"Hyun"
],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An annotated dataset of literary entities",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Sejal",
"middle": [],
"last": "Popat",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2138--2144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman, Sejal Popat, and Sheng Shen. 2019. An annotated dataset of literary entities. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2138-2144.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evalu- ation measures for machine translation and/or sum- marization, pages 65-72.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A neural probabilistic language model. The journal of machine learning research",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. The journal of machine learning re- search, 3:1137-1155.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language or ideas? Language",
"authors": [
{
"first": "Leonard",
"middle": [],
"last": "Bloomfield",
"suffix": ""
}
],
"year": 1936,
"venue": "",
"volume": "",
"issue": "",
"pages": "89--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonard Bloomfield. 1936. Language or ideas? Lan- guage, pages 89-95.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Nlp -how will it be in russian? Habr blog",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Braslavski",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Braslavski. 2017. Nlp -how will it be in russian? Habr blog.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "How to evaluate humorous response generation, seriously?",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Braslavski",
"suffix": ""
},
{
"first": "Vladislav",
"middle": [],
"last": "Blinov",
"suffix": ""
},
{
"first": "Valeria",
"middle": [],
"last": "Bolotova",
"suffix": ""
},
{
"first": "Katya",
"middle": [],
"last": "Pertsova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Human Information Interaction & Retrieval",
"volume": "",
"issue": "",
"pages": "225--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Braslavski, Vladislav Blinov, Valeria Bolotova, and Katya Pertsova. 2018. How to evaluate humor- ous response generation, seriously? In Proceedings of the 2018 Conference on Human Information In- teraction & Retrieval, pages 225-228.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deeppavlov: Open-source library for dialogue systems",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Burtsev",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Seliverstov",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Airapetyan",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Arkhipov",
"suffix": ""
},
{
"first": "Dilyara",
"middle": [],
"last": "Baymurzina",
"suffix": ""
},
{
"first": "Nickolay",
"middle": [],
"last": "Bushkov",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Gureenkova",
"suffix": ""
},
{
"first": "Taras",
"middle": [],
"last": "Khakhulin",
"suffix": ""
},
{
"first": "Yurii",
"middle": [],
"last": "Kuratov",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Kuznetsov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "122--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Burtsev, Alexander Seliverstov, Rafael Airapetyan, Mikhail Arkhipov, Dilyara Baymurz- ina, Nickolay Bushkov, Olga Gureenkova, Taras Khakhulin, Yurii Kuratov, Denis Kuznetsov, et al. 2018. Deeppavlov: Open-source library for dia- logue systems. In Proceedings of ACL 2018, System Demonstrations, pages 122-127.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Smote: synthetic minority over-sampling technique",
"authors": [
{
"first": "V",
"middle": [],
"last": "Nitesh",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"W"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"O"
],
"last": "Bowyer",
"suffix": ""
},
{
"first": "W Philip",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kegelmeyer",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of artificial intelligence research",
"volume": "16",
"issue": "",
"pages": "321--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. Smote: synthetic minority over-sampling technique. Journal of artifi- cial intelligence research, 16:321-357.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT for joint intent classification and slot filling",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhu",
"middle": [],
"last": "Zhuo",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Zhu Zhuo, and Wen Wang. 2019. BERT for joint intent classification and slot filling. CoRR, abs/1902.10909.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS 2014 Workshop on Deep Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence mod- eling. In NIPS 2014 Workshop on Deep Learning, December 2014.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "ELECTRA: Pretraining text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre- training text encoders as discriminators rather than generators. In ICLR.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "\u00c9douard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, \u00c9douard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Senteval: An evaluation toolkit for universal sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representa- tions. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Simple english wikipedia: a new text simplification task",
"authors": [
{
"first": "William",
"middle": [],
"last": "Coster",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kauchak",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "665--669",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Coster and David Kauchak. 2011. Simple en- glish wikipedia: a new text simplification task. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 665-669.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Moodle: Using learning communities to create an open source course management system",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Dougiamas",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Dougiamas and Peter Taylor. 2003. Moo- dle: Using learning communities to create an open source course management system.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Latent semantic analysis. Annual review of information science and technology",
"authors": [
{
"first": "T",
"middle": [],
"last": "Susan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "38",
"issue": "",
"pages": "188--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan T Dumais. 2004. Latent semantic analysis. An- nual review of information science and technology, 38(1):188-230.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Active learning strategies for multi-label text classification",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2009,
"venue": "European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "102--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2009. Active learning strategies for multi-label text classification. In European Conference on Information Retrieval, pages 102-113. Springer.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Languageagnostic bert sentence embedding",
"authors": [
{
"first": "Fangxiaoyu",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.01852"
]
},
"num": null,
"urls": [],
"raw_text": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language- agnostic bert sentence embedding. arXiv preprint arXiv:2007.01852.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Allennlp: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Nelson",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language process- ing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1-6.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural network methods for natural language processing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "10",
"issue": "",
"pages": "1--309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2017. Neural network methods for nat- ural language processing. Synthesis lectures on hu- man language technologies, 10(1):1-309.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Nonautoregressive neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.02281"
]
},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor OK Li, and Richard Socher. 2017. Non- autoregressive neural machine translation. arXiv preprint arXiv:1711.02281.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Exploring network structure, dynamics, and function using networkx",
"authors": [
{
"first": "Aric",
"middle": [],
"last": "Hagberg",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Swart",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Chult",
"suffix": ""
}
],
"year": 2008,
"venue": "Los Alamos National Lab.(LANL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aric Hagberg, Pieter Swart, and Daniel S Chult. 2008. Exploring network structure, dynamics, and func- tion using networkx. Technical report, Los Alamos National Lab.(LANL), Los Alamos, NM (United States).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Iterative backtranslation for neural machine translation",
"authors": [
{
"first": "Duy",
"middle": [],
"last": "Vu Cong",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "18--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Probabilistic latent semantic analysis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. Conf. on Uncertainty in Artificial Intelligence (UAI)",
"volume": "",
"issue": "",
"pages": "289--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T HOFMANN. 1999. Probabilistic latent semantic analysis. In Proc. Conf. on Uncertainty in Artificial Intelligence (UAI), 1999, pages 289-296.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text de- generation. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Jurafsky and James H Martin. 2000. Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The unreasonable effectiveness of recurrent neural networks",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
}
],
"year": 2015,
"venue": "Andrej Karpathy blog",
"volume": "21",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy. 2015. The unreasonable effective- ness of recurrent neural networks. Andrej Karpathy blog, 21:23.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. CoRR, abs/1408.5882.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P17-4012"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proc. ACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Improved backing-off for m-gram language modeling",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Kneser",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1995,
"venue": "1995 international conference on acoustics, speech, and signal processing",
"volume": "1",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In 1995 international conference on acoustics, speech, and signal processing, volume 1, pages 181-184. IEEE.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Morphological analyzer and generator for russian and ukrainian languages",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Korobov",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Analysis of Images, Social Networks and Texts",
"volume": "",
"issue": "",
"pages": "320--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Korobov. 2015. Morphological analyzer and generator for russian and ukrainian languages. In International Conference on Analysis of Images, So- cial Networks and Texts, pages 320-332. Springer.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Interna- tional conference on machine learning, pages 1188- 1196. PMLR.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A unified mrc framework for named entity recognition",
"authors": [
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jingrong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yuxian",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Qinghong",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.11476"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2019a. A unified mrc framework for named entity recognition. arXiv preprint arXiv:1910.11476.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Entity-relation extraction as multi-turn question answering",
"authors": [
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Zijun",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xiayu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Duo",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Mingxin",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1340--1350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019b. Entity-relation extraction as multi-turn question an- swering. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 1340-1350.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Attention-based recurrent neural network models for joint intent detection and slot filling",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2016,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "685--689",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Ian Lane. 2016. Attention-based recur- rent neural network models for joint intent detection and slot filling. Interspeech 2016, pages 685-689.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Coderunner: A tool for assessing computer programming skills",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Lobb",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Harlow",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "7",
"issue": "",
"pages": "47--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Lobb and Jenny Harlow. 2016. Coderunner: A tool for assessing computer programming skills. ACM Inroads, 7(1):47-51.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1064--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "On measuring social biases in sentence encoders",
"authors": [
{
"first": "Chandler",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "622--628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel Bowman, and Rachel Rudinger. 2019. On measur- ing social biases in sentence encoders. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of de- pendency parsers. In Proceedings of the 43rd An- nual Meeting of the Association for Computational Linguistics (ACL'05), pages 91-98.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Textrank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In Proceedings of the 2004 con- ference on empirical methods in natural language processing, pages 404-411.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems- Volume 2, pages 3111-3119.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Universal Dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Haji\u010d, Christopher D. Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016a. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666, Portoro\u017e, Slovenia. European Language Resources Associa- tion (ELRA).",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Hajic, Christopher D Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016b. Universal dependen- cies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Understanding lstm networks",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Olah",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Olah. 2015. Understanding lstm networks. Christopher Olah blog.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Knowledge graph refinement: A survey of approaches and evaluation methods. Semantic web",
"authors": [
{
"first": "Heiko",
"middle": [],
"last": "Paulheim",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "8",
"issue": "",
"pages": "489--508",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heiko Paulheim. 2017. Knowledge graph refinement: A survey of approaches and evaluation methods. Se- mantic web, 8(3):489-508.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "IPython: a system for interactive scientific computing",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"E"
],
"last": "Granger",
"suffix": ""
}
],
"year": 2007,
"venue": "Computing in Science and Engineering",
"volume": "9",
"issue": "3",
"pages": "21--29",
"other_ids": {
"DOI": [
"10.1109/MCSE.2007.53"
]
},
"num": null,
"urls": [],
"raw_text": "Fernando P\u00e9rez and Brian E. Granger. 2007. IPython: a system for interactive scientific computing. Com- puting in Science and Engineering, 9(3):21-29.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of NAACL-HLT, pages 2227-2237.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language mod- els are unsupervised multitask learners.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the lim- its of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Know what you don't know: Unanswerable questions for squad",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for squad. CoRR, abs/1806.03822.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Squad: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA. http://is.muni.cz/ publication/884893/en.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "A primer in bertology: What we know about how bert works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2021,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "842--866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2021. A primer in bertology: What we know about how bert works. Transactions of the Association for Computational Linguistics, 8:842-866.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Poor man's bert: Smaller and faster transformer models",
"authors": [
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.03844"
]
},
"num": null,
"urls": [],
"raw_text": "Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2020. Poor man's bert: Smaller and faster transformer models. arXiv preprint arXiv:2004.03844.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"Tjong",
"Kim"
],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Tjong Kim Sang and Fien De Meulder. 2003. In- troduction to the conll-2003 shared task: Language- independent named entity recognition. In Proceed- ings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Enhanced english universal dependencies: An improved representation for natural language understanding tasks",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "2371--2378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Schuster and Christopher D Manning. 2016. Enhanced english universal dependencies: An im- proved representation for natural language under- standing tasks. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Eval- uation (LREC'16), pages 2371-2378.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "A fast morphological algorithm with unknown word guessing induced by a dictionary for a web search engine",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Segalovich",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Segalovich. A fast morphological algorithm with unknown word guessing induced by a dictionary for a web search engine.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Ldavis: A method for visualizing and interpreting topics",
"authors": [
{
"first": "Carson",
"middle": [],
"last": "Sievert",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Shirley",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the workshop on interactive language learning, visualization, and interfaces",
"volume": "",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carson Sievert and Kenneth Shirley. 2014. Ldavis: A method for visualizing and interpreting topics. In Proceedings of the workshop on interactive lan- guage learning, visualization, and interfaces, pages 63-70.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "A general language model for information retrieval",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the eighth international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "316--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Song and W Bruce Croft. 1999. A general language model for information retrieval. In Proceedings of the eighth international conference on Information and knowledge management, pages 316-321.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "UD-Pipe: Trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "4290--4297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Straka, Jan Haji\u010d, and Jana Strakov\u00e1. 2016. UD- Pipe: Trainable pipeline for processing CoNLL-U files performing tokenization, morphological analy- sis, POS tagging and parsing. In Proceedings of the Tenth International Conference on Language Re- sources and Evaluation (LREC'16), pages 4290- 4297, Portoro\u017e, Slovenia. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "88--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Straka and Jana Strakov\u00e1. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 88-99.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "Generating text with recurrent neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Martens",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2011,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, James Martens, and Geoffrey E Hin- ton. 2011. Generating text with recurrent neural net- works. In ICML.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27:3104-3112.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "Elements of Structural Syntax",
"authors": [
{
"first": "Lucien",
"middle": [],
"last": "Tesni\u00e8re",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucien Tesni\u00e8re. 2015. Elements of Structural Syntax. John Benjamins Publishing Company.",
"links": null
},
"BIBREF81": {
"ref_id": "b81",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 6000-6010.",
"links": null
},
"BIBREF82": {
"ref_id": "b82",
"title": "Harry Potter and the action prediction challenge from natural language",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2124--2130",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1218"
]
},
"num": null,
"urls": [],
"raw_text": "David Vilares and Carlos G\u00f3mez-Rodr\u00edguez. 2019. Harry Potter and the action prediction challenge from natural language. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2124-2130, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF83": {
"ref_id": "b83",
"title": "BigARTM: Open source library for regularized multimodal topic modeling of large collections",
"authors": [
{
"first": "Konstantin",
"middle": [],
"last": "Vorontsov",
"suffix": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Frei",
"suffix": ""
},
{
"first": "Murat",
"middle": [],
"last": "Apishev",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Romov",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Dudarenko",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Analysis of Images, Social Networks and Texts",
"volume": "",
"issue": "",
"pages": "370--381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konstantin Vorontsov, Oleksandr Frei, Murat Apishev, Peter Romov, and Marina Dudarenko. 2015. Bi- gartm: Open source library for regularized multi- modal topic modeling of large collections. In Inter- national Conference on Analysis of Images, Social Networks and Texts, pages 370-381. Springer.",
"links": null
},
"BIBREF84": {
"ref_id": "b84",
"title": "Additive regularization of topic models",
"authors": [
{
"first": "Konstantin",
"middle": [],
"last": "Vorontsov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Potapenko",
"suffix": ""
}
],
"year": 2015,
"venue": "Machine Learning",
"volume": "101",
"issue": "1",
"pages": "303--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konstantin Vorontsov and Anna Potapenko. 2015. Ad- ditive regularization of topic models. Machine Learning, 101(1):303-323.",
"links": null
},
"BIBREF85": {
"ref_id": "b85",
"title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in Neural Infor- mation Processing Systems, 32.",
"links": null
},
"BIBREF86": {
"ref_id": "b86",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019b. Glue: A multi-task benchmark and analysis platform for natural language understanding. In 7th Inter- national Conference on Learning Representations, ICLR 2019.",
"links": null
},
"BIBREF87": {
"ref_id": "b87",
"title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6383--6389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Wei and Kai Zou. 2019. Eda: Easy data augmen- tation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 6383-6389.",
"links": null
},
"BIBREF88": {
"ref_id": "b88",
"title": "Reverse-engineering satire, or \"paper on computational humor accepted despite making serious advances\"",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Horvitz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7265--7272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert West and Eric Horvitz. 2019. Reverse- engineering satire, or \"paper on computational hu- mor accepted despite making serious advances\". In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 7265-7272.",
"links": null
},
"BIBREF89": {
"ref_id": "b89",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF90": {
"ref_id": "b90",
"title": "Optimizing statistical machine translation for text simplification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Quanze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "401--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.",
"links": null
},
"BIBREF91": {
"ref_id": "b91",
"title": "Q8BERT: Quantized 8Bit BERT",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Zafrir",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Boudoukh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Izsak",
"suffix": ""
},
{
"first": "Moshe",
"middle": [],
"last": "Wasserblat",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.06188"
]
},
"num": null,
"urls": [],
"raw_text": "Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8bert: Quantized 8bit bert. arXiv preprint arXiv:1910.06188.",
"links": null
},
"BIBREF92": {
"ref_id": "b92",
"title": "Unified Vision-language Pre-training for Image Captioning and VQA",
"authors": [
{
"first": "Luowei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hamid",
"middle": [],
"last": "Palangi",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Houdong",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Corso",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "13041--13049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. 2020. Unified Vision-language Pre-training for Image Captioning and VQA. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13041- 13049.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "One slide from Lecture 2. Difference between raw texts (top line), bag-of-words (middle line), and bag-of-vectors (bottom line). Background words: text, words, vectors.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "To spice up the lectures, the lecturer is dressed in an ELMo costume",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "One slide from Lecture 9. Sparsification of an ARTM model explained.",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "The results of survey among course participants. Left: current educational level. Right: professional area.",
"num": null,
"uris": null
}
}
}
}