| { |
| "paper_id": "2022", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T01:09:51.264523Z" |
| }, |
| "title": "Keynote Talk: ML and NLP for Language Learning at Scale", |
| "authors": [ |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Hernandez", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Kristopher", |
| "middle": [], |
| "last": "Kyle", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Irina", |
| "middle": [], |
| "last": "Maslowski", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Sandeep", |
| "middle": [], |
| "last": "Mathias", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Janet", |
| "middle": [], |
| "last": "Mee", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Marcos", |
| "middle": [], |
| "last": "Zampieri", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Klinton", |
| "middle": [ |
| "Bicknell" |
| ], |
| "last": "Duolingo", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [ |
| "I" |
| ], |
| "last": "Cristea", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Fiacco", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "As scalable learning technologies become ubiquitous, it generates a large amount of student data, which can be used with machine learning and NLP to develop new instructional technologies, such as personalized practice schedules and adaptive lessons. Additionally, machine learning and NLP are uniquely poised to solve the problems inherent in scaling language instruction to a large number of languages and courses. In this talk, I will describe several projects illustrating these two uses of ML and NLP in language learning at scale at Duolingo-the world's largest language education platform with over 100 courses and around 40 million monthly active learners.", |
| "pdf_parse": { |
| "paper_id": "2022", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "As scalable learning technologies become ubiquitous, it generates a large amount of student data, which can be used with machine learning and NLP to develop new instructional technologies, such as personalized practice schedules and adaptive lessons. Additionally, machine learning and NLP are uniquely poised to solve the problems inherent in scaling language instruction to a large number of languages and courses. In this talk, I will describe several projects illustrating these two uses of ML and NLP in language learning at scale at Duolingo-the world's largest language education platform with over 100 courses and around 40 million monthly active learners.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "This year marks the 17th edition of the Workshop on Innovative Use of NLP for Building Educational Applications. We received an impressive number of 66 submissions, from which we accepted 4 papers as oral and 27 as poster presentations, for an overall acceptance rate of 47 percent. We in the Organizing Committee were excited to see so many truly diverse and excellent submissions and selecting the ones to be presented at the workshop was often a hard decision. The papers accepted were selected on the basis of several factors, including the relevance to a core educational problem space, the novelty of the approach or domain, and the strength of the research. As always, excellence in research was one of the main factors considered. Each paper was reviewed by at least three members of the Program Committee who we believed to be most appropriate for the paper. As in the previous years, we also continue to have a strong policy to deal with conflicts of interest and double submission policy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Being a long-running workshop, we are glad to see novel research and publications from the regular BEA authors. At the same time, we are also very happy to welcome our new authors who are publishing their work with BEA for the first time this year. We hope the new authors will become active members of the BEA and the SIGEDU communities. We also hope that with our relatively high acceptance rate, we were able to include a diverse set of papers on a variety of topics and from a wide set of institutions, which is itself a clear indicator of the growing variety of research interests in the field of educational applications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "In addition to oral and poster presentation, BEA 2022 is hosting two invited talks: by Klinton Bicknell, a staff research scientist at Duolingo, where he co-leads the Learning AI Lab, and by Alexandra I. Cristea, Professor, Deputy Head, Director of Research and Head of the Artificial Intelligence in Human Systems research group in the Department of Computer Science at Durham University. As in the previous years, we are also hosting an ambassador paper talk from one of the sister societies from the International Alliance to Advance Learning in the Digital Era (IAALDE). This year, the talk will be given by James Fiacco (Carnegie Mellon University) from the International Society of the Learning Sciences (ISLS).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "This year, a number of authors released their data and code for the benefit of the educational community; we list these resources below. The papers present a wide variety of approaches: from traditional NLP and ML models to the state-of-the-art techniques applied to the educational applications. In addition, it is exciting to see a variety of domains and applications addressed in this year's papers -from language learning to engineering and math education. Last but not least, this year's submissions represent a wide variety of applications developed for languages other than English. Three papers address applications to German: Rietsche et al. introduce an automatic peer-to-peer feedback classification model; Weiss and Meurers present a new state-of-the-art readability assessment model for German L2 readers; and Laarmann-Quante et al. explore acceptability of spelling variants in free-text answers to listening comprehension prompts. In addition, Moner and Volodina introduce a synthetic error dataset for Swedish; Chang et al. perform automatic short answer assessment on texts written in Finnish; while Reyes et al. present a baseline readability model for Cebuano; and Ahumada et al. introduce a tool aimed at supporting educational activities in Mapuzugun. It is exciting to see educational applications developed for such a wide variety of languages, many of which are traditionally considered to be low resource, and we hope to see even more publications addressing other languages in the coming years.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "The BEA 2022 workshop has presentations on a variety of topics, including automated writing evaluation, item generation, readability, discourse analysis, dialogue, annotation, speech, grammatical error detection and correction, feedback, and multi-modal approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Automated Writing Evaluation (AWE) and Grading: Four papers address this topic. Bexte et al. introduce an architecture that efficiently learns a similarity model for content scoring and find that results on the standard ASAP dataset are on par with a BERT-based classification approach. Takano and Ichikawa present a BERT-based automated scoring model for short-answer questions that benefits from pre-training on a large amount of general text data. Chang et al. investigate the grouping of short textual answers, which is approached as a paraphrase identification task and evaluated on a dataset consisting of textual answers from various disciplines written in Finnish. Jalota et al. discuss debiasing approaches to mitigate the impact of an author's L1 on automated CEFR classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Automated Item Generation (AIG): Four papers present various approaches to automated item generation. Zou et al. propose an unsupervised True / False Question Generation approach (TF-QG) that automatically generates questions from a given passage for reading comprehension and show that this approach can generate valuable testing items. Keim and Littman explore a novel approach that leverages large language models to select inline challenges and automatically generate context cloze items that discourage skipping during reading. Rathod et al. propose a new Multi-Question Generation task aimed at generating multiple semantically similar but lexically diverse questions assessing the same concept in reading comprehension and report preliminary results from sampling multiple questions from their model. Heck and Meurers present a tool that builds on a language-aware search engine that helps identify suitable texts for readers and generates practice exercises from authentic texts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Reading and Text Complexity: In addition to the papers that generate testing items for reading comprehension, three more focus on readability assessment models. Reyes et al. present the first baseline readability model for the Cebuano language, the second most used native language in the Philippines with about 27.5 million speakers. Weiss and Meurers present a new state-of-the-art sentence-wise readability assessment model for German L2 readers and make a number of insightful conclusions about this model. Finally, North et al. investigate the performance of binary comparative Lexical Complexity Prediction (LCP) models for complex word identification applied to CompLex 2.0 dataset that was used in SemEval-2021 Task 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Discourse and dialogue: This year, a number of papers focused on various aspects of discourse analysis in educational contexts and on dialogue and conversational systems. Among them, Suresh et al. investigate the feasibility of using enriched contextual cues to improve model performance on the classification of talk moves -discursive strategies used by teachers and students to facilitate conversations in classrooms; they apply their models to the publicly available TalkMoves dataset and report new state of the art over previously published results on this task. Alic et al. propose the task of computationally detecting funneling and focusing questions in classroom discourse, create and release an annotated dataset of teacher utterances, and introduce a range of approaches to differentiate between these questions. Ding et al. explore the role of topic information in student essays from an argument mining perspective and show that, given the same amount of training data, prompt-specific training performs better than cross-prompt training. Fiacco et al. propose a state-of-the-art method for automated analysis of structure and flow of writing and lay a foundation for a generalizable approach to automated writing feedback related to these aspects. Ganesh et al. introduce a new task called response construct tagging (RCT), in which student responses to tailored survey questions are automatically tagged for six constructs measuring transformative experiences and engineering identity of students. Finally, Tyen et al. make an initial foray into adapting open-domain dialogue generation for second language learning, propose and implement decoding strategies that can adjust the difficulty level of the chatbot according to the learner's needs, and evaluate these strategies using judgements from human examiners trained in language education.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Speech: Speech processing and assessment, as usual, are very popular topics at BEA. This year, we have six presentations in these areas. Kwako et al. investigate potential biases of transformer-based models for automated English speech assessment and report that no statistically significant difference that can be related to biases was found in their preliminary experiments. Chen et al. report on their first effort of using deep learning to evaluate L2 learners' reduced form pronunciations, which are useful in training ASR applications. Laarmann-Quante et al. present a corpus study in which they analyze human accepv tability decisions in a high stakes listening test for German; they show that spelling variants are harder to score consistently than other answer variants and examine how the decision can be operationalized using features that could be applied by an automatic scoring system. Skidmore and Moore explore the application of laughter as a feature for incremental disfluency detection in spoken learner English and show that, combined with silence, these features reduce the impact of learner errors on model precision and lead to an overall improvement of model performance. Kyle et al. introduce and release a dependency treebank of spoken L2 English that is annotated with part of speech (Penn POS) tags and syntactic dependencies (Universal Dependencies) and then evaluate the impact of this treebank on training models for POS and UD annotation tasks. The work by Dutta et al. explores the fusion of conversational speech and real-time location in the context of cognitive development in children and provides preliminary evidence that the use of speech technology in educational settings supports early childhood intervention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Grammatical Error Detection (GED) and Correction (GEC): Remarkably, two more papers at BEA are at the intersection of speech and grammatical error correction. Specifically, the work by Lu et al. focuses on the assessment and development of spoken grammatical error correction (SGEC) systems and discusses evaluation metrics, the problem of error propagation in cascaded approaches, and the importance of accurate feedback for learners. In the same vein, Bann\u00f2 and Matassoni address the task of automatically predicting proficiency scores for spoken test responses of English as a second language learners by training models on written data and using the presence of grammatical errors as a feature; they investigate the impact of the feature extractor on spoken proficiency assessment and conclude that their approach can be beneficial for assessing spoken language proficiency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Feedback: The topic of feedback generation in learning environments also attracted a lot of attention this year. For intstance, Jia et al. present a new paradigm, which they call incremental zero-shot learning (IZSL), to tackle the problem of lacking sufficient historical data for the task of peer assessment, which is an effective pedagogical strategy for delivering feedback to learners. Rietsche et al. present an automatic classification model to measure sentence specificity in written peer-to-peer feedback; they train and test their models on student feedback texts written in German, and their results suggest that specificity of feedback sentences weakly correlates with perceptions of helpfulness. Wambsganss et al. present a novel tool to support and engage English language learners with feedback on the quality of their argument structures, which automatically detects claim-premise structures and provides visual feedback to learners to prompt them to repair any broken argumentation structures.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Annotation: Moner and Volodina generate a synthetic error dataset for Swedish by replicating errors observed in the authentic error-annotated dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Multi-modal approaches: Loginova and Benoit propose an adaptation of NLP techniques from the field of machine comprehension to the area of mathematical educational data mining; they show that incorporating syntactic information can improve performance in predicting exercise difficulty. To conclude, we would like to thank everyone who showed interest and submitted a paper this year -all of the authors for their contributions, the members of the Program Committee for their valuable feedback and thoughtful reviews, and everyone who is attending the workshop. We hope to see many of you at the workshop, both remotely and in person in Seattle. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Ambassador paper presentation from the 2021 Annual Meeting of the ISLS (International Society of the Learning Sciences), a member society of the IAALDE (International Alliance to Advance Learning in the Digital Era)Abstract: Transactivity is a valued collaborative process, which has been associated with elevated learning gains, collaborative product quality, and knowledge transfer within teams. Dynamic forms of collaboration support have made use of real time monitoring of transactivity, and automation of its analysis has been affirmed as valuable to the field. Early models were able to achieve high reliability within restricted domains. More recent approaches have achieved a level of generality across learning domains. In this study, we investigate generalizability of models developed primarily in computer science courses to a new student population, namely, masters students in a leadership course, where we observe strikingly different patterns of transactive exchange than in prior studies. This difference prompted both a reformulation of the coding standards and innovation in the modeling approach, both of which we report on here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Automatic scoring of short answers using justification cues estimated by BERT Shunya Takano and Osamu Ichikawa", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Automatic scoring of short answers using justification cues estimated by BERT Shunya Takano and Osamu Ichikawa . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Mitigating Learnerese Effects for CEFR Classification Rricha Jalota", |
| "authors": [ |
| { |
| "first": "Van", |
| "middle": [], |
| "last": "Sas", |
| "suffix": "" |
| }, |
| { |
| "first": "Huiyan", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": ".", |
| "middle": [ |
| "." |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitigating Learnerese Effects for CEFR Classification Rricha Jalota, Peter Bourgonje, Jan Van Sas and Huiyan Huang . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "27 Generation of Synthetic Error Data of Verb Order Errors for Swedish Judit Casademont Moner and Elena Volodina", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [ |
| ". . . . ." |
| ], |
| "last": "Mohammed Hussien", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohammed Hussien . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 Generation of Synthetic Error Data of Verb Order Errors for Swedish Judit Casademont Moner and Elena Volodina . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "An Incremental Zero-shot Learning Approach for Assessing Peer Feedback Comments Qinjin Jia", |
| "authors": [ |
| { |
| "first": "Yupeng", |
| "middle": [], |
| "last": "Starting From Zero ;", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": ".", |
| "middle": [ |
| "." |
| ], |
| "last": "Gehringer", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Starting from Zero\": An Incremental Zero-shot Learning Approach for Assessing Peer Feedback Com- ments Qinjin Jia, Yupeng Cao and Edward Gehringer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "On Assessing and Developing Spoken 'Grammatical Error Correction' Systems Yiting Lu, Stefano Bann\u00f2 and Mark Gales", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "On Assessing and Developing Spoken 'Grammatical Error Correction' Systems Yiting Lu, Stefano Bann\u00f2 and Mark Gales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Automatic True/False Question Generation for Educational Purpose", |
| "authors": [ |
| { |
| "first": "Bowei", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| }, |
| { |
| "first": "Pengfei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Liangming", |
| "middle": [], |
| "last": "Pan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ai", |
| "middle": [ |
| ". ." |
| ], |
| "last": "Ti Aw", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Automatic True/False Question Generation for Educational Purpose Bowei Zou, Pengfei Li, Liangming Pan and Ai Ti Aw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Fine-tuning Transformers with Additional Context to Classify Discursive Moves in Mathematics Classrooms Abhijit Suresh", |
| "authors": [ |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Jacobs", |
| "suffix": "" |
| }, |
| { |
| "first": "Margaret", |
| "middle": [], |
| "last": "Perkoff", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "H" |
| ], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tamara", |
| "middle": [], |
| "last": "Sumner", |
| "suffix": "" |
| }, |
| { |
| "first": ".", |
| "middle": [ |
| "." |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fine-tuning Transformers with Additional Context to Classify Discursive Moves in Mathematics Clas- srooms Abhijit Suresh, Jennifer Jacobs, Margaret Perkoff, James H. Martin and Tamara Sumner . . . . . 71", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Cross-corpora experiments of automatic proficiency assessment and error detection for spoken English Stefano Bann\u00f2 and Marco Matassoni", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cross-corpora experiments of automatic proficiency assessment and error detection for spoken English Stefano Bann\u00f2 and Marco Matassoni . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Activity focused Speech Recognition of Preschool Children in Early Childhood Classrooms Satwik Dutta", |
| "authors": [ |
| { |
| "first": "Dwight", |
| "middle": [], |
| "last": "Irvin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jay", |
| "middle": [], |
| "last": "Buzhardt", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "L" |
| ], |
| "last": "John", |
| "suffix": "" |
| }, |
| { |
| "first": ".", |
| "middle": [ |
| "." |
| ], |
| "last": "Hansen", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Activity focused Speech Recognition of Preschool Children in Early Childhood Classrooms Satwik Dutta, Dwight Irvin, Jay Buzhardt and John H.L. Hansen . . . . . . . . . . . . . . . . . . . . . . . . . . . 92", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Structural information in mathematical formulas for exercise difficulty prediction: a comparison of NLP representations Ekaterina Loginova and Dries Benoit", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Structural information in mathematical formulas for exercise difficulty prediction: a comparison of NLP representations Ekaterina Loginova and Dries Benoit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The Specificity and Helpfulness of Peer-to-Peer Feedback in Higher Education Roman Rietsche", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "The Specificity and Helpfulness of Peer-to-Peer Feedback in Higher Education Roman Rietsche, Andrew Caines, Cornelius Schramm, Dominik Pf\u00fctze and Paula Buttery . . . 107", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Similarity-Based Content Scoring -How to Make S-BERT Keep Up With BERT Marie Bexte", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Similarity-Based Content Scoring -How to Make S-BERT Keep Up With BERT Marie Bexte, Andrea Horbach and Torsten Zesch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Don't Drop the Topic -The Role of the Prompt in Argument Identification in Student Writing Yuning Ding, Marie Bexte and Andrea Horbach", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Don't Drop the Topic -The Role of the Prompt in Argument Identification in Student Writing Yuning Ding, Marie Bexte and Andrea Horbach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Argumentative Writing", |
| "authors": [ |
| { |
| "first": "; Andrew", |
| "middle": [], |
| "last": "Alen App", |
| "suffix": "" |
| }, |
| { |
| "first": "Paula", |
| "middle": [], |
| "last": "Caines", |
| "suffix": "" |
| }, |
| { |
| "first": ".", |
| "middle": [ |
| "." |
| ], |
| "last": "Buttery", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "ALEN App: Argumentative Writing Support To Foster English Language Learning Thiemo Wambsganss, Andrew Caines and Paula Buttery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Assessing sentence readability for German language learners with broad linguistic modeling or readability formulas: When do linguistic insights make a difference? Zarah Weiss and Detmar Meurers", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Assessing sentence readability for German language learners with broad linguistic modeling or reada- bility formulas: When do linguistic insights make a difference? Zarah Weiss and Detmar Meurers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Parametrizable exercise generation from authentic texts: Effectively targeting the language means on the curriculum Tanja Heck and Detmar Meurers", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Parametrizable exercise generation from authentic texts: Effectively targeting the language means on the curriculum Tanja Heck and Detmar Meurers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Selecting Context Clozes for Lightweight Reading Compliance Greg Keim and Michael Littman",
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Keim", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Littman", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Selecting Context Clozes for Lightweight Reading Compliance Greg Keim and Michael Littman . . . 167",
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "'Meet me at the ribary' - Acceptability of spelling variants in free-text answers to listening comprehension prompts Ronja Laarmann-Quante, Leska Schwarz, Andrea Horbach and Torsten Zesch",
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "'Meet me at the ribary' - Acceptability of spelling variants in free-text answers to listening comprehension prompts Ronja Laarmann-Quante, Leska Schwarz, Andrea Horbach and Torsten Zesch . . . 173",
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "An Evaluation of Binary Comparative Lexical Complexity Models Kai North, Marcos Zampieri and Matthew Shardlow", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "An Evaluation of Binary Comparative Lexical Complexity Models Kai North, Marcos Zampieri and Matthew Shardlow . . . 197",
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Toward Automatic Discourse Parsing of Student Writing Motivated by Neural Interpretation James Fiacco", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Fiacco", |
| "suffix": "" |
| }, |
| { |
| "first": "Shiyan", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Adamson", |
| "suffix": "" |
| }, |
| { |
| "first": "Carolyn", |
| "middle": [], |
| "last": "Ros\u00e9", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Toward Automatic Discourse Parsing of Student Writing Motivated by Neural Interpretation James Fiacco, Shiyan Jiang, David Adamson and Carolyn Ros\u00e9 . . . 204",
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Educational Multi-Question Generation for Reading Comprehension Manav", |
| "authors": [ |
| { |
| "first": "Manav", |
| "middle": [], |
| "last": "Rathod", |
| "suffix": "" |
| }, |
| { |
| "first": "Tony", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Stasaski", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Educational Multi-Question Generation for Reading Comprehension Manav Rathod, Tony Tu and Katherine Stasaski . . . 216",
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Computationally Identifying Funneling and Focusing Questions in Classroom Discourse Sterling Alic, Dorottya Demszky, Zid Mancenido, Jing Liu, Heather Hill and Dan Jurafsky", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Computationally Identifying Funneling and Focusing Questions in Classroom Discourse Sterling Alic, Dorottya Demszky, Zid Mancenido, Jing Liu, Heather Hill and Dan Jurafsky . . . 224",
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Towards an open-domain chatbot for language practice Gladys Tyen, Mark Brenchley, Andrew Caines and Paula Buttery", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Towards an open-domain chatbot for language practice Gladys Tyen, Mark Brenchley, Andrew Caines and Paula Buttery . . . 234",
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Response Construct Tagging: NLP-Aided Assessment for Engineering Education Ananya Ganesh", |
| "authors": [ |
| { |
| "first": "Ananya", |
| "middle": [], |
| "last": "Ganesh", |
| "suffix": "" |
| }, |
| { |
| "first": "Hugh", |
| "middle": [], |
| "last": "Scribner", |
| "suffix": "" |
| }, |
| { |
| "first": "Jasdeep", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Hertzberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Response Construct Tagging: NLP-Aided Assessment for Engineering Education Ananya Ganesh, Hugh Scribner, Jasdeep Singh, Katherine Goodman, Jean Hertzberg and Katharina Kann . . . 250",
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Towards Automatic Short Answer Assessment for Finnish as a Paraphrase Retrieval Task Li-Hsin Chang", |
| "authors": [ |
| { |
| "first": "Li-Hsin", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenna", |
| "middle": [], |
| "last": "Kanerva", |
| "suffix": "" |
| }, |
| { |
| "first": "Filip", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Towards Automatic Short Answer Assessment for Finnish as a Paraphrase Retrieval Task Li-Hsin Chang, Jenna Kanerva and Filip Ginter . . . 262",
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Reyes et al. open-source the code and data used to develop the baseline readability model for the Cebuano language. The language tool presented by Ahumada et al. for Mapuzugun is also publicly available through an online interface in both Mapuzugun and Spanish. Tyen et al. release the code and demo of their controllable-complexity chatbot. Moner and Volodina release for public use fakeDaLAJ (S-FinV), a synthetic error dataset generated using error labels based on linguistic analysis of real-life error-annotated learner data. Kyle et al. make their SL2E Treebank publicly available for noncommercial purposes. Rietsche et al. release both the code and the annotated data used for their peer-to-peer feedback evaluation model. Bexte et al. make their code for S-BERT similarity-based content scoring publicly available. Ding et al. release their code and clustering results for argument identification in student writing. Rathod et al. release the code for their Multi-Question Generation model for reading comprehension. Annotated data and code for distinguishing between funneling and focusing questions are also released by Alic et al. Finally, Ganesh et al. release the data, code and models for the Response Construct Tagging task.",
| "uris": null, |
| "type_str": "figure" |
| } |
| } |
| } |
| } |