| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:51:35.014985Z" |
| }, |
| "title": "Applied Language Technology: NLP for the Humanities", |
| "authors": [ |
| { |
| "first": "Tuomo", |
| "middle": [], |
| "last": "Hiippala", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Helsinki", |
| "location": { |
| "addrLine": "Unioninkatu 40 B", |
| "postBox": "P.O. Box 24", |
| "postCode": "00014" |
| } |
| }, |
| "email": "tuomo.hiippala@helsinki.fi" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This contribution describes a two-course module that seeks to provide humanities majors with a basic understanding of language technology and its applications using Python. The learning materials consist of interactive Jupyter Notebooks and accompanying YouTube videos, which are openly available with a Creative Commons licence.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This contribution describes a two-course module that seeks to provide humanities majors with a basic understanding of language technology and its applications using Python. The learning materials consist of interactive Jupyter Notebooks and accompanying YouTube videos, which are openly available with a Creative Commons licence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Language technology is increasingly applied in the humanities (Hinrichs et al., 2019). This contribution describes a two-course module named Applied Language Technology, which seeks to provide humanities majors with a basic understanding of language technology and practical skills needed to apply language technology using Python. The module is intended to empower the students by showing that language technology is both accessible and applicable to research in the humanities.", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 85, |
| "text": "(Hinrichs et al., 2019)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The learning materials seek to address two major pedagogical challenges. The first challenge concerns terminology: in my experience, the greatest hurdle in teaching language technology to humanities majors is not 'technophobia' (\u00d6hman, 2019, 480), but the technical jargon that acts as a gatekeeper to knowledge in the field (cf. Maton, 2014). This issue is fundamental to teaching students with no previous experience of programming. To exemplify, beginners in my class occasionally interpret the term 'code' in phrases such as \"Write your code here\" as a numerical code needed to unlock an exercise, as opposed to a command written in a programming language. For this reason, the learning materials introduce concepts in Python and language technology in layperson terms and gradually build up the vocabulary needed to advance beyond the learning materials.", |
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 246, |
| "text": "(\u00d6hman, 2019, 480)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 330, |
| "end": 342, |
| "text": "Maton, 2014)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pedagogical Approach", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The second challenge involves the diversity of the humanities, which covers a broad range of disciplines with different epistemological and methodological standpoints. This results in considerable differences in previous knowledge among the students: linguistics majors may be more likely to have been exposed to computational methods and tools than their counterparts majoring in philosophy or art history. Some students may have taken an introductory course in Java or Python, whereas others have never used a command line interface before. To address this issue, the learning materials are based on Jupyter Notebooks, which provide an environment familiar to most students - a web browser - for interactive programming. The command line is used for interacting with GitHub, which is used to distribute the learning materials and exercises.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pedagogical Approach", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The module also emphasises peer and collaborative learning: 20% of the course grade is awarded for activity on the course discussion forum hosted on GitHub. All activity - both asking and answering questions - counts positively towards the final grade. This allows the students with previous knowledge to help onboard newcomers. According to student feedback, this also fosters a sense of community. The discussion forum is also used to discuss weekly readings, which focus on ethics (e.g. Hovy and Spruit, 2016; Bird, 2020) and the relationship between language technology and humanities (e.g. Kuhn, 2019; Nguyen et al., 2020) . These discussions are guided by questions that encourage the students to draw on their disciplinary backgrounds, which exposes them to a wide range of perspectives on language technology and the humanities.", |
| "cite_spans": [ |
| { |
| "start": 490, |
| "end": 512, |
| "text": "Hovy and Spruit, 2016;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 513, |
| "end": 524, |
| "text": "Bird, 2020)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 595, |
| "end": 606, |
| "text": "Kuhn, 2019;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 607, |
| "end": 627, |
| "text": "Nguyen et al., 2020)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pedagogical Approach", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The learning materials cover two seven-week courses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Materials", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The first course starts by introducing rich, plain and structured text and character encodings, followed by file input/output in Python, common data structures for manipulating textual data and regular expressions. The course then exemplifies basic NLP tasks, such as tokenisation, part-of-speech tagging, syntactic parsing and sentence segmentation by using the spaCy 3.0 natural language processing library (Honnibal et al., 2020) to process examples in the English language. This is followed by an introduction to basic metrics for evaluating the performance of language models. The course concludes with a brief tour of the pandas library for storing and manipulating data (McKinney, 2010).", |
| "cite_spans": [ |
| { |
| "start": 409, |
| "end": 432, |
| "text": "(Honnibal et al., 2020)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 676, |
| "end": 692, |
| "text": "(McKinney, 2010)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Materials", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The second course begins with an introduction to processing diverse languages using the Stanza library (Qi et al., 2020), shows how Stanza can be interfaced with spaCy, and how the resulting annotations can be searched for linguistic patterns using spaCy. The course then introduces word embeddings to provide the students with a general understanding of this technique and its role in modern NLP, which is also increasingly applied in research in the humanities. The course finishes with an exploration of discourse-level annotations in the Georgetown University Multilayer Corpus (Zeldes, 2017), which showcases the CoNLL-U annotation schema.", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 120, |
| "text": "(Qi et al., 2020)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 582, |
| "end": 596, |
| "text": "(Zeldes, 2017)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Materials", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The extent to which the students meet the learning objectives is measured through weekly exercises. The weekly assignments are distributed through GitHub Classroom and automatically graded using nbgrader 1 , which allows generating feedback files with comments that are then pushed back to the student repositories on GitHub. The exercises are also revisited in weekly walkthrough sessions, which allow the students to ask questions about the assignments. The students must also complete a final assignment for each course: the first course concludes with a group project that involves preparing a set of data for further analysis, whereas the second course finishes with a longer individual assignment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Materials", |
| "sec_num": "3" |
| }, |
| { |
| "text": "All learning materials are openly available under a Creative Commons CC BY 4.0 licence at the addresses provided in the following section. Access to the weekly exercises is available on request.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Materials", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The learning materials are based on Jupyter Notebooks (Kluyver et al., 2016) hosted in their own GitHub repository. 2 This repository constitutes a submodule of a separate repository for the website, which is hosted on ReadTheDocs. 3 The notebooks containing the learning materials are rendered into HTML using the MyST-NB parser from the Executable Books project. 4 This allows keeping the learning materials synchronised, and enables the users to clone the notebooks without the source code for the website. MyST-NB also adds links to Binder (Project Jupyter et al., 2018) to each notebook on the ReadTheDocs website, which enables anyone to execute and explore the code.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 76, |
| "text": "(Kluyver et al., 2016)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 544, |
| "end": 574, |
| "text": "(Project Jupyter et al., 2018)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Technical Stack", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The Jupyter Notebooks provide a familiar environment for interactively exploring Python and the various libraries used, whereas the ReadTheDocs website is meant to be used as a reference work. Both media embed videos from a YouTube channel associated with the courses. 5 These short explanatory videos exploit the features of the underlying audiovisual medium, such as overlaid arrows, animations and other modes of presentation, to explain the topics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Technical Stack", |
| "sec_num": "4" |
| }, |
| { |
| "text": "This contribution has introduced a two-course module that aims to teach humanities majors to apply language technology using Python. Targeted at a student population with diverse disciplinary backgrounds and levels of previous experience, the learning materials use multiple media and layperson terms to build up the vocabulary needed to engage with Python and language technology. This is complemented by the use of a familiar environment - a web browser - for interactive programming using Jupyter Notebooks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://nbgrader.readthedocs.io", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Decolonising speech and language technology", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bird", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "3504--3519", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.coling-main.313" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven Bird. 2020. Decolonising speech and lan- guage technology. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 3504-3519.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Language technology for digital humanities: introduction to the special issue", |
| "authors": [ |
| { |
| "first": "Erhard", |
| "middle": [], |
| "last": "Hinrichs", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Hinrichs", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "Trippel", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erhard Hinrichs, Marie Hinrichs, Sandra K\u00fcbler, and Thorsten Trippel. 2019. Language technology for digital humanities: introduction to the special issue. 2 https://github.com/Applied-Language-Technology/notebooks 3 https://applied-language-technology.readthedocs.io 4 https://executablebooks.org 5 https://www.youtube.com/c/AppliedLanguageTechnology", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "spaCy: Industrial-strength Natural Language Processing in Python", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Honnibal", |
| "suffix": "" |
| }, |
| { |
| "first": "Ines", |
| "middle": [], |
| "last": "Montani", |
| "suffix": "" |
| }, |
| { |
| "first": "Sofie", |
| "middle": [], |
| "last": "Van Landeghem", |
| "suffix": "" |
| }, |
| { |
| "first": "Adriane", |
| "middle": [], |
| "last": "Boyd", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.5281/zenodo.1212303" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python. DOI: 10.5281/zenodo.1212303.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The social impact of natural language processing", |
| "authors": [ |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Shannon", |
| "middle": [ |
| "L" |
| ], |
| "last": "Spruit", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "591--598", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-2096" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dirk Hovy and Shannon L. Spruit. 2016. The so- cial impact of natural language processing. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 591-598. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Jupyter Notebooks -a publishing format for reproducible computational workflows", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Kluyver", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Ragan-Kelley", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "P\u00e9rez", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Granger", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Bussonnier", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Frederic", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Kelley", |
| "suffix": "" |
| }, |
| { |
| "first": "Jessica", |
| "middle": [], |
| "last": "Hamrick", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Grout", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Corlay", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Ivanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dami\u00e1n", |
| "middle": [], |
| "last": "Avila", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Positioning and Power in Academic Publishing: Players, Agents and Agendas", |
| "volume": "", |
| "issue": "", |
| "pages": "87--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Kluyver, Benjamin Ragan-Kelley, Fer- nando P\u00e9rez, Brian Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica Hamrick, Jason Grout, Sylvain Corlay, Paul Ivanov, Dami\u00e1n Avila, Safia Abdalla, Carol Willing, and Jupyter development team. 2016. Jupyter Notebooks -a publishing format for reproducible computational workflows. In Positioning and Power in Academic Publishing: Players, Agents and Agendas, pages 87-90, Netherlands. IOS Press.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Computational text analysis within the humanities: How to combine working practices from the contributing fields?", |
| "authors": [ |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Kuhn", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Language Resources and Evaluation", |
| "volume": "53", |
| "issue": "", |
| "pages": "565--602", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonas Kuhn. 2019. Computational text analysis within the humanities: How to combine working practices from the contributing fields? Language Resources and Evaluation, 53(4):565-602.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Knowledge and Knowers: Towards a Realist Sociology of Education", |
| "authors": [ |
| { |
| "first": "Karl", |
| "middle": [], |
| "last": "Maton", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Routledge, New York and London", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karl Maton. 2014. Knowledge and Knowers: Towards a Realist Sociology of Education. Routledge, New York and London.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Data structures for statistical computing in Python", |
| "authors": [ |
| { |
| "first": "Wes", |
| "middle": [], |
| "last": "Mckinney", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 9th Python in Science Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "51--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wes McKinney. 2010. Data structures for statistical computing in Python. In Proceedings of the 9th Python in Science Conference, pages 51-56.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "How we do things with words: Analyzing text as social and cultural data", |
| "authors": [ |
| { |
| "first": "Dong", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Liakata", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Dedeo", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Eisenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mimno", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebekah", |
| "middle": [], |
| "last": "Tromble", |
| "suffix": "" |
| }, |
| { |
| "first": "Jane", |
| "middle": [], |
| "last": "Winters", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Frontiers in Artificial Intelligence", |
| "volume": "3", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.3389/frai.2020.00062" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dong Nguyen, Maria Liakata, Simon DeDeo, Jacob Eisenstein, David Mimno, Rebekah Tromble, and Jane Winters. 2020. How we do things with words: Analyzing text as social and cultural data. Frontiers in Artificial Intelligence, 3.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Teaching computational methods to humanities students", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "\u00d6hman", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Digital Humanities in the Nordic Countries 4th Conference", |
| "volume": "2364", |
| "issue": "", |
| "pages": "479--494", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily \u00d6hman. 2019. Teaching computational methods to humanities students. In Proceedings of the Digi- tal Humanities in the Nordic Countries 4th Confer- ence, volume 2364 of CEUR Workshop Proceedings, pages 479-494.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Binder 2.0 -Reproducible, interactive, sharable environments for science at scale", |
| "authors": [ |
| { |
| "first": "Project", |
| "middle": [], |
| "last": "Jupyter", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Bussonnier", |
| "suffix": "" |
| }, |
| { |
| "first": "Jessica", |
| "middle": [], |
| "last": "Forde", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeremy", |
| "middle": [], |
| "last": "Freeman", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Granger", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Head", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Holdgraf", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Kelley", |
| "suffix": "" |
| }, |
| { |
| "first": "Gladys", |
| "middle": [], |
| "last": "Nalvarte", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Osheroff", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pacer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuvi", |
| "middle": [], |
| "last": "Panda", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Perez", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 17th Python in Science Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "113--120", |
| "other_ids": { |
| "DOI": [ |
| "10.25080/Majora-4af1f417-011" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Project Jupyter, Matthias Bussonnier, Jessica Forde, Jeremy Freeman, Brian Granger, Tim Head, Chris Holdgraf, Kyle Kelley, Gladys Nalvarte, Andrew Osheroff, M Pacer, Yuvi Panda, Fernando Perez, Benjamin Ragan Kelley, and Carol Willing. 2018. Binder 2.0 -Reproducible, interactive, sharable en- vironments for science at scale. In Proceedings of the 17th Python in Science Conference, pages 113 - 120.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Stanza: A Python natural language processing toolkit for many human languages", |
| "authors": [ |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuhao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuhui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Bolton", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "101--108", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-demos.14" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The GUM corpus: Creating multilayer resources in the classroom", |
| "authors": [ |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Zeldes", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Language Resources and Evaluation", |
| "volume": "51", |
| "issue": "", |
| "pages": "581--612", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s10579-016-9343-x" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amir Zeldes. 2017. The GUM corpus: Creating mul- tilayer resources in the classroom. Language Re- sources and Evaluation, 51(3):581-612.", |
| "links": null |
| } |
| }, |
| "ref_entries": {} |
| } |
| } |