Daoze committed on
Commit 038ff73 · verified · 1 Parent(s): 673a0b8

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/4MLUG42Q6Xh/Initial_manuscript_md/Initial_manuscript.md +193 -0
  2. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/4rOjJXIcU0/Initial_manuscript_md/Initial_manuscript.md +111 -0
  3. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/4rOjJXIcU0/Initial_manuscript_tex/Initial_manuscript.tex +99 -0
  4. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/62qGdDGSyqr/Initial_manuscript_md/Initial_manuscript.md +193 -0
  5. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/62qGdDGSyqr/Initial_manuscript_tex/Initial_manuscript.tex +137 -0
  6. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/P1NDIAZBwq/Initial_manuscript_md/Initial_manuscript.md +105 -0
  7. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/P1NDIAZBwq/Initial_manuscript_tex/Initial_manuscript.tex +101 -0
  8. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/Qf691VTesDa/Initial_manuscript_md/Initial_manuscript.md +175 -0
  9. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/Qf691VTesDa/Initial_manuscript_tex/Initial_manuscript.tex +121 -0
  10. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/QoB8QGu5ZSL/Initial_manuscript_md/Initial_manuscript.md +213 -0
  11. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/QoB8QGu5ZSL/Initial_manuscript_tex/Initial_manuscript.tex +199 -0
  12. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/WXdE6lC7-n/Initial_manuscript_md/Initial_manuscript.md +127 -0
  13. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/WXdE6lC7-n/Initial_manuscript_tex/Initial_manuscript.tex +115 -0
  14. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/Z1kgcLVfha/Initial_manuscript_md/Initial_manuscript.md +143 -0
  15. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/Z1kgcLVfha/Initial_manuscript_tex/Initial_manuscript.tex +128 -0
  16. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/dNh4TiaOLy_/Initial_manuscript_md/Initial_manuscript.md +153 -0
  17. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/dNh4TiaOLy_/Initial_manuscript_tex/Initial_manuscript.tex +205 -0
  18. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/huJogZLN2t/Initial_manuscript_md/Initial_manuscript.md +193 -0
  19. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/huJogZLN2t/Initial_manuscript_tex/Initial_manuscript.tex +141 -0
  20. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/ijuM-MVwVEk/Initial_manuscript_md/Initial_manuscript.md +167 -0
  21. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/ijuM-MVwVEk/Initial_manuscript_tex/Initial_manuscript.tex +164 -0
  22. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/k9jaVBHot_/Initial_manuscript_md/Initial_manuscript.md +197 -0
  23. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/k9jaVBHot_/Initial_manuscript_tex/Initial_manuscript.tex +149 -0
  24. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/knwKgaspObQ/Initial_manuscript_md/Initial_manuscript.md +121 -0
  25. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/knwKgaspObQ/Initial_manuscript_tex/Initial_manuscript.tex +89 -0
  26. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/m28wDC7B3kx/Initial_manuscript_md/Initial_manuscript.md +213 -0
  27. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/m28wDC7B3kx/Initial_manuscript_tex/Initial_manuscript.tex +139 -0
  28. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/mdiNjHHdoQ7/Initial_manuscript_md/Initial_manuscript.md +165 -0
  29. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/mdiNjHHdoQ7/Initial_manuscript_tex/Initial_manuscript.tex +81 -0
  30. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/pJWKWGNvoLi/Initial_manuscript_md/Initial_manuscript.md +153 -0
  31. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/pJWKWGNvoLi/Initial_manuscript_tex/Initial_manuscript.tex +125 -0
  32. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/q8N3WvWvt4X/Initial_manuscript_md/Initial_manuscript.md +149 -0
  33. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/q8N3WvWvt4X/Initial_manuscript_tex/Initial_manuscript.tex +228 -0
  34. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/yFPqbprG2Qb/Initial_manuscript_md/Initial_manuscript.md +189 -0
  35. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/yFPqbprG2Qb/Initial_manuscript_tex/Initial_manuscript.tex +137 -0
  36. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/z-N7JMHjLO/Initial_manuscript_md/Initial_manuscript.md +131 -0
  37. papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/z-N7JMHjLO/Initial_manuscript_tex/Initial_manuscript.tex +119 -0
  38. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/-0xPrt01VXD/Initial_manuscript_md/Initial_manuscript.md +368 -0
  39. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/-0xPrt01VXD/Initial_manuscript_tex/Initial_manuscript.tex +287 -0
  40. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/0X9O6VcYe_/Initial_manuscript_md/Initial_manuscript.md +212 -0
  41. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/0X9O6VcYe_/Initial_manuscript_tex/Initial_manuscript.tex +131 -0
  42. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/37zyB5yuPXi/Initial_manuscript_md/Initial_manuscript.md +169 -0
  43. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/37zyB5yuPXi/Initial_manuscript_tex/Initial_manuscript.tex +208 -0
  44. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/4LIJshtHlnk/Initial_manuscript_md/Initial_manuscript.md +323 -0
  45. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/4LIJshtHlnk/Initial_manuscript_tex/Initial_manuscript.tex +263 -0
  46. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/4nEHDnoLAmK/Initial_manuscript_md/Initial_manuscript.md +335 -0
  47. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/4nEHDnoLAmK/Initial_manuscript_tex/Initial_manuscript.tex +356 -0
  48. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/5DrUl9nn5y/Initial_manuscript_md/Initial_manuscript.md +210 -0
  49. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/5DrUl9nn5y/Initial_manuscript_tex/Initial_manuscript.tex +362 -0
  50. papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/5fjIxS5Kahh/Initial_manuscript_md/Initial_manuscript.md +655 -0
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/4MLUG42Q6Xh/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,193 @@
+ # Staying Ahead in the MOOC-Era by Teaching Innovative AI Courses
+
+ ## Anonymous Authors ${}^{1}$
+
+ ## Abstract
+
+ As a result of the rapidly advancing digital transformation of teaching, universities have started to face major competition from Massive Open Online Courses (MOOCs). Universities thus have to set themselves apart from MOOCs in order to justify the added value of three to five-year degree programs to prospective students. In this paper, we show how we address this challenge at Anonymous University in ML and AI. We first share our best practices and present two concrete courses: Computer Vision and Innovation Management for AI. We then demonstrate how these courses contribute to Anonymous University's ability to differentiate itself from MOOCs (and other universities).
+
+ ## 1. Introduction
+
+ Over the past decade, access to knowledge has fundamentally changed. This process began around 2011, when Stanford professors Andrew Ng, Sebastian Thrun and others made their AI courses available to everyone through online courses (Ng & Widom, 2014). This type of course is often referred to as a Massive Open Online Course (MOOC). Popular MOOC platforms include Coursera, Udacity, edX, Udemy and others. Until 2011, AI could generally only be studied in a limited number of available university courses or from books or papers. Furthermore, those resources were mainly available in developed countries. As a consequence, potential learners in emerging markets could not easily access respective resources. Due to MOOCs, the so-called "democratization of AI knowledge" has begun to fundamentally change the way we learn and has given rise to new AI superpowers, such as China (Lee, 2018).
+
+ We argue in Section 2 that MOOCs have now given many universities and professors serious competitors. It can also be assumed that this competition will intensify even further in the coming years and decades. In order to justify the added value of three to five-year degree programs to prospective students, universities must differentiate themselves from MOOCs in some way or other.
+
+ In this paper, we show how we address this challenge at Anonymous University (ANU) in our AI courses. ANU is located in a rural anonymous region and has a diverse student body with different educational and cultural backgrounds. Concretely, we present two courses (that focus on ML and include slightly broader AI topics when needed):
+
+ - Computer Vision (Section 3)
+
+ - Innovation Management for AI (Section 4)
+
+ In addition to teaching theory, we put emphasis on real-world problems, hands-on projects and taking advantage of hardware. The latter, in particular, is usually not directly possible in MOOCs. In this paper, we share our best practices. We also show how our courses contribute to ANU's ability to differentiate itself from MOOCs and other universities.
+
+ ## 2. MOOCs Have Become Game Changers
+
+ In addition to courses on AI, a variety of other courses on almost any topic have emerged on various MOOC platforms over the past decade. Those courses enable learners to study high-quality content from renowned professors, remotely, at their own speed and at little or no cost. Furthermore, collaborations with renowned universities and industry partners have emerged. Some MOOC platforms offer career coaching, too. Companies have also launched collaborative programs with MOOC platforms to train their employees.
+
+ There are plenty of examples of professionals who have found new, high-paying jobs in various industries within a short period of time after completing hands-on MOOCs, for example in the news (Lohr, 2020) or on LinkedIn. This is particularly true for IT, a sector that has traditionally been open to lateral entrants and autodidacts. In recent years, MOOCs have therefore become steadily more established. This trend has also been further consolidated during the COVID-19 pandemic, as millions of people around the globe have been undergoing retraining (Bylieva et al., 2020).
+
+ In summary, universities will be facing the following challenges in the coming years:
+
+ ---
+
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
+
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
+
+ ---
+
+ 1. In just a few years, many very good high school graduates could decide against the traditional completion of a university degree program. They would rather acquire all necessary practical skills through MOOCs within a few months or perhaps a year. In parallel, they could also gain practical experience by working part-time or founding startups. As a consequence, they could quickly get excellent jobs and outperform traditional university graduates on the job market.
+
+ 2. Due to demographic change (Magnus, 2012) and the potential lack of qualified new students, many universities in developed countries may become unable to maintain their current size. In view of the return on investment, politicians or administrators may thus sooner or later start thinking about closing individual departments or even entire universities.
+
+ 3. Many (non-computer science) degree programs have so far only taught traditional content, with little or no link to the digital transformation and automation through AI. If this important content continues to go unnoticed in education, those degree programs will almost certainly train their students for unemployment.
+
+ Universities must face up to these challenges, which, however, also provide many opportunities. As a result, universities could emerge even stronger from that competition. Most importantly, universities must differentiate themselves from MOOCs. In the following sections, we show how we address these challenges by teaching cutting-edge real-world content and taking advantage of physical university infrastructure. We also actively promote our courses through social media, press releases and other channels in order to attract more prospective students. In addition, our courses are open to students of other departments, including electrical engineering, mechanical engineering, healthcare or business. This allows us to support them in learning the tools of the 21st century that they need in order to actively contribute to the digital transformation of their disciplines.
+
+ ## 3. Computer Vision Course
+
+ Popular MOOC platforms offer a number of excellent courses ${}^{1}$ on computer vision (CV). In order to survive in international competition, the content of a university CV course today must meaningfully differentiate itself from those by offering unique selling propositions. Based on these principles, we started teaching this novel course in 2020 at ANU. Note that there is a separate deep learning course taught by a different professor in our department. Most students take both courses in parallel and have previously taken an introductory machine learning course.
+
+ ### 3.1. Content
+
+ We provide students with a broad and deep background in CV. That is why we discuss both traditional and modern neural network-based CV methods. In practice, successful CV applications tend to combine both approaches (O'Mahony et al., 2019), in particular when only a limited number of training examples are available (Ahmed & Islam, 2020). Concretely, we discuss the following topics in the first half of the term:
+
+ - Introduction: applications, computational models for vision, perception and prior knowledge, levels of vision, how humans see
+
+ - Pixels and filters: digital cameras, image representations, noise, filters, edge detection
+
+ - Regions of images: segmentation, perceptual grouping, Gestalt theory, segmentation approaches, image compression by learning clusters
+
+ - Feature detection: RANSAC, Hough transform, Harris corner detector
+
+ - Object recognition: challenges, template matching, histograms, machine learning
+
+ - Convolutional neural networks: neural networks, loss functions and optimization, backpropagation, convolutions and pooling, hyperparameters, AutoML, efficient training, selected architectures
+
+ - Image sequence processing: motion, tracking image sequences, temporal models, Kalman filter, correspondence problem, optical flow, recurrent neural networks
+
+ - Foundations of mobile robotics: robot motion, sensors, probabilistic robotics, particle filters, SLAM
+
+ - Advanced topics: 3D vision, generative adversarial networks, self-supervised learning
+
+ In the second half of the term, students work in groups of 1 to 4 members on a CV project.
+
+ ### 3.2. Unique Selling Propositions
+
+ This course differentiates itself from other CV courses, in particular MOOCs, as follows:
+
+ 1. Most CV courses taught on MOOC platforms or at universities only include smaller, isolated problems that can be implemented on almost any commercially available computer or by using cloud services. This course includes a larger real-world project in the second half of the term instead. Students choose a CV project of their choice, in which they also apply agile project management and use respective tools. In order to provide students with the real added value of a physical university course, they are highly encouraged to use the NVIDIA Jetbot platform depicted in Figure 1.
+
+ ---
+
+ ${}^{1}$ These include, but are not limited to, the following courses: http://www.udacity.com/course/computer-vision-nanodegree--nd891, http://www.coursera.org/learn/computer-vision-basics.
+
+ ---
+
+ ![01963a94-8c20-7934-912b-f1e0f0ceffa3_2_206_313_601_812_0.jpg](images/01963a94-8c20-7934-912b-f1e0f0ceffa3_2_206_313_601_812_0.jpg)
+
+ Figure 1. The NVIDIA Jetbot mobile robot platform used in the projects. Find more information at http://www.github.com/NVIDIA-AI-IOT/jetbot.
+
+ It possesses a camera and efficiently executes CV algorithms on its NVIDIA Jetson GPU. By using this platform, students can not only better understand the course content, but also experience how these algorithms behave in the real world. During the COVID-19 pandemic, they could take the robot kits home in order to work on their projects remotely.
+
+ 2. We cover challenging content that is more complex than in most available MOOCs: We first reviewed CV courses at introductory and advanced levels of international top universities, including Stanford, MIT and Imperial College London. We then selected the topics that we find most relevant to solving real-world problems. Furthermore, we present these topics in a more understandable way and include additional revisions of the underlying concepts. In this way, we also make this course more accessible to students of other disciplines.
+
+ ### 3.3. Outcomes and Students’ Feedback
+
+ 19 students of different degree programs signed up for the first iteration of this course. In total, they implemented 10 projects in groups of 1 to 3 members. About half of the projects used an NVIDIA Jetbot. Those projects included object following and simultaneous localization and mapping (SLAM). The other projects included a face mask detector and a clothes classifier. We find the coin counter depicted in Figure 2 particularly worth mentioning, though.
+
+ ![01963a94-8c20-7934-912b-f1e0f0ceffa3_2_955_477_583_463_0.jpg](images/01963a94-8c20-7934-912b-f1e0f0ceffa3_2_955_477_583_463_0.jpg)
+
+ Figure 2. Coin counter. Image courtesy: Anonymous student 1 and anonymous student 2.
+
+ It first applies object segmentation and detection to a photo that contains an arbitrary number of coins. It then aggregates the amounts of the individual coins. The underlying ML model also handles multiple currencies in the same photo. In the project presentation, the group also discussed how they solved the challenge of collecting a data set of coins that includes a variety of angles, conditions, reflections and currencies.
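The coin-counting pipeline described above (segment individual coins, classify each one, sum the amounts) can be sketched in a minimal, self-contained form. This is an illustrative reconstruction, not the students' actual code: `segment_coins` finds coins as bright connected components, and the radius-based `classify_coin` with its hypothetical `DENOMINATIONS` table stands in for the group's trained ML model, which also distinguished multiple currencies.

```python
import numpy as np
from collections import deque

# Hypothetical radius -> value table (value in cents); a stand-in for the
# students' trained classifier, for illustration only.
DENOMINATIONS = [(10, 5), (16, 50), (22, 100)]  # (approx. radius in px, value)

def segment_coins(gray, thresh=128):
    """Segment bright blobs (coins on a dark background) as 4-connected components."""
    mask = gray > thresh
    seen = np.zeros(mask.shape, dtype=bool)
    areas = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Breadth-first flood fill over the blob starting at (sy, sx).
                queue, size = deque([(sy, sx)]), 0
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                areas.append(size)
    return areas  # one pixel-area per detected coin

def classify_coin(area):
    """Map a blob to a coin value via its equivalent-circle radius."""
    radius = (area / np.pi) ** 0.5
    return min(DENOMINATIONS, key=lambda d: abs(d[0] - radius))[1]

def count_coins(gray):
    """Return (number of coins, total value) for a grayscale image."""
    areas = segment_coins(gray)
    return len(areas), sum(classify_coin(a) for a in areas)
```

On a synthetic 100x100 image with two bright disks of radii 10 and 22, `count_coins` detects two coins and sums their looked-up values. A real system would replace both the thresholding and the radius lookup with learned segmentation/detection and classification models.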
+
+ We received quantitative and qualitative feedback from students through a formal course evaluation. The overall feedback was a 1.3 on a scale of 1 to 5, where 1 is the best. However, a few students suggested a longer introduction to deep learning frameworks for the first half of this course. They would then have been able to start working on their projects more quickly in the second half. In the second iteration, we have therefore added an extended introduction to deep learning frameworks.
+
+ ## 4. Innovation Management for AI Course
+
+ In recent years, many companies have started to invest in ML and AI to stay competitive. However, the sad truth is that some $80\%$ of all AI projects fail or do not result in any financial value (Nimdzi Insights, 2019). That is a serious concern because there is clearly an acute need in industry for experts who have a comprehensive knowledge of what needs to be done so that AI adds value to businesses. In our view, one of the underlying causes is the way AI is taught in universities, as most courses cover only purely methodological and engineering aspects of AI. We are convinced that professors need to address this problem by also enabling students to think in a broader and business-oriented sense of AI innovation management. At ANU, we therefore started to teach this novel and internationally unique course in 2020.
+
+ ### 4.1. Content
+
+ We discuss a range of challenges, both technical and managerial, that companies typically face when using AI (Glauner, 2020). We first look back at some of the historic promises, successes and failures of AI. We then contrast them to some of the advances of the deep learning era and contemporary challenges. Concretely, we discuss the following topics:
+
+ - Introduction: how AI is changing our society, selected examples of successful and unsuccessful AI projects and transformations
+
+ - History and promises of AI: Dartmouth conference, AI from 1955 to 2011, AI winters
+
+ - Deep learning era: breakthroughs, DeepMind, promises and hypes, no free lunch theorem, AI innovation in China, technological singularity
+
+ - Contemporary challenges: regulation, explainable AI, ethics
+
+ - AI transformation of companies: opportunities, challenges, best practices, roles, data strategy, data governance
+
+ We offer this course as an intensive course. On day one, we teach the content above. On the following two days, students work on a case study on how to successfully implement AI in a company of their choice. They present the outcomes of their case study on day four.
+
+ ### 4.2. Unique Selling Propositions
+
+ This course differentiates itself from other courses, in particular MOOCs, as follows:
+
+ 1. During an intensive online search for related courses, we only found introductions to AI for managers${}^{2}$. However, we did not find any business-related courses for AI experts. In this course, we bridge that gap.
+
+ 2. Students learn respective best practices along the entire AI value chain and how these lead to productively deployed applications that add real value. They work on a case study on how specific AI use cases are implemented in companies, what challenges may be encountered and how they may be solved.
+
+ ### 4.3. Outcomes and Students’ Feedback
+
+ 21 students of different degree programs signed up for the first iteration of this course. In total, they worked on 11 case studies in groups of 2 to 4 members.
+
+ Most of the students who took this course are computer scientists studying in a part-time continuing education AI degree program. We received very positive feedback from them as they could include in their case studies some of the current challenges they face at work. A few business students also took this course as they were eager to learn more about AI. They contributed their in-depth business knowledge to the case study presentations. This turned out to be a valuable experience for the computer scientists.
+
+ We could not, however, quantitatively assess this course yet. Our university's course evaluation scheme does not include intensive courses. We are planning to address this issue in the future with an unofficial course evaluation.
+
+ ## 5. Conclusions
+
+ Universities are facing major challenges as a result of the rapidly advancing digital transformation of teaching. These include in particular competition from Massive Open Online Courses (MOOCs). This transformation is further being accelerated by the demographic change in developed countries and could result in a dwindling number of potential students in the near future. However, if universities address those challenges swiftly, ambitiously and sustainably, they can even emerge stronger from this situation by providing better and modern courses to their students. In this paper, we showed how we address those challenges in AI education at Anonymous University. Concretely, we teach innovative and unique courses on computer vision and innovation management for AI. We shared our best practices and how our courses contribute to Anonymous University's ability to differentiate itself from MOOCs and other universities.
+
+ Both courses are currently being offered again. The number of students that signed up has more than doubled. Our courses are thus positively perceived by students.
+
+ ## References
+
+ Ahmed, M. and Islam, A. N. Deep Learning: Hope or Hype. Annals of Data Science, 7:427-432, 2020.
+
+ Bylieva, D., Bekirogullari, Z., Lobatyuk, V., and Nam, T. Analysis of the consequences of the transition to online learning on the example of MOOC philosophy during the COVID-19 pandemic. Humanities & Social Sciences Reviews, 8(4):1083-1093, 2020.
+
+ Glauner, P. Unlocking the Power of Artificial Intelligence for Your Business. In Innovative Technologies for Market Leadership, pp. 45-59. Springer, 2020.
+
+ ---
+
+ ${}^{2}$ These include, but are not limited to, the following courses: http://www.udacity.com/course/ai-for-business-leaders--nd054, http://www.udemy.com/course/intro-ai-for-managers/.
+
+ ---
+
+ Lee, K.-F. AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin Harcourt, 2018.
+
+ Lohr, S. Remember the MOOCs? After Near-Death, They're Booming. The New York Times, 2020. URL http://www.nytimes.com/2020/05/26/technology/moocs-online-learning.html.
+
+ Magnus, G. The Age of Aging: How Demographics are Changing the Global Economy and Our World. John Wiley & Sons, 2012.
+
+ Ng, A. and Widom, J. Origins of the Modern MOOC (xMOOC). Technical report, Stanford, 2014. URL http://ai.stanford.edu/~ang/papers/mooc14-OriginsOfModernMOOC.pdf.
+
+ Nimdzi Insights. Artificial Intelligence: Localization, Winners, Losers, Heroes, Spectators, and You. Technical report, Pactera EDGE, 2019. URL http://www.nimdzi.com/wp-content/uploads/2019/06/Nimdzi-AI-whitepaper.pdf.
+
+ O'Mahony, N., Campbell, S., Carvalho, A., Harapanahalli, S., Hernandez, G. V., Krpalkova, L., Riordan, D., and Walsh, J. Deep learning vs. traditional computer vision. In Science and Information Conference, pp. 128-144. Springer, 2019.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/4rOjJXIcU0/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,111 @@
1
+ # Teaching machine learning through end-to-end decision making
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Advances in machine learning and data science hold the potential to greatly optimize the overall energy sector, and prevent the worst outcomes of anthropogenic climate change. However, despite the urgent need for trained energy data scientists and the presence of a number of technically challenging issues that need to be tackled, the sector continues to suffer from a personnel shortage and remains mired in outdated technology. In many programs, energy engineers continue to graduate without even rudimentary programming skills, let alone knowledge of data science. This paper highlights key findings from an introductory course on machine learning and optimization designed specifically for energy engineering students. The course employs a number of teaching aids, which we hope will be useful for the broader community as well. The course was developed in a pan-European setting, supported by four different European universities as part of a broader roadmap to overhaul energy education.
8
+
9
+ ## 1. Introduction
10
+
11
+ Despite its importance, energy engineers typically do not get even an introductory course to data science or machine learning. This oversight is especially tedious since thesis students and fresh graduates have to learn these concepts in an adhoc manner every year. The omission is also becoming glaringly obvious with the increasing amounts of energy-related data being collected, thanks to IoT devices and smart meters etc. which are enabling countless use cases on both the building (Kazmi and Driesen, 2020) and the grid scale (Zhang et al., 2018). Most energy companies (and engineers) are ill-equipped to cope with this data, much less create additional value from it.
12
+
13
+ To address this shortcoming, EIT InnoEnergy, a body of the European Union, initiated a working group, constituted of members from multiple universities (KU Leuven, KTH, UPC and Grenoble INP) in four different European countries (Belgium, Sweden, Spain and France) in 2019. The mandate of the working group was to harmonize data science education across the participating universities, reduce replication work in course design, create grounds for broader collaboration, and develop a long-term roadmap for data science education in energy programs in European universities.
14
+
15
+ A note on terminology is relevant here. The working group converged to the use of data science as an umbrella term that incorporates the entire data pipeline (including machine learning, but also other closely related topics including data acquisition, exploratory data analysis, optimal decision making and ethics etc.). Furthermore, due to the applied nature of the target audience (i.e. energy engineers), an emphasis was placed on case-based teaching. This was intended to help students better understand how the algorithms are applied in practice, as well as what problems do they solve concretely.
16
+
17
+ One of the first deliverables arising from the working group's activies was an introductory course on data science for energy engineers that was delivered virtually for the first time in 2020 to students from over ten European universities. A second run is planned for summer of 2021. In this paper, we highlight some of the key findings in designing such a course from scratch. We also discuss lessons we learned while teaching machine learning to engineering students with diverse backgrounds.
18
+
19
+ ## 2. Course audience
20
+
21
+ The course, titled 'Data science for energy engineers', is intended for graduate-level (MS or early PhD) engineering students. The audience is thus, in many ways, much less diverse than students enrolled in a typical introductory data science or artificial intelligence course. An overwhelming majority, if not all, of the students are engineering students, belonging mostly to the electrical, mechanical or energy departments. However, within this group, there is still considerable diversity owing to two key factors. Firstly, there are typically a number of differences between electrical and mechanical engineering curricula: electrical engineers tend to have much more exposure to programming and to fields closely related to machine learning, such as signal processing. Secondly, individual student background also contributes considerably to diversity. For instance, high school students in many countries (can) already study programming, while even some electrical engineering graduates may make it through their degree having written only a single 'hello world' program in Matlab. This lack of programming knowledge has been identified as one of two main barriers for students learning machine learning and data science (Sulmont et al., 2019).
22
+
23
+ ---
24
+
25
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
26
+
27
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
28
+
29
+ ---
30
+
31
+ Consequently, the course was designed to cater to the needs of a variety of energy engineering students, irrespective of their background. While the course, at 3 ECTS, is too short to include a detailed introduction to programming, students were provided with relevant documentation and material. Furthermore, the level of programming and software engineering knowledge required to complete the course was intentionally kept low (i.e. concepts such as object-oriented programming, repository control etc. were alluded to, but not dealt with).
36
+
37
+ ## 3. Intended learning outcomes
38
+
39
+ The course was crafted in a way that a number of concrete learning outcomes could be realized, while simultaneously introducing students to the end-to-end pipeline of data-driven projects ranging from data acquisition and preprocessing, to modelling and inference, to actionable decision making. Keeping in mind course participant backgrounds, these learning outcomes can be formalized as:
40
+
41
+ 1. Students should be able to load, as well as visualize and understand various energy datasets by performing exploratory data analysis on them.
42
+
43
+ 2. Students should be able to understand core machine learning algorithms for modelling and forecasting, and how (and when) to apply these in practical settings.
44
+
45
+ 3. Students should be able to formulate and solve optimization problems to perform optimal control and design in a number of different energy related settings.
46
+
47
+ 4. Students should be able to see the bigger picture surrounding individual algorithms in energy data science, and understand the complexity behind real world deployment.
48
+
49
+ ## 4. Course content and design
50
+
51
+ To realize these learning objectives, we designed the course as a series of five lectures, accompanied by four practice sessions based on Jupyter notebooks. The individual lectures cover (1) an introduction to energy data science and practical use cases, (2) exploratory data analysis, (3) modelling and forecasting using machine learning, (4) optimal decision making, and (5) advanced concepts in machine learning and optimization. The entire course is delivered using a single dataset on electricity demand and prices, which allows students to work on it in an end-to-end manner. In this section, we highlight two different aids in course design that turned out to be especially useful while teaching data science to a mathematically inclined, but non-expert audience. In the next section, we discuss three teaching aids in course delivery.
54
+
55
+ ### 4.1. Teaching with end-to-end use cases
56
+
57
+ Rather than introducing students to machine learning with toy datasets such as the Iris or MNIST dataset, we designed the course using domain specific examples in the form of a coherent use case on energy demand response. This enabled us to discuss the entire life cycle of a real-world project in practice, of which building a machine learning or optimization model is just one step. Towards this end, we discuss the real world case of an energy prosumer (i.e. a residential household with local storage and/or generation) interested in optimizing their energy demand and generation using electric batteries. The course discusses a number of optimization objectives, ranging from performing arbitrage (i.e. using an optimal controller that charges the battery when electricity is cheaper or less carbon-intensive, and vice versa) to peak shaving (i.e. reducing the maximum power demand on the grid) and maximum self-consumption of local solar generation. Additional constraints can be introduced here, based on user behaviour and grid conditions. We also take care to emphasize that the same algorithms can be used to achieve a variety of objectives, ranging from energy optimization to cost optimization to emissions optimization.
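The arbitrage objective described above can be made concrete with a toy calculation. All prices, demand values and the hand-picked schedule below are invented for illustration, and the battery model ignores losses and power limits:

```python
# Toy illustration of battery arbitrage: buy energy when it is cheap,
# use it to cover demand when it is expensive, and compare the bill
# against having no battery at all. All numbers are invented.

prices = [0.10, 0.08, 0.12, 0.30, 0.35, 0.25]  # EUR/kWh per hour
demand = [1.0] * 6                              # kWh drawn by the household

def cost(schedule):
    """Total bill when schedule[t] kWh is charged (+) or discharged (-)
    on top of the household demand in hour t."""
    return sum(p * (d + s) for p, d, s in zip(prices, demand, schedule))

no_battery = [0.0] * 6
# Buy 1 kWh extra in each of the two cheapest hours, use the stored
# energy to cover demand in the two most expensive ones.
arbitrage = [1.0, 1.0, 0.0, -1.0, -1.0, 0.0]

print(f"no battery: {cost(no_battery):.2f} EUR")
print(f"arbitrage:  {cost(arbitrage):.2f} EUR")
```

The same `cost` function can be reused for the emissions or peak-shaving objectives by swapping the price vector for a carbon-intensity or penalty signal.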
58
+
59
+ This case study requires students to create forecasts for user behavior, as well as potentially electricity prices when these are not known. This is done using machine learning models, but emphasis is placed on benchmarking the developed models using simpler methods (both naive and simple time series models). The temporal structure of the problem also allows us to introduce complex, real-world challenges such as anomalous data and concept drift etc. Another benefit of using a dataset that spans an entire year is that it reinforces the concept of statistical significance. While a naive forecast and/or controller may outperform a more sophisticated counterpart on a given day, the superior algorithm should outperform over a long enough period of time.
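The benchmarking principle can be sketched in a few lines on synthetic (not course) data: a model is only worth its complexity if it beats naive baselines such as a 24-hour persistence forecast. The demand signal and its parameters below are invented:

```python
# Compare two naive baselines on an invented hourly demand series with a
# daily (24 h) cycle plus a slow upward trend: 24-hour persistence versus
# a constant mean forecast.

import math

demand = [10 + 3 * math.sin(2 * math.pi * t / 24) + 0.01 * t for t in range(96)]

def mae(pred, actual):
    """Mean absolute error between two equally long series."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

actual = demand[24:]                   # hours 24..95
persistence = demand[:-24]             # "same value as 24 hours ago"
mean_baseline = [sum(demand[:24]) / 24] * len(actual)

print("persistence MAE:  ", round(mae(persistence, actual), 3))
print("mean baseline MAE:", round(mae(mean_baseline, actual), 3))
```

Because the series has a strong daily cycle, persistence is already hard to beat; any machine learning model the students train is evaluated against it first.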
60
+
61
+ Posing the problem in such a relatable manner also allows students to easily see the monetary costs of prediction errors from machine learning models, and whether complex machine learning algorithms actually improve real-world results when compared to simpler baselines. Finally, it is important to note that the methodology is generalizable across case studies. For instance, in a partner university in the working group, a largely similar approach was followed with detecting wind turbine failures, where the students were again asked to make the link between prediction errors and real world costs.
62
+
63
+ ![01963a8e-25ef-7580-98d5-a78e2ed6c2e0_2_158_197_687_256_0.jpg](images/01963a8e-25ef-7580-98d5-a78e2ed6c2e0_2_158_197_687_256_0.jpg)
64
+
65
+ Figure 1. Introducing structure through understandable examples, both in machine learning and optimal decision making
66
+
67
+ ### 4.2. Teaching using algorithmic structure
68
+
69
+ A second recurring theme throughout the course was showing students the existence of different algorithmic archetypes that can be used to solve the same problem (rather than superficially introducing a large number of machine learning algorithms). The archetypes were chosen to illustrate how structure inherent in problems can reduce computational complexity, while simultaneously improving the quality of the solution.
70
+
71
+ For instance, in machine learning it is possible to fit a regression curve to a training dataset using a variety of approaches. To demonstrate this, we start off with a conventional scatter plot (where the independent variable is on the $\mathrm{x}$ -axis and the dependent variable is on the y-axis). After this, we show students that it is possible to fit infinitely many (linear) curves through the point cloud. These curves can be quickly evaluated, but it should already be obvious to the students by this point that the prospects of attempting a large number of solutions to determine the best one is terribly wasteful. As the next step, we discuss a variety of gradient-based and gradient-free algorithms which can be used to arrive at the optimal curve in far fewer iterations. From here, we informally introduce the notion of convexity, and discuss how to optimally solve this particular problem analytically.
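This progression of archetypes can be sketched on synthetic data: random guessing of lines versus the exact closed-form least-squares fit that convexity enables (the gradient-based middle step is omitted here). The noise level and search budget are arbitrary choices:

```python
# Fit y = a*x + b to noisy synthetic data in two ways: evaluate many
# random candidate lines and keep the best, then compute the ordinary
# least-squares solution in closed form.

import random
random.seed(0)

xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]  # true line: y = 2x + 1

def sse(a, b):
    """Sum of squared errors of the line y = a*x + b on the data."""
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

# Archetype 1: try many random lines and keep the best one (wasteful).
candidates = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
a_rnd, b_rnd = min(candidates, key=lambda ab: sse(*ab))

# Archetype 3: ordinary least squares in closed form (exact, one shot).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a_ols = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b_ols = my - a_ols * mx

print("best random line SSE:", round(sse(a_rnd, b_rnd), 2))
print("closed-form OLS SSE: ", round(sse(a_ols, b_ols), 2))
```

One thousand random evaluations still lose to a single closed-form computation, which is exactly the point the lecture makes about exploiting structure.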
72
+
73
+ In optimal decision making, the same analogs exist. To solve a sequential decision making problem, such as when and how much to charge or discharge an electrical battery given a price signal, it is possible to repeatedly sample the solution space (in a brute force manner) to come up with a set of candidate solutions. Here, the solution space is a vector, and each element in this vector represents the control action at a particular time index. These solutions can likewise be quickly evaluated to determine how well they perform, and the best solution can be selected. This approach has the benefit that it provides an intuitive exploration of how to formulate an optimization problem formally without getting the students bogged down in algorithmic complexities. However, as before, the wastefulness of this approach should become quickly obvious. Next, we introduce gradient free optimizers as a potential solution to speed up discovery of optimal solutions. While these should easily outperform brute force methods, their limitations should also become visible with increasing complexity of the problem to be solved (e.g. by increasing the time horizon of the optimization problem or addition of new constraints etc.). Finally, students are introduced to convex optimization since the problem under consideration is convex. This allows students to solve the problem exactly at a far smaller computational footprint.
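A rough sketch of this contrast, using an invented toy battery model (2 kWh capacity, ±1 kWh per hour, no losses, end-of-day empty), pits brute-force sampling against a hand-picked bang-bang schedule that exploits the linear structure:

```python
# Brute-force sampling of battery schedules versus exploiting structure.
# The battery model and all numbers are invented for illustration.

import random
random.seed(0)

prices = [0.10, 0.08, 0.12, 0.30, 0.35, 0.25]  # EUR/kWh per hour
T = len(prices)

def cost(u):
    """Energy cost of schedule u (positive = charge, negative = discharge)."""
    return sum(p * a for p, a in zip(prices, u))

def feasible(u):
    """State of charge must stay in [0, 2] kWh and end (almost) empty."""
    soc = 0.0
    for a in u:
        soc += a
        if soc < 0.0 or soc > 2.0:
            return False
    return abs(soc) < 1e-9

# Brute force: sample random schedules; the last action crudely repairs the
# energy balance (it may breach the +/-1 bound -- good enough for a toy).
best_u, best_cost = None, float("inf")
for _ in range(20000):
    u = [random.uniform(-1, 1) for _ in range(T - 1)]
    u.append(-sum(u))
    if feasible(u) and cost(u) < best_cost:
        best_u, best_cost = u, cost(u)

# Exploiting structure: the problem is linear, so the optimum is bang-bang --
# charge fully in the cheapest hours, discharge in the dearest (chosen by hand).
exact = [1.0, 1.0, 0.0, -1.0, -1.0, 0.0]

print("best sampled cost:", round(best_cost, 3))
print("structured cost:  ", round(cost(exact), 3))
```

Even 20,000 random samples do not reach the bang-bang optimum, while a convex solver (or, here, inspection of the prices) finds it exactly; in the course the same point is made with an off-the-shelf convex optimization library.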
74
+
75
+ These two examples are meant to highlight the interconnection of learning and optimization theory to engineering students that may have had a background in control. Furthermore, the same approach can also be expanded to other relevant, important topics such as hyperparameter optimization etc. This approach was preferred over introducing students to a large number of machine learning and optimization algorithms, because of our experiences in the field, where even experienced energy practitioners and software developers have trouble leveraging structure in problems and choosing the right tool for the problem at hand. More concretely, over the years, we have seen software engineering colleagues, without a background in learning or optimization theory, applying (variants of) brute force search to determine optimal control actions in extremely high dimensional settings, even when the problem could be readily and exactly solved with convex optimization techniques. On the other end of the spectrum, we have seen colleagues trying to apply convex solvers to non-convex optimization problems, without fully appreciating the complexity of the challenge. The course is structured in a way to address these two key challenges in practice.
76
+
77
+ ## 5. Course delivery
78
+
79
+ Beyond course content, we also explored different ways to facilitate course delivery. This was motivated by our objective to enable students without a programming background to quickly get up to speed. Some of these were also motivated by our intention to set up a fully functional hybrid learning experience for students scattered in technical programs across Europe.
80
+
81
+ ### 5.1. Using interactive notebooks
82
+
83
+ We used interactive Jupyter notebooks extensively in the teaching process to complement lectures covering theory. In the course feedback, this was appreciated by all participants who provided it, for a number of reasons. Firstly, students were able to quickly apply the theoretical concepts they learned in practice. Secondly, having access to code provided a jump start of sorts, and students were able to achieve a lot more than they would have otherwise. This also considerably allayed our fears about asking non-proficient programmers to read and understand existing Python code.
84
+
85
+ ### 5.2. Using cloud infrastructure
86
+
87
+ The next question in course delivery was where to host the Jupyter notebooks, i.e. on the students' workstations or the cloud. We decided to go for a cloud-based approach after a thorough analysis of the advantages and disadvantages of such frameworks. Initially, we approached this with a proprietary cloud-based solution, which has since been replaced by Deepnote. The biggest benefit of such a setup is that it enables students without a programming background to focus on learning algorithms and programming, without having to figure out details such as installing libraries and setting up a functional development environment. Cloud frameworks also enable quick feedback to students as the instructor can seamlessly check in and see their code (without the need for git commits etc.). Another advantage with such cloud-based platforms is automatic scaling of computational resources, which means students with fewer compute resources are not at a disadvantage.
88
+
89
+ ### 5.3. Using programming aids
90
+
91
+ Even though the level of programming required for the course is not very high, we used a number of programming aids to help students better utilize course materials. These include Cambridge University's introductory Python programming notebooks, as well as a limited number of Datacamp licenses that students could borrow to quickly get up to speed on Python programming. Students were also given pointers to other helpful resources on programming in general and Python in particular.
92
+
93
+ ## 6. Learner evaluation
94
+
95
+ We had over 100 participants in the first run of the course in summer 2020. A majority of these were students enrolled in EIT InnoEnergy MS programs, with energy practitioners and PhD researchers forming a sizable minority. Likewise, the same course offered only to students at KU Leuven had 13 MS students from the electrical/energy engineering department.
96
+
97
+ Students following these courses were evaluated on the intended learning outcomes using participation in: (1) a home-made forecasting competition on Kaggle, and (2) a group project where the students were asked to extend what they learned in the course and apply it to a real-world challenge. More concretely, the forecasting competition provides students with electricity demand data for a neighborhood over two months, which they are then asked to forecast for the next week. The choice of methodology is left open to students, although they are required to explain it to their peers in a presentation. Benchmarking students' performance against predefined baselines on Kaggle also serves as an excellent way to intervene early and help struggling students. Some students only explored algorithms introduced in the lectures, while others went beyond these to also experiment with neural networks and tree-based methods. Likewise, students work in teams to complete the project, where the requirement is to extend optimal decision making to also consider design choices (in the lectures, the students are only introduced to optimal control). This makes use of the same algorithms, but is conceptually harder since it interleaves learning and hierarchical optimal decision making.
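The design-plus-control idea behind the group project can be caricatured as an outer search over battery sizes wrapping a drastically simplified inner operational problem. All prices, investment costs and lifetimes below are invented:

```python
# Toy sketch of hierarchical decision making: choose a battery size (design)
# whose operational savings (control) justify its amortised investment.

prices = [0.10, 0.08, 0.12, 0.30, 0.35, 0.25]  # EUR/kWh over a toy day

def daily_saving(capacity_kwh):
    """Inner problem, caricatured: cycle the battery once per day between
    the cheapest and the dearest hour (perfect foresight, no losses), with
    useful throughput capped at 3 kWh/day by household demand."""
    return min(capacity_kwh, 3.0) * (max(prices) - min(prices))

def annual_profit(capacity_kwh, cost_per_kwh=100.0, lifetime_years=10):
    """Outer objective: operational savings minus amortised investment."""
    capex_per_year = capacity_kwh * cost_per_kwh / lifetime_years
    return 365 * daily_saving(capacity_kwh) - capex_per_year

sizes = [0.0, 1.0, 2.0, 4.0, 8.0]
best_size = max(sizes, key=annual_profit)
print("best battery size (kWh):", best_size)
```

In the actual project the inner problem is the full optimal-control formulation from the lectures, which is why the interleaving is conceptually harder than either level alone.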
98
+
99
+ ## 7. Conclusions
100
+
101
+ The course, now in its second year and third iteration, has been used to introduce energy engineering students to data science. Many of these students indicated in a pre-course survey that they did not have any background in data science, or even programming. However, most thought it was extremely important for their future career objectives, and were therefore intrinsically motivated to learn more about data science in general, and machine learning in particular.
102
+
103
+ While the course provides students with a useful introduction to the broad field of data science encompassing exploratory data analysis, machine learning and optimal decision making, it is still only a high level overview. One of the next steps for the working group is to harmonize this introductory course across universities, and develop follow-ups on more advanced topics.
104
+
105
+ ## References
106
+
107
+ Hussain Kazmi and Johan Driesen. 2020. Automated Demand Side Management in Buildings. In Artificial Intelligence Techniques for a Scalable Energy Transition. Springer, 45-76.
108
+
109
+ Elisabeth Sulmont, Elizabeth Patitsas, and Jeremy R Cooperstock. 2019. Can You Teach Me To Machine Learn? In Proceedings of the 50th ACM Technical Symposium on Computer Science Education. 948-954.
110
+
111
+ Yang Zhang, Tao Huang, and Ettore Francesco Bompard. 2018. Big data analytics in smart grids: a review. Energy Informatics 1, 1 (2018), 1-24.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/4rOjJXIcU0/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,99 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § TEACHING MACHINE LEARNING THROUGH END-TO-END DECISION MAKING
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Advances in machine learning and data science hold the potential to greatly optimize the overall energy sector, and prevent the worst outcomes of anthropogenic climate change. However, despite the urgent need for trained energy data scientists and the presence of a number of technically challenging issues that need to be tackled, the sector continues to suffer from a personnel shortage and remains mired in outdated technology. In many programs, energy engineers continue to graduate without even rudimentary programming skills, let alone knowledge of data science. This paper highlights key findings from an introductory course on machine learning and optimization designed specifically for energy engineering students. The course employs a number of teaching aids, which we hope will be useful for the broader community as well. The course was developed in a pan-European setting, supported by four different European universities as part of a broader roadmap to overhaul energy education.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ Despite its importance, energy engineers typically do not receive even an introductory course on data science or machine learning. This oversight is especially problematic since thesis students and fresh graduates have to learn these concepts in an ad-hoc manner every year. The omission is also becoming glaringly obvious with the increasing amounts of energy-related data being collected, thanks to IoT devices, smart meters and the like, which are enabling countless use cases at both the building (Kazmi and Driesen, 2020) and the grid scale (Zhang et al., 2018). Most energy companies (and engineers) are ill-equipped to cope with this data, much less create additional value from it.
12
+
13
+ To address this shortcoming, EIT InnoEnergy, a body of the European Union, initiated a working group, constituted of members from multiple universities (KU Leuven, KTH, UPC and Grenoble INP) in four different European countries (Belgium, Sweden, Spain and France) in 2019. The mandate of the working group was to harmonize data science education across the participating universities, reduce replication work in course design, create grounds for broader collaboration, and develop a long-term roadmap for data science education in energy programs in European universities.
14
+
15
+ A note on terminology is relevant here. The working group converged on the use of data science as an umbrella term that incorporates the entire data pipeline (including machine learning, but also closely related topics such as data acquisition, exploratory data analysis, optimal decision making and ethics). Furthermore, due to the applied nature of the target audience (i.e. energy engineers), an emphasis was placed on case-based teaching. This was intended to help students better understand how the algorithms are applied in practice, as well as what problems they concretely solve.
16
+
17
+ One of the first deliverables arising from the working group's activities was an introductory course on data science for energy engineers, delivered virtually for the first time in 2020 to students from over ten European universities. A second run is planned for the summer of 2021. In this paper, we highlight some of the key findings in designing such a course from scratch. We also discuss lessons we learned while teaching machine learning to engineering students with diverse backgrounds.
18
+
19
+ § 2. COURSE AUDIENCE
20
+
21
+ The course, titled 'Data science for energy engineers', is intended for graduate-level (MS or early PhD) engineering students. The audience is thus, in many ways, much less diverse than students enrolled in a typical introductory data science or artificial intelligence course. An overwhelming majority, if not all, of the students are engineering students, belonging mostly to the electrical, mechanical or energy departments. However, within this group, there is still considerable diversity owing to two key factors. Firstly, there are typically a number of differences between electrical and mechanical engineering curricula: electrical engineers tend to have much more exposure to programming and to fields closely related to machine learning, such as signal processing. Secondly, individual student background also contributes considerably to diversity. For instance, high school students in many countries (can) already study programming, while even some electrical engineering graduates may make it through their degree having written only a single 'hello world' program in Matlab. This lack of programming knowledge has been identified as one of two main barriers for students learning machine learning and data science (Sulmont et al., 2019).
22
+
23
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
24
+
25
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
26
+
27
+ Consequently, the course was designed to cater to the needs of a variety of energy engineering students, irrespective of their background. While the course, at 3 ECTS, is too short to include a detailed introduction to programming, students were provided with relevant documentation and material. Furthermore, the level of programming and software engineering knowledge required to complete the course was intentionally kept low (i.e. concepts such as object-oriented programming, repository control etc. were alluded to, but not dealt with).
32
+
33
+ § 3. INTENDED LEARNING OUTCOMES
34
+
35
+ The course was crafted in a way that a number of concrete learning outcomes could be realized, while simultaneously introducing students to the end-to-end pipeline of data-driven projects ranging from data acquisition and preprocessing, to modelling and inference, to actionable decision making. Keeping in mind course participant backgrounds, these learning outcomes can be formalized as:
36
+
37
+ 1. Students should be able to load, as well as visualize and understand various energy datasets by performing exploratory data analysis on them.
38
+
39
+ 2. Students should be able to understand core machine learning algorithms for modelling and forecasting, and how (and when) to apply these in practical settings.
40
+
41
+ 3. Students should be able to formulate and solve optimization problems to perform optimal control and design in a number of different energy related settings.
42
+
43
+ 4. Students should be able to see the bigger picture surrounding individual algorithms in energy data science, and understand the complexity behind real world deployment.
44
+
45
+ § 4. COURSE CONTENT AND DESIGN
46
+
47
+ To realize these learning objectives, we designed the course as a series of five lectures, accompanied by four practice sessions based on Jupyter notebooks. The individual lectures cover (1) an introduction to energy data science and practical use cases, (2) exploratory data analysis, (3) modelling and forecasting using machine learning, (4) optimal decision making, and (5) advanced concepts in machine learning and optimization. The entire course is delivered using a single dataset on electricity demand and prices, which allows students to work on it in an end-to-end manner. In this section, we highlight two different aids in course design that turned out to be especially useful while teaching data science to a mathematically inclined, but non-expert audience. In the next section, we discuss three teaching aids in course delivery.
50
+
51
+ § 4.1. TEACHING WITH END-TO-END USE CASES
52
+
53
+ Rather than introducing students to machine learning with toy datasets such as the Iris or MNIST dataset, we designed the course using domain specific examples in the form of a coherent use case on energy demand response. This enabled us to discuss the entire life cycle of a real-world project in practice, of which building a machine learning or optimization model is just one step. Towards this end, we discuss the real world case of an energy prosumer (i.e. a residential household with local storage and/or generation) interested in optimizing their energy demand and generation using electric batteries. The course discusses a number of optimization objectives, ranging from performing arbitrage (i.e. using an optimal controller that charges the battery when electricity is cheaper or less carbon-intensive, and vice versa) to peak shaving (i.e. reducing the maximum power demand on the grid) and maximum self-consumption of local solar generation. Additional constraints can be introduced here, based on user behaviour and grid conditions. We also take care to emphasize that the same algorithms can be used to achieve a variety of objectives, ranging from energy optimization to cost optimization to emissions optimization.
54
+
55
+ This case study requires students to create forecasts for user behavior, as well as potentially electricity prices when these are not known. This is done using machine learning models, but emphasis is placed on benchmarking the developed models using simpler methods (both naive and simple time series models). The temporal structure of the problem also allows us to introduce complex, real-world challenges such as anomalous data and concept drift etc. Another benefit of using a dataset that spans an entire year is that it reinforces the concept of statistical significance. While a naive forecast and/or controller may outperform a more sophisticated counterpart on a given day, the superior algorithm should outperform over a long enough period of time.
56
+
57
+ Posing the problem in such a relatable manner also allows students to easily see the monetary costs of prediction errors from machine learning models, and whether complex machine learning algorithms actually improve real-world results when compared to simpler baselines. Finally, it is important to note that the methodology is generalizable across case studies. For instance, in a partner university in the working group, a largely similar approach was followed with detecting wind turbine failures, where the students were again asked to make the link between prediction errors and real world costs.
58
+
59
+
60
+
61
+ Figure 1. Introducing structure through understandable examples, both in machine learning and optimal decision making
62
+
63
+ § 4.2. TEACHING USING ALGORITHMIC STRUCTURE
64
+
65
+ A second recurring theme throughout the course was showing students the existence of different algorithmic archetypes that can be used to solve the same problem (rather than superficially introducing a large number of machine learning algorithms). The archetypes were chosen to illustrate how structure inherent in problems can reduce computational complexity, while simultaneously improving the quality of the solution.
66
+
67
+ For instance, in machine learning it is possible to fit a regression curve to a training dataset using a variety of approaches. To demonstrate this, we start off with a conventional scatter plot (where the independent variable is on the $\mathrm{x}$ -axis and the dependent variable is on the y-axis). After this, we show students that it is possible to fit infinitely many (linear) curves through the point cloud. These curves can be quickly evaluated, but it should already be obvious to the students by this point that the prospects of attempting a large number of solutions to determine the best one is terribly wasteful. As the next step, we discuss a variety of gradient-based and gradient-free algorithms which can be used to arrive at the optimal curve in far fewer iterations. From here, we informally introduce the notion of convexity, and discuss how to optimally solve this particular problem analytically.
68
+
69
+ In optimal decision making, the same analogs exist. To solve a sequential decision making problem, such as when and how much to charge or discharge an electrical battery given a price signal, it is possible to repeatedly sample the solution space (in a brute force manner) to come up with a set of candidate solutions. Here, the solution space is a vector, and each element in this vector represents the control action at a particular time index. These solutions can likewise be quickly evaluated to determine how well they perform, and the best solution can be selected. This approach has the benefit that it provides an intuitive exploration of how to formulate an optimization problem formally without getting the students bogged down in algorithmic complexities. However, as before, the wastefulness of this approach should become quickly obvious. Next, we introduce gradient free optimizers as a potential solution to speed up discovery of optimal solutions. While these should easily outperform brute force methods, their limitations should also become visible with increasing complexity of the problem to be solved (e.g. by increasing the time horizon of the optimization problem or addition of new constraints etc.). Finally, students are introduced to convex optimization since the problem under consideration is convex. This allows students to solve the problem exactly at a far smaller computational footprint.
70
+
71
+ These two examples are meant to highlight the interconnection of learning and optimization theory to engineering students that may have had a background in control. Furthermore, the same approach can also be expanded to other relevant, important topics such as hyperparameter optimization etc. This approach was preferred over introducing students to a large number of machine learning and optimization algorithms, because of our experiences in the field, where even experienced energy practitioners and software developers have trouble leveraging structure in problems and choosing the right tool for the problem at hand. More concretely, over the years, we have seen software engineering colleagues, without a background in learning or optimization theory, applying (variants of) brute force search to determine optimal control actions in extremely high dimensional settings, even when the problem could be readily and exactly solved with convex optimization techniques. On the other end of the spectrum, we have seen colleagues trying to apply convex solvers to non-convex optimization problems, without fully appreciating the complexity of the challenge. The course is structured in a way to address these two key challenges in practice.
72
+
73
+ § 5. COURSE DELIVERY
74
+
75
+ Beyond course content, we also explored different ways to facilitate course delivery. This was motivated by our objective to enable students without a programming background to quickly get up to speed. Some of these were also motivated by our intention to set up a fully functional hybrid learning experience for students scattered in technical programs across Europe.
76
+
77
+ § 5.1. USING INTERACTIVE NOTEBOOKS
78
+
79
+ We used interactive Jupyter notebooks extensively in the teaching process to complement lectures covering theory. In course feedback, we found this was universally appreciated by all the course participants that provided feedback. There were a number of reasons for this. Firstly, students were able to quickly apply the theoretical concepts they learned in practice. Secondly, having access to code provided a jump start of sorts and students were able to achieve a lot more than they would have otherwise. This also considerably allayed our fears of asking non-proficient programmers to read and understand existing Python code.
80
+
81
+ § 5.2. USING CLOUD INFRASTRUCTURE
82
+
83
+ The next question in course delivery was where to host the Jupyter notebooks, i.e. on the students' workstations or the cloud. We decided to go for a cloud-based approach after a thorough analysis of the advantages and disadvantages of such frameworks. Initially, we approached this with a proprietary cloud-based solution, which has since been replaced by Deepnote. The biggest benefit of such a setup is that it enables students without a programming background to focus on learning algorithms and programming, without having to figure out details such as installing libraries and setting up a functional development environment. Cloud frameworks also enable quick feedback to students as the instructor can seamlessly check in and see their code (without the need for git commits etc.). Another advantage with such cloud-based platforms is automatic scaling of computational resources, which means students with fewer compute resources are not at a disadvantage.
84
+
85
+ § 5.3. USING PROGRAMMING AIDS
86
+
87
+ Even though the level of programming required for the course is not very high, we used a number of programming aids to help students better utilize course materials. These include Cambridge University's introductory Python programming notebooks, as well as a limited number of Datacamp licenses that students could borrow to quickly get up to speed on Python programming. Students were also given pointers to other helpful resources on programming in general and Python in particular.
88
+
89
+ § 6. LEARNER EVALUATION
90
+
91
+ We had over 100 participants in the first run of the course in summer 2020. A majority of these were students enrolled in EIT InnoEnergy MS programs, with energy practitioners and PhD researchers forming a sizable minority. Likewise, the same course offered only to students at KU Leuven had 13 MS students from the electrical/energy engineering department.
92
+
93
+ Students following these courses were evaluated on the intended learning outcomes using participation in: (1) a home-made forecasting competition on Kaggle, and (2) a group project where the students were asked to extend what they learned in the course and apply it to a real-world challenge. More concretely, the forecasting competition provides students with electricity demand data for a neighborhood over two months, which they are then asked to forecast for the next week. The choice of methodology is left open to students, although they are required to explain it to their peers in a presentation. Benchmarking students' performance against predefined baselines on Kaggle also serves as an excellent way to intervene early and help struggling students. Some students only explored algorithms introduced in the lecture, while others went beyond these to also experiment with neural networks and tree-based methods. Likewise, students work in teams to complete the project, where the requirement is to extend optimal decision making to also consider design choices (in the lectures, the students are only introduced to optimal control). This makes use of the same algorithms, but is conceptually harder since it interleaves learning and hierarchical optimal decision making.
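A seasonal-naive forecast is a typical predefined baseline for this kind of competition. The sketch below runs it on synthetic hourly demand; the competition's actual data and baselines are not public, so the series and all numbers here are stand-in assumptions.

```python
import numpy as np

# Seasonal-naive baseline: forecast each hour of the coming week with the
# value observed at the same hour one week earlier.  The demand series is
# synthetic (daily + weekly cycle plus noise), standing in for the
# competition's neighborhood-level electricity demand data.
rng = np.random.default_rng(1)
n = 24 * 7 * 9                           # nine weeks of hourly observations
t = np.arange(n)
demand = (10
          + 3 * np.sin(2 * np.pi * t / 24)         # daily cycle
          + 2 * np.sin(2 * np.pi * t / (24 * 7))   # weekly cycle
          + rng.normal(0, 0.5, n))                 # noise

horizon = 24 * 7                         # forecast the next week
train, test = demand[:-horizon], demand[-horizon:]
forecast = train[-horizon:]              # repeat the last observed week

mae = np.mean(np.abs(forecast - test))
print(f"seasonal-naive MAE: {mae:.2f} kW")
```

Any student submission should at minimum beat such a baseline, which gives instructors a simple, objective trigger for early interventions.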
94
+
95
+ § 7. CONCLUSIONS
96
+
97
+ The course, now in its second year and third iteration, has been used to introduce energy engineering students to data science. Many of these students indicated in a pre-course survey that they did not have any background in data science, or even programming. However, most thought it was extremely important for their future career objectives, and were therefore intrinsically motivated to learn more about data science in general, and machine learning in particular.
98
+
99
+ While the course provides students with a useful introduction to the broad field of data science encompassing exploratory data analysis, machine learning and optimal decision making, it is still only a high level overview. One of the next steps for the working group is to harmonize this introductory course across universities, and develop follow-ups on more advanced topics.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/62qGdDGSyqr/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,193 @@
1
+ ## An Introduction to AI for GLAM
2
+
3
+ ## Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ There is a growing interest in the utilisation of Machine Learning (ML) techniques within Galleries, Libraries, Archives and Museums (GLAM), and a corresponding demand for training, to enable practitioners to engage confidently in this area. Staff at these institutions are seeking practical knowledge and skills in machine learning concepts and methods, specific to the work of the sector, such as in the curation and collection of heritage collections. In this paper we discuss the motivations and methods behind 'An Introduction to AI for GLAM', a new Carpentries (Car, b) workshop under development through an international partnership between the British Library, the Smithsonian Institution, and The National Archives UK. This new workshop aims to introduce GLAM practitioners to the key conceptual and practical considerations for supporting, participating in and undertaking machine learning-based research and projects within the GLAM sector.
8
+
9
+ ## 1. Introduction
10
+
11
+ The past decade has seen a growing exploration of machine learning by the Galleries, Libraries, Archives and Museums (GLAM) sector. This interest is reflected in a growing number of networking initiatives ${}^{1}$ , research projects applying machine learning to GLAM collections (Lee et al., 2020; Lincoln et al., 2020; Dee) and projects developing machine learning tools aimed explicitly at GLAM institutions (Kahle et al., 2017).
12
+
13
+ A potential barrier to the effective adoption of machine learning within the sector is a "skills gap" amongst staff (Cox, 2021). The exact content of this skills gap depends largely on the perceived role of GLAM staff in relation to Machine Learning. Should staff be directly building and training models? Should they be able to document training data developed from library content? Should they be able to work as part of a team developing machine learning models?
14
+
15
+ This paper briefly provides some background on the GLAM sector and existing training initiatives aimed at GLAM staff. It then introduces our Carpentries workshop 'An Introduction to AI for GLAM'.
16
+
17
+ ## 2. Machine Learning and the GLAM Sector
18
+
19
+ The GLAM sector encompasses a broad range of institutions in terms of size, budget, collections and primary audience. It is therefore difficult to make broad statements about the sector that will be completely accurate. However, there are areas of common activity and focus:
20
+
21
+ - Cataloguing and other forms of metadata generation.
22
+
23
+ - Enabling search and discovery of collections.
24
+
25
+ - Supporting and carrying out research.
26
+
27
+ - Public engagement and crowdsourcing.
28
+
29
+ These areas are all ones in which machine learning could be - or already is - having an impact. Beyond this there are also shared aims across the GLAM sector as well as existing groups organised as part of the GLAM sector (Zorich et al., 2008). This suggests that developing teaching material for this sector is worthwhile.
30
+
31
+ ## 3. GLAM driven ML Training Initiatives
32
+
33
+ Major GLAM institutions such as the British Library, The National Archives UK and the Smithsonian Institution have all undertaken to develop a wide range of training opportunities for their staff in this area, taking a variety of training approaches in getting this complex material across to a diversity of staff.
34
+
35
+ ---
36
+
37
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
38
+
39
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
40
+
41
+ ${}^{1}$ These include AI4LAM: https://sites.google.com/view/ai4lam, the CENL AI in Libraries Network Group: https://www.cenl.org/networkgroups/ai-in-libraries-network-group/, and the AEOLIAN network: https://www.aeolian-network.net/about/
42
+
43
+ ---
44
+
45
+ ### 3.1. Library Carpentry
46
+
47
+ Library Carpentry is an initiative under the Carpentries (Car, b) umbrella focused on teaching software skills to library and information related communities. Library Carpentry was originally developed in 2014 in response to demand for access to software and data skills amongst people working in libraries (Baker et al., 2016). The Library Carpentry curriculum covers materials that would be relevant to GLAM staff wanting to work with machine learning, including introductions to 'tidy data', version control systems, use of the command line, and basic programming in Python. ${}^{2}$
48
+
49
+ ### 3.2. Digital Scholarship Training Programme at British Library
50
+
51
+ The Digital Research and Curator Team at the National Library of the UK has run a digital and data skills training programme for British Library staff since 2012 (Dig). This programme creates opportunities for colleagues to develop the skills and knowledge necessary to support, and undertake in their own right, emerging areas of modern scholarship such as the Digital Humanities. The programme supports several different modes of training delivery, from one-hour lunchtime lectures and a monthly reading group, through two-hour hands-on exploratory Hack and Yacks, to full-day or week-long courses on a given digital topic. The topic of machine learning has featured heavily throughout this programme in recent years, with a particular emphasis on introducing a novice audience to high-level concepts and use cases for ML in the GLAM sector, rather than delivering the software skills to implement them.
52
+
53
+ ### 3.3. Machine Learning Club at The National Archives
54
+
55
+ The National Archives have run a number of activities under the banner of Machine Learning Club for all interested staff. The initiative began with a one-off introductory talk, expanding to a series of talks covering multiple aspects of the topic. Enthusiasm amongst attendees to learn more led to the running of a series of workshops for technical and non-technical staff alike to gain hands-on experience of machine learning. As with the British Library, the aim was to teach concepts and for participants to understand their role within an ML eco-system.
56
+
57
+ ### 3.4. Computing for Cultural Heritage
58
+
59
+ The Computing for Cultural Heritage project (Com, a) was an Institute of Coding funded trial (2019/2021) that saw Birkbeck University of London, the British Library and The National Archives UK develop a new one-year postgraduate certificate aimed at providing information professionals across the GLAM sector with an understanding of basic programming, analytic tools and computing environments to support them in their daily work. It was born out of a need to further support those who, having gained a keen interest in machine learning through institutional staff training, required the programming skills to practically undertake it (Com, b).
60
+
61
+ These diverse training initiatives, all designed from a GLAM practitioner's perspective on ML, have had positive impacts, with participants reporting increased confidence in interactions with external data scientists, and being able to bring ML into their own research. ${}^{3}$
62
+
63
+ ## 4. An Introduction to AI for GLAM
64
+
65
+ In this section, we introduce our workshop, "An Introduction to AI for GLAM", currently under development as part of the "Carpentries Incubator". ${}^{4}$ We outline the aims, topics and delivery methods of this workshop, alongside a reflection on why these were chosen.
66
+
67
+ The materials for this course were developed as part of a Carpentries Lesson Development Study Group (Car, a). This study group took place over a couple of months and was intended to help participants develop new Carpentries lessons. Four members of the group, representing three GLAM institutions, decided to collaborate on this AI lesson.
68
+
69
+ ### 4.1. Learner profiles
70
+
71
+ As part of the process of developing the lesson materials, learner profiles were created. The Carpentries recommend the development of learner profiles as a way of better identifying the target audience and their needs (Wilson, 2019). Learner profiles require the description of some characteristics of an expected learner for the material, for example: "What is their expected educational level?", "What type of exposure do they have to the technologies you plan to teach?" and "What are the pain points they are currently experiencing?" (Wilson, 2019). Although the people depicted in the profiles are fictional, the development of the profiles often drew on real people who work in our home institutions.
72
+
73
+ One challenge of using the learner profiles for our lesson was adapting them to focus slightly less on their technical skills. The learner profiles were also adapted slightly to focus more on our learners' potential attitudes toward machine learning. Whilst we were developing the lessons, we wanted to keep in mind the varying levels of enthusiasm for machine learning from the target audience.
74
+
75
+ ---
76
+
77
+ ${}^{3}$ See for instance student projects undertaken by Computing for Cultural Heritage https://www.bl.uk/case-studies/computing-for-cultural-heritage-student-projects and https://blog.nationalarchives.gov.uk/computing-cholera-topic-modelling-the-catalogue-entries-
78
+
79
+ ${}^{4}$ The incubator is a place for lessons to be developed outside of the core Carpentries curriculum https://github.com/carpentries-incubator
80
+
81
+ ${}^{2}$ https://librarycarpentry.org/lessons/
82
+
83
+ ---
84
+
85
+ ### 4.2. Aims
86
+
87
+ The role that GLAM staff should play in machine learning projects remains an open question. Some GLAM institutions might choose to "outsource" most of their machine learning efforts to commercial solutions, whilst others will want their staff to be involved more directly. Whilst the stakes are not as high, this mirrors discussions in other disciplines, such as medicine, around the role domain experts should play and, crucially, what they need to know about machine learning (Sim et al., 2021; Olczak et al., 2021). We would argue that regardless of whether GLAM staff will be "directly" involved, machine learning methods will be underpinning so many technologies in the future that basic literacy around machine learning is crucial for all GLAM staff.
88
+
89
+ Following this desire to develop a basic understanding of machine learning, 'An Introduction to AI for GLAM' has several aims. The primary overarching goal of the material is to provide an accessible introduction to machine learning for GLAM staff that is relevant to their work. In particular, the materials aim to introduce basic machine learning concepts and demystify training machine-learning models. The material also seeks to provide a high-level overview of what machine learning is good at and where its limitations lie. Giving a realistic account of the field is especially important in the context of the growing use of machine learning in the GLAM sector and the risks of machine learning approaches being "oversold" to staff.
90
+
91
+ Another central aim of the course is to emphasise ethical considerations. There are growing calls for, and examples of, integrating ethics into the curricula of machine learning courses (Garrett et al., 2020; Saltz et al., 2019). Highlighting potential ethical issues, particularly related to GLAM "data" challenges, is a possible role for GLAM staff in machine learning (Coleman, 2020).
92
+
93
+ Finally, the lesson material aims to give a sense of the steps involved in a machine learning project, from identifying a "business need" to deploying and monitoring models. The overview of these steps aims to help prepare GLAM staff to work as part of a machine learning project team rather than on the technical implementation of these different stages of a machine learning project.
94
+
95
+ ### 4.3. Lesson Topics
96
+
97
+ We chose the topics covered in the workshop material with these learners and aims in mind. The topics, as a result, differ from what may often be covered as part of an introductory machine learning course. The current "episodes" include:
98
+
99
+ - What are Artificial Intelligence (AI) and Machine Learning?
100
+
101
+ - What is Machine Learning good at?
102
+
103
+ - Understanding and managing bias.
104
+
105
+ - Applying Machine Learning.
106
+
107
+ - The Machine Learning ecosystem.
108
+
109
+ Since the aims of the session are to provide a broad introduction to the topic for GLAM staff, the topics have a strong focus on the practical applications and steps in machine learning, with less emphasis on conceptual topics. Some topics included in most machine learning introductions, for example loss functions, are not covered in any detail. With that said, the first complete module of the session does provide a basic conceptual introduction to machine learning.
110
+
111
+ ### 4.4. Delivery Methods
112
+
113
+ Unlike many introductions to machine learning, the materials do not include 'hands-on' programming components. However, the goal of the lesson material is still to be practically focused and interactive. The Carpentries place a strong emphasis on interactivity and the inclusions of exercises in lesson materials.
114
+
115
+ Since the lessons do not cover coding, we did not include coding exercises. Instead, the aims of the exercises were to:
116
+
117
+ - Test and develop an understanding of important machine learning concepts.
118
+
119
+ - Encourage a reflection of how machine learning could be used within a GLAM setting.
120
+
121
+ - Encourage a reflection on how machine learning could be utilised within specific institutions in which the participants of the workshops are based.
122
+
123
+ - Emphasise a critical reflection on the ethical issues that can be raised by using machine learning and particularly how these appear in the context of GLAM institutions and collections.
124
+
125
+ Examples of the types of exercises include:
126
+
127
+ - Multiple choice questions designed to test that concepts have been understood. For example, understanding the difference between supervised and unsupervised learning.
128
+
129
+ - A discussion prompt for thinking about how machine learning could be utilised: "How might object detection help speed up digitisation?".
130
+
131
+ - Group discussion of "points at which bias may enter the pipeline, and questions/strategies GLAM staff might want to consider in order to manage it."
132
+
133
+ - A "hands-on" activity exploring commercial computer vision services to reflect on their potential strengths and weaknesses.
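The supervised/unsupervised distinction targeted by the first type of exercise can be illustrated with a minimal scikit-learn sketch on synthetic data. This is an illustration added here for clarity, not part of the workshop materials, which deliberately contain no coding.

```python
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Synthetic 2-D data with three groups; the labels y are known.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised learning: the model is trained on labelled examples (X, y),
# e.g. catalogue records with known subject headings.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", round(clf.score(X, y), 2))

# Unsupervised learning: the model sees only X and must find structure
# (here, clusters) itself, e.g. grouping undescribed images by similarity.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", sorted(Counter(km.labels_).values()))
```

The key conceptual point the exercise tests is visible in the code: the supervised model receives `y` during fitting, while the unsupervised one never does.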
134
+
135
+ We aimed for these exercises to build on experiences of running Library Carpentry workshops, which include similar discussion exercises. ${}^{5}$ These types of exercises are often not focused on teaching a "hands-on" skill as such but instead focus on increasing the learner's broad understanding of a topic and, crucially, their confidence.
136
+
137
+ ### 4.5. Community development and maintenance
138
+
139
+ We have several reasons for developing this workshop as part of the Carpentries ecosystem. The Carpentries workshop materials are generated from source files in a GitHub repository. This allows for anyone to make a pull request or open an issue related to the lesson. Beyond this technical ability to make changes, the Carpentries organise regular events to encourage review and development of existing materials.
140
+
141
+ Machine learning is a rapidly developing field with regular technical advances. Beyond this, there is also a growing maturity around the deployment of machine learning models in various domains. The use of machine learning in the GLAM sector will continue to develop over time, making it likely that aspects of our lesson will need to be updated. We hope that integrating the lesson materials inside the Carpentries ecosystem will help ensure that a broader community can update the material.
142
+
143
+ ## 5. Conclusion
144
+
145
+ Introductory machine learning training that is grounded in the specific applications and use cases relevant to cultural heritage, and that is practical without being too overtly technical, will be key to ensuring the wider adoption of machine learning methods across GLAM. Though major cultural heritage institutions have in recent years undertaken to provide their own staff with a variety of training in this area, there is much to be gained by pooling expertise, resources, and experience to deliver a variety of open training materials available sector-wide. 'An Introduction to AI for GLAM' represents one such effort, aiming to meet the growing demand for machine learning training specific to the sector and to provide a strong foundation for staff to gain confidence in entering this complex area. The Carpentries, and specifically Library Carpentry, though historically tending towards more technical lessons, provides an ideal home for such training. Library Carpentry acts as a natural signpost for digital-skill-seeking GLAM professionals, and has a diverse and dedicated community ready to ensure not just the open, shared and continued maintenance of the materials, but also the development of related and more advanced courses branching off the core foundation as necessary.
146
+
147
+ ## References
148
+
149
+ Carpentries Lesson Development Study Group. https://carpentries-incubator.github.io/study-groups/, a.
150
+
151
+ The Carpentries. https://carpentries.org/index.html, b.
152
+
153
+ Computing Cultural Heritage - The British Library. https://www.bl.uk/projects/computingculturalheritage, a.
154
+
155
+ Computing Cultural Heritage - student projects. https://www.bl.uk/case-studies/computing-for-cultural-heritage-student-projects, b.
156
+
157
+ Deep Discoveries. https://tanc-ahrc.github.io/DeepDiscoveries/index.html.
158
+
159
+ Digital Scholarship Training Programme. https://www.bl.uk/projects/digital-scholarship-training-programme.
160
+
161
+ Baker, J., Moore, C., Priego, E., Alegre, R., Cope, J., Price, L., Stephens, O., van Strien, D., and Wilson, G. Library Carpentry: Software skills training for library professionals. LIBER Quarterly, 26(3):141-162, November 2016. ISSN 2213-056X. doi: 10.18352/lq.10176.
162
+
163
+ Coleman, C. N. Managing bias when library collections become data. 2020.
164
+
165
+ Cox, A. The impact of ai, machine learning, automation and robotics on the information professions. Technical report, CILIP, 2021.
166
+
167
+ Garrett, N., Beard, N., and Fiesler, C. More Than "If Time Allows": The Role of Ethics in AI Education. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, AIES '20, pp. 272-278, New York, NY, USA, February 2020. Association for Computing Machinery. ISBN 978-1-4503-7110-0. doi: 10.1145/3375627.3375868.
168
+
169
+ Kahle, P., Colutto, S., Hackl, G., and Mühlberger, G. Transkribus - A Service Platform for Transcription, Recognition and Retrieval of Historical Documents. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), volume 04, pp. 19-24, November 2017. doi: 10.1109/ICDAR.2017.307.
170
+
171
+ Lee, B., Mears, J., Jakeway, E., Ferriter, M. M., Adams, C., Yarasavage, N., Thomas, D., Zwaard, K., and Weld,
172
+
173
+ ---
174
+
175
+ ${}^{5}$ An example includes a discussion exercise focused on "jargon-busting" terminology used around code or software development.
176
+
177
+ ---
178
+
179
+ D. S. The newspaper navigator dataset: Extracting and analyzing visual content from 16 million historic newspaper pages in chronicling america. ArXiv, abs/2005.01583, 2020.
180
+
181
+ Lincoln, M., Corrin, J., Davis, E., and Weingart, S. B. Campi: Computer-aided metadata generation for photo archives initiative. 2020.
182
+
183
+ Olczak, J., Pavlopoulos, J., Prijs, J., Ijpma, F., Doornberg, J., Lundström, C., Hedlund, J., and Gordon, M. Presenting artificial intelligence, deep learning, and machine learning studies to clinicians and healthcare stakeholders: an introductory reference with a guideline and a clinical ai research (cair) checklist proposal. Acta orthopaedica, pp. 1-13, 2021.
184
+
185
+ Saltz, J., Skirpan, M., Fiesler, C., Gorelick, M., Yeh, T., Heckman, R., Dewar, N., and Beard, N. Integrating Ethics within Machine Learning Courses. ACM Transactions on Computing Education, 19(4):32:1-32:26, August 2019. doi: 10.1145/3341164.
186
+
187
+ Sim, J. Z. T., Fong, Q. W., Huang, W., and Tan, C. Machine learning in medicine: what clinicians should know. Singapore medical journal, 2021.
188
+
189
+ Wilson, G. Teaching Tech Together: How to Make Your Lessons Work and Build a Teaching Community around Them. CRC Press, October 2019. ISBN 978-1-00-072801- 9.
190
+
191
+ Zorich, D., Waibel, G., and Erway, R. Beyond the Silos of the LAMs: Collaboration Among Libraries, Archives and Museums. 2008.
192
+
193
+
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/62qGdDGSyqr/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,137 @@
1
+ § AN INTRODUCTION TO AI FOR GLAM
2
+
3
+ § ANONYMOUS AUTHORS ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ There is a growing interest in the utilisation of Machine Learning (ML) techniques within Galleries, Libraries, Archives and Museums (GLAM), and a corresponding demand for training, to enable practitioners to engage confidently in this area. Staff at these institutions are seeking practical knowledge and skills in machine learning concepts and methods, specific to the work of the sector, such as in the curation and collection of heritage collections. In this paper we discuss the motivations and methods behind 'An Introduction to AI for GLAM', a new Carpentries (Car, b) workshop under development through an international partnership between the British Library, the Smithsonian Institution, and The National Archives UK. This new workshop aims to introduce GLAM practitioners to the key conceptual and practical considerations for supporting, participating in and undertaking machine learning-based research and projects within the GLAM sector.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ The past decade has seen a growing exploration of machine learning by the Galleries, Libraries, Archives and Museums (GLAM) sector. This interest is reflected in a growing number of networking initiatives ${}^{1}$ , research projects applying machine learning to GLAM collections (Lee et al., 2020; Lincoln et al., 2020; Dee) and projects developing machine learning tools aimed explicitly at GLAM institutions (Kahle et al., 2017).
12
+
13
+ A potential barrier to the effective adoption of machine learning within the sector is a "skills gap" amongst staff (Cox, 2021). The exact content of this skills gap depends largely on the perceived role of GLAM staff in relation to Machine Learning. Should staff be directly building and training models? Should they be able to document training data developed from library content? Should they be able to work as part of a team developing machine learning models?
14
+
15
+ This paper briefly provides some background on the GLAM sector and existing training initiatives aimed at GLAM staff. It then introduces our Carpentries workshop 'An Introduction to AI for GLAM'.
16
+
17
+ § 2. MACHINE LEARNING AND THE GLAM SECTOR
18
+
19
+ The GLAM sector encompasses a broad range of institutions in terms of size, budget, collections and primary audience. It is therefore difficult to make broad statements about the sector that will be completely accurate. However, there are areas of common activity and focus:
20
+
21
+ * Cataloguing and other forms of metadata generation.
22
+
23
+ * Enabling search and discovery of collections.
24
+
25
+ * Supporting and carrying out research.
26
+
27
+ * Public engagement and crowdsourcing.
28
+
29
+ These areas are all ones in which machine learning could be - or already is - having an impact. Beyond this there are also shared aims across the GLAM sector as well as existing groups organised as part of the GLAM sector (Zorich et al., 2008). This suggests that developing teaching material for this sector is worthwhile.
30
+
31
+ § 3. GLAM DRIVEN ML TRAINING INITIATIVES
32
+
33
+ Major GLAM institutions such as the British Library, The National Archives UK and the Smithsonian Institution have all undertaken to develop a wide range of training opportunities for their staff in this area, taking a variety of training approaches in getting this complex material across to a diversity of staff.
34
+
35
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
36
+
37
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
38
+
39
+ ${}^{1}$ These include AI4LAM: https://sites.google.com/view/ai4lam, the CENL AI in Libraries Network Group: https://www.cenl.org/networkgroups/ai-in-libraries-network-group/, and the AEOLIAN network: https://www.aeolian-network.net/about/
40
+
41
+ § 3.1. LIBRARY CARPENTRY
42
+
43
+ Library Carpentry is an initiative under the Carpentries (Car, b) umbrella focused on teaching software skills to library and information related communities. Library Carpentry was originally developed in 2014 in response to demand for access to software and data skills amongst people working in libraries (Baker et al., 2016). The Library Carpentry curriculum covers materials that would be relevant to GLAM staff wanting to work with machine learning, including introductions to 'tidy data', version control systems, use of the command line, and basic programming in Python. ${}^{2}$
44
+
45
+ § 3.2. DIGITAL SCHOLARSHIP TRAINING PROGRAMME AT BRITISH LIBRARY
46
+
47
+ The Digital Research and Curator Team at the National Library of the UK has run a digital and data skills training programme for British Library staff since 2012 (Dig). This programme creates opportunities for colleagues to develop the skills and knowledge necessary to support, and undertake in their own right, emerging areas of modern scholarship such as the Digital Humanities. The programme supports several different modes of training delivery, from one-hour lunchtime lectures and a monthly reading group, through two-hour hands-on exploratory Hack and Yacks, to full-day or week-long courses on a given digital topic. The topic of machine learning has featured heavily throughout this programme in recent years, with a particular emphasis on introducing a novice audience to high-level concepts and use cases for ML in the GLAM sector, rather than delivering the software skills to implement them.
48
+
49
+ § 3.3. MACHINE LEARNING CLUB AT THE NATIONAL ARCHIVES
50
+
51
+ The National Archives have run a number of activities under the banner of Machine Learning Club for all interested staff. The initiative began with a one-off introductory talk, expanding to a series of talks covering multiple aspects of the topic. Enthusiasm amongst attendees to learn more led to a series of workshops in which technical and non-technical staff alike gained hands-on experience of machine learning. As with the British Library, the aim was to teach concepts and for participants to understand their role within an ML ecosystem.
52
+
53
+ § 3.4. COMPUTING FOR CULTURAL HERITAGE
54
+
55
+ The Computing for Cultural Heritage project (Com, a) was an Institute of Coding funded trial (2019/2021) in which Birkbeck University of London, the British Library and The National Archives UK developed a new one-year postgraduate certificate aimed at providing information professionals across the GLAM sector with an understanding of basic programming, analytic tools and computing environments to support them in their daily work. It was born out of a need to further support those who, having gained a keen interest in machine learning through institutional staff training, required the programming skills to practically undertake it (Com, b).
56
+
57
+ These diverse training initiatives, all designed from a GLAM practitioner's perspective on ML, have had positive impacts, with participants reporting increased confidence in interactions with external data scientists and an ability to bring ML into their own research. ${}^{3}$
58
+
59
+ § 4. AN INTRODUCTION TO AI FOR GLAM
60
+
61
+ In this section, we introduce our workshop, "An Introduction to AI for GLAM", currently under development as part of the "Carpentries Incubator". ${}^{4}$ We outline the aims, topics and delivery methods of this workshop alongside a reflection on why these were chosen.
62
+
63
+ The materials for this course were developed as part of a Carpentries Lesson Development Study Group (Car, a). This study group took place over a couple of months and was intended to help participants develop new Carpentries lessons. Four members of the group, representing three GLAM institutions, decided to collaborate on this AI lesson.
64
+
65
+ § 4.1. LEARNER PROFILES
66
+
67
+ As part of the process of developing the lesson materials, learner profiles were created. The Carpentries recommend the development of learner profiles as a way of better identifying the target audience and their needs (Wilson, 2019). Learner profiles describe some characteristics of an expected learner for the material, for example: "What is their expected educational level?", "What type of exposure do they have to the technologies you plan to teach?" and "What are the pain points they are currently experiencing?" (Wilson, 2019). Although the people depicted in the profiles are fictional, their development often drew on real people who work in our home institutions.
68
+
69
+ One challenge of using learner profiles for our lesson was adapting them to focus slightly less on technical skills and slightly more on our learners' potential attitudes toward machine learning. Whilst developing the lessons, we wanted to keep in mind the varying levels of enthusiasm for machine learning amongst the target audience.
70
+
71
+ ${}^{3}$ See for instance student projects undertaken by Computing for Cultural Heritage https://www.bl.uk/case-studies/computing-for-cultural-heritage-student-projects and https://blog.nationalarchives.gov.uk/computing-cholera-topic-modelling-the-catalogue-entries-
72
+
73
+ ${}^{4}$ The incubator is a place for lessons to be developed outside of the core Carpentries curriculum https://github.com/carpentries-incubator
74
+
75
+ ${}^{2}$ https://librarycarpentry.org/lessons/
76
+
77
+ § 4.2. AIMS
78
+
79
+ The role that GLAM staff should play in machine learning projects remains an open question. Some GLAM institutions might choose to "outsource" most of their machine learning efforts to commercial solutions, whilst others will want their staff to be involved more directly. Whilst the stakes are not as high, this mirrors discussions in other disciplines, such as medicine, around the role domain experts should play and, crucially, what they need to know about machine learning (Sim et al., 2021; Olczak et al., 2021). We would argue that regardless of whether GLAM staff will be "directly" involved, machine learning methods will be underpinning so many technologies in the future that basic literacy around machine learning is crucial for all GLAM staff.
80
+
81
+ Following this desire to develop a basic understanding of machine learning, 'An Introduction to AI for GLAM' has several aims. The primary overarching goal of the material is to provide an accessible introduction to machine learning for GLAM staff that is relevant to their work. In particular, the materials aim to introduce basic machine learning concepts and demystify training machine-learning models. The material also seeks to provide a high-level overview of what machine learning is good at and where its limitations lie. Giving a realistic account of the field is especially important in the context of the growing use of machine learning in the GLAM sector and the risks of machine learning approaches being "oversold" to staff.
82
+
83
+ Another central aim of the course is to emphasise ethical considerations. There are growing calls and examples of integrating ethics into the curriculums of machine learning courses (Garrett et al., 2020; Saltz et al., 2019). Highlighting potential ethical issues, particularly related to GLAM "data" challenges, is a possible role for GLAM staff in machine learning (Coleman, 2020).
84
+
85
+ Finally, the lesson material aims to give a sense of the steps involved in a machine learning project, from identifying a "business need" to deploying and monitoring models. The overview of these steps aims to help prepare GLAM staff to work as part of a machine learning project team rather than on the technical implementation of these different stages of a machine learning project.
86
+
87
+ § 4.3. LESSON TOPICS
88
+
89
+ We chose the topics covered in the workshop material with the learners and aims in mind. The topics, as a result, differ from what may often be covered as part of an introductory machine learning course. The current "episodes" include:
90
+
91
+ * What are Artificial Intelligence (AI) and Machine Learning?
92
+
93
+ * What is Machine Learning good at?
94
+
95
+ * Understanding and managing bias.
96
+
97
+ * Applying Machine Learning.
98
+
99
+ * The Machine Learning ecosystem.
100
+
101
+ Since the aims of the session are to provide a broad introduction to the topic for GLAM staff, the topics have a strong focus on the practical applications and steps in machine learning and a lighter focus on conceptual topics. Some topics included in most machine learning introductions, for example loss functions, are not covered in any detail. With that said, the first complete module of the session does provide a basic conceptual introduction to machine learning.
102
+
103
+ § 4.4. DELIVERY METHODS
104
+
105
+ Unlike many introductions to machine learning, the materials do not include 'hands-on' programming components. However, the goal of the lesson material is still to be practically focused and interactive. The Carpentries place a strong emphasis on interactivity and the inclusion of exercises in lesson materials.
106
+
107
+ Since the lessons do not cover coding, we did not include coding exercises. Instead, the aims of the exercises were to:
108
+
109
+ * Test and develop an understanding of important machine learning concepts.
110
+
111
+ * Encourage a reflection of how machine learning could be used within a GLAM setting.
112
+
113
+ * Encourage a reflection on how machine learning could be utilised within specific institutions in which the participants of the workshops are based.
114
+
115
+ * Emphasise a critical reflection on the ethical issues that can be raised by using machine learning and particularly how these appear in the context of GLAM institutions and collections.
116
+
117
+ Examples of the types of exercises include:
118
+
119
+ * Multiple choice questions designed to test that concepts have been understood. For example, understanding the difference between supervised and unsupervised learning.
120
+
121
+ * A discussion prompt for thinking about how machine learning could be utilised: "How might object detection help speed up digitisation?".
122
+
123
+ * Group discussion of "points at which bias may enter the pipeline, and questions/strategies GLAM staff might want to consider in order to manage it."
124
+
125
+ * A "hands-on" activity exploring commercial computer vision services to reflect on their potential strengths and weaknesses.
126
+
127
+ We aimed for these exercises to build on experiences of running Library Carpentry workshops, which include similar discussion exercises. ${}^{5}$ These types of exercise are often not focused on teaching a "hands-on" skill as such but instead focus on increasing the learner's broad understanding of a topic and, crucially, their confidence.
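As a concrete illustration of the first exercise type, checking that learners can tell supervised from unsupervised learning, the distinction can be sketched in a few lines of plain Python. This is a hypothetical illustration for the reader, not part of the lesson materials (which deliberately contain no code); the "leaflet"/"book" framing is an invented GLAM-flavoured example.

```python
from statistics import mean

# Tiny 1-D dataset: page counts of digitised items.
data = [2, 3, 4, 40, 42, 45]

# --- Supervised: labels are provided, the model learns a mapping ---
labels = ["leaflet", "leaflet", "leaflet", "book", "book", "book"]

def predict(x, data, labels):
    """1-nearest-neighbour: copy the label of the closest training point."""
    nearest = min(range(len(data)), key=lambda i: abs(data[i] - x))
    return labels[nearest]

assert predict(5, data, labels) == "leaflet"
assert predict(39, data, labels) == "book"

# --- Unsupervised: no labels, the algorithm only finds structure ---
def two_means(xs, iters=10):
    """A 1-D k-means with k=2: returns the two cluster centres."""
    a, b = min(xs), max(xs)
    for _ in range(iters):
        ca = [x for x in xs if abs(x - a) <= abs(x - b)]
        cb = [x for x in xs if abs(x - a) > abs(x - b)]
        a, b = mean(ca), mean(cb)
    return a, b

a, b = two_means(data)
assert round(a) == 3 and round(b) == 42  # groups emerge without any labels
```

The same data yields a label predictor when labels are supplied and two discovered groups when they are not, which is exactly the contrast the multiple-choice exercise tests.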
128
+
129
+ § 4.5. COMMUNITY DEVELOPMENT AND MAINTENANCE
130
+
131
+ We have several reasons for developing this workshop as part of the Carpentries ecosystem. The Carpentries workshop materials are generated from source files in a GitHub repository. This allows for anyone to make a pull request or open an issue related to the lesson. Beyond this technical ability to make changes, the Carpentries organise regular events to encourage review and development of existing materials.
132
+
133
+ Machine learning is a rapidly developing field with regular technical advances. Beyond this, there is also a growing maturity around the deployment of machine learning models in various domains. The use of machine learning in the GLAM sector will continue to develop over time, making it likely that aspects of our lesson will need to be updated. We hope that integrating the lesson materials inside the Carpentries ecosystem will help ensure that a broader community can update the material.
134
+
135
+ § 5. CONCLUSION
136
+
137
+ Introductory machine learning training that is grounded in the specific applications and use cases relevant to cultural heritage, and that is practical without being too overtly technical, will be key to ensuring the wider adoption of machine learning methods across GLAM. Though major cultural heritage institutions have in recent years provided their own staff with a variety of training in this area, there is much to be gained by pooling expertise, resources and experience to deliver open training materials available sector-wide. 'An Introduction to AI for GLAM' represents one such effort, aiming to meet the growing demand for machine learning training specific to the sector and to provide a strong foundation for staff to gain confidence in entering this complex area. The Carpentries, and specifically Library Carpentry, though tending towards more technical lessons historically, provide the ideal home for such training. Library Carpentry acts as a natural signpost for digital-skill-seeking GLAM professionals, and has a diverse and dedicated community at the ready to ensure not just the open, shared and continued maintenance of the materials, but also the development of related and more advanced courses branching off the core foundation as necessary.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/P1NDIAZBwq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,105 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Data Analysis and Machine Learning Applications Curriculum for Open Science
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ The strong and growing role of machine learning (ML) in the physical sciences is well established and appropriate given the complex detectors and large data sets at the foundational layer of many scientific explorations. Increasingly, Physics departments are offering curricula to their undergraduate and graduate students that focus on the intersection of data science, machine learning and physical science. In this paper, we provide some perspective from experience in developing curriculum at the intersection of ML and physics on the potential role of open science in ML education and present some of the opportunities and challenges in the form of open questions for our community to explore.
8
+
9
+ ## 1. Introduction
10
+
11
+ It is important to teach students the fundamentals of how to analyze and interpret scientific data. Increasingly, this involves the application of machine learning methods, tools and techniques to problems common in scientific research such as classification and regression. Each day there are new applications of machine learning to the physical sciences in ways that are advancing our knowledge of nature.
12
+
13
+ In this paper, we provide some perspective, from experience in developing curriculum at the intersection of ML and physics, on the potential role of open science in ML education, and present some of the opportunities and challenges in the form of open questions for our community to explore.
14
+
15
+ One may ponder the value and effectiveness of physicists teaching ML. Should this not be best left to the computer science departments? It is this question that we explore briefly in this paper.
16
+
17
+ ## 2. Personal experience
18
+
19
+ A few years ago, I began discussing our curriculum with undergraduate physics majors at our university during townhall-style events in an attempt to assess the level of interest in a physics-oriented course in machine learning applications. The response was overwhelmingly positive and it was clear to me that many of our students want this type of training. In 2018, I developed a new course in our Department titled Data Analysis and Machine Learning Applications for Physicists.
20
+
21
+ I designed this course to teach the fundamentals of scientific data analysis and interpretation to our students and empower them through practical utilization of modern machine learning tools and techniques using open source data and software. This course has a number of innovative technical elements and was, to my knowledge, the first course in the Physics Department to be delivered solely through Jupyter Notebooks, GitHub, and mini research projects utilizing open scientific data.
22
+
23
+ I have taught this course to both undergraduate and graduate students over the Spring 2019 and Fall 2019 semesters and continue to develop the curriculum with the help of a postdoc and graduate student in the research group that I lead in particle physics.
24
+
25
+ This course is designed to be interactive and collaborative, employing Active Learning methods, at the same time developing students' skills and knowledge. We live in an increasingly data-centric world, with both people and machines learning from vast amounts of data. There has never been a time where early-career physicists were more in need of a solid understanding in the basics of scientific data analysis and interpretation, data-driven inference and machine learning, and a working knowledge of the most important tools and techniques from modern data science than today.
26
+
27
+ Particle physics holds a prominent role within academic curricula. There are a number of reasons for this, including the "fundamental" nature of our science, the compelling historical development of our field, theoretical research that applies and develops advanced mathematics, powerful applications, and high-visibility spin-off technologies (e.g. the WWW).
28
+
29
+ ---
30
+
31
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
32
+
33
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
34
+
35
+ ---
36
+
37
+ At the same time, machine learning has an increasingly prominent role in our science. Physicists are increasingly collaborating with computer scientists and industry to develop "physics-driven" or "physics-inspired" machine learning architectures and methods.
38
+
39
+ Topics covered in the course that I mention include:
40
+
41
+ - Notebooks and numerical python
42
+
43
+ - Handling and Visualizing Data
44
+
45
+ - Finding structure in data
46
+
47
+ - Measuring and reducing dimensionality
48
+
49
+ - Adapting linear methods to nonlinear problems
50
+
51
+ - Estimating probability density
52
+
53
+ - Probability theory
54
+
55
+ - Statistical methods
56
+
57
+ - Bayesian statistics
58
+
59
+ - Markov-chain Monte Carlo in practice
60
+
61
+ - Stochastic processes and Markov-chain theory
62
+
63
+ - Variational inference
64
+
65
+ - Optimization
66
+
67
+ - Computational graphs and probabilistic programming
68
+
69
+ - Bayesian model selection
70
+
71
+ - Learning in a probabilistic context
72
+
73
+ - Supervised learning in Scikit-Learn
74
+
75
+ - Cross validation
76
+
77
+ - Neural networks
78
+
79
+ - Deep learning
80
+
81
+ Topics are demonstrated in-class through live-code examples and slides within Jupyter notebooks.
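To give a flavour of such live-code examples, here is a minimal plain-Python sketch touching two of the topics above, supervised learning and cross validation. It is a hypothetical illustration written for this paper, not an excerpt from the course notebooks, and deliberately avoids external libraries.

```python
import random
from statistics import mean

random.seed(0)

# Synthetic 1-D regression data: y = 2x + 1 plus Gaussian noise.
xs = [random.uniform(0, 10) for _ in range(50)]
ys = [2 * x + 1 + random.gauss(0, 0.5) for x in xs]

def fit_least_squares(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def k_fold_mse(xs, ys, k=5):
    """k-fold cross-validation: average mean squared error on held-out folds."""
    idx = list(range(len(xs)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    errs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        m, b = fit_least_squares([xs[i] for i in train],
                                 [ys[i] for i in train])
        errs.append(mean((ys[i] - (m * xs[i] + b)) ** 2 for i in fold))
    return mean(errs)

m, b = fit_least_squares(xs, ys)
cv = k_fold_mse(xs, ys)
print(f"slope={m:.2f} intercept={b:.2f} cv_mse={cv:.2f}")
```

The fitted slope and intercept recover the generating parameters, and the cross-validated error settles near the noise variance, which is the kind of sanity check students rehearse in the notebooks before moving to Scikit-Learn's versions of the same ideas.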
82
+
83
+ Lectures The lectures include physics and data science pedagogy, demonstrated through live examples in Jupyter notebooks that students work through in class.
84
+
85
+ Homework Homework is an important part of the course where students have an opportunity to apply the techniques they learn to problems relevant to the analysis of scientific data. Students submit their homework via a private GitHub repository they create at the beginning of the semester. The grading is done primarily via NBGRADER, but quality is checked and feedback is provided by teaching assistants that grade the submitted material.
86
+
87
+ Projects Approximately halfway through the course, students have the opportunity to choose from a set of projects that use open scientific data. They are asked to answer certain questions about the data, supported by their data analysis and written up in a Jupyter notebook which is submitted in an analogous manner as the homework. The completed project notebook should also include background information about how the data is generated, its scientific relevance and the students' methodology.
88
+
89
+
90
+
91
+ Projects in this course that I mention include:
92
+
93
+ - Searching for Higgs Boson Decays with Deep Learning
94
+
95
+ - Search for Exotic Particles
96
+
97
+ - Exploration of the Galaxy Zoo (Sky Survey data)
98
+
99
+ Additional projects are underway using open data from scientific disciplines such as quantum information, materials science, biophysics, genomics, and others.
100
+
101
+ ## 3. Outlook and Opportunities
102
+
103
+ There are many opportunities and challenges in developing curriculum at the intersection of physics and machine learning. A crucial element of courses that involve student projects is the availability and discoverability of open scientific data for education. Projects based on ML applications to physics data are a strength of these types of courses. As such, a well-curated set of data for education is critical. Similarly, curating a list of courses that science faculty have developed or taught would be incredibly useful for sharing information and experience.
104
+
105
+ Physicists have a great deal to offer in ML education, as topics can be explored using real data from experiments and simulations of phenomena in the world we live in.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/P1NDIAZBwq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,101 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § DATA ANALYSIS AND MACHINE LEARNING APPLICATIONS CURRICULUM FOR OPEN SCIENCE
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ The strong and growing role of machine learning (ML) in the physical sciences is well established and appropriate given the complex detectors and large data sets at the foundational layer of many scientific explorations. Increasingly, Physics departments are offering curricula to their undergraduate and graduate students that focus on the intersection of data science, machine learning and physical science. In this paper, we provide some perspective from experience in developing curriculum at the intersection of ML and physics on the potential role of open science in ML education and present some of the opportunities and challenges in the form of open questions for our community to explore.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ It is important to teach students the fundamentals of how to analyze and interpret scientific data. Increasingly, this involves the application of machine learning methods, tools and techniques to problems common in scientific research such as classification and regression. Each day there are new applications of machine learning to the physical sciences in ways that are advancing our knowledge of nature.
12
+
13
+ In this paper, we provide some perspective, from experience in developing curriculum at the intersection of ML and physics, on the potential role of open science in ML education, and present some of the opportunities and challenges in the form of open questions for our community to explore.
14
+
15
+ One may ponder the value and effectiveness of physicists teaching ML. Should this not be best left to the computer science departments? It is this question that we explore briefly in this paper.
16
+
17
+ § 2. PERSONAL EXPERIENCE
18
+
19
+ A few years ago, I began discussing our curriculum with undergraduate physics majors at our university during townhall-style events in an attempt to assess the level of interest in a physics-oriented course in machine learning applications. The response was overwhelmingly positive and it was clear to me that many of our students want this type of training. In 2018, I developed a new course in our Department titled Data Analysis and Machine Learning Applications for Physicists.
20
+
21
+ I designed this course to teach the fundamentals of scientific data analysis and interpretation to our students and empower them through practical utilization of modern machine learning tools and techniques using open source data and software. This course has a number of innovative technical elements and was, to my knowledge, the first course in the Physics Department to be delivered solely through Jupyter Notebooks, GitHub, and mini research projects utilizing open scientific data.
22
+
23
+ I have taught this course to both undergraduate and graduate students over the Spring 2019 and Fall 2019 semesters and continue to develop the curriculum with the help of a postdoc and graduate student in the research group that I lead in particle physics.
24
+
25
+ This course is designed to be interactive and collaborative, employing Active Learning methods, at the same time developing students' skills and knowledge. We live in an increasingly data-centric world, with both people and machines learning from vast amounts of data. There has never been a time where early-career physicists were more in need of a solid understanding in the basics of scientific data analysis and interpretation, data-driven inference and machine learning, and a working knowledge of the most important tools and techniques from modern data science than today.
26
+
27
+ Particle physics holds a prominent role within academic curricula. There are a number of reasons for this, including the "fundamental" nature of our science, the compelling historical development of our field, theoretical research that applies and develops advanced mathematics, powerful applications, and high-visibility spin-off technologies (e.g. the WWW).
28
+
29
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
30
+
31
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
32
+
33
+ At the same time, machine learning has an increasingly prominent role in our science. Physicists are increasingly collaborating with computer scientists and industry to develop "physics-driven" or "physics-inspired" machine learning architectures and methods.
34
+
35
+ Topics covered in the course that I mention include:
36
+
37
+ * Notebooks and numerical python
38
+
39
+ * Handling and Visualizing Data
40
+
41
+ * Finding structure in data
42
+
43
+ * Measuring and reducing dimensionality
44
+
45
+ * Adapting linear methods to nonlinear problems
46
+
47
+ * Estimating probability density
48
+
49
+ * Probability theory
50
+
51
+ * Statistical methods
52
+
53
+ * Bayesian statistics
54
+
55
+ * Markov-chain Monte Carlo in practice
56
+
57
+ * Stochastic processes and Markov-chain theory
58
+
59
+ * Variational inference
60
+
61
+ * Optimization
62
+
63
+ * Computational graphs and probabilistic programming
64
+
65
+ * Bayesian model selection
66
+
67
+ * Learning in a probabilistic context
68
+
69
+ * Supervised learning in Scikit-Learn
70
+
71
+ * Cross validation
72
+
73
+ * Neural networks
74
+
75
+ * Deep learning
76
+
77
+ Topics are demonstrated in-class through live-code examples and slides within Jupyter notebooks.
78
+
79
+ Lectures The lectures include physics and data science pedagogy, demonstrated through live examples in Jupyter notebooks that students work through in class.
80
+
81
+ Homework Homework is an important part of the course where students have an opportunity to apply the techniques they learn to problems relevant to the analysis of scientific data. Students submit their homework via a private GitHub repository they create at the beginning of the semester. The grading is done primarily via NBGRADER, but quality is checked and feedback is provided by teaching assistants that grade the submitted material.
82
+
83
+ Projects Approximately halfway through the course, students have the opportunity to choose from a set of projects that use open scientific data. They are asked to answer certain questions about the data, supported by their data analysis and written up in a Jupyter notebook which is submitted in an analogous manner as the homework. The completed project notebook should also include background information about how the data is generated, its scientific relevance and the students' methodology.
84
+
85
+
86
+
87
+ Projects in this course that I mention include:
88
+
89
+ * Searching for Higgs Boson Decays with Deep Learning
90
+
91
+ * Search for Exotic Particles
92
+
93
+ * Exploration of the Galaxy Zoo (Sky Survey data)
94
+
95
+ Additional projects are underway using open data from scientific disciplines such as quantum information, materials science, biophysics, genomics, and others.
96
+
97
+ § 3. OUTLOOK AND OPPORTUNITIES
98
+
99
+ There are many opportunities and challenges in developing curriculum at the intersection of physics and machine learning. A crucial element of courses that involve student projects is the availability and discoverability of open scientific data for education. Projects based on ML applications to physics data are a strength of these types of courses. As such, a well-curated set of data for education is critical. Similarly, curating a list of courses that science faculty have developed or taught would be incredibly useful for sharing information and experience.
100
+
101
+ Physicists have a great deal to offer in ML education, as topics can be explored using real data from experiments and simulations of phenomena in the world we live in.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/Qf691VTesDa/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,175 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Teaching Machine Learning in Argentina: the ClusterAI pipeline
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Teaching machine learning has been a growing activity in almost every educational establishment. Despite the high availability of study materials, the Latin American region has seen a lack of educational programs focused on machine learning. Additionally, the majority of educational materials are available only in English. In this work we propose the ClusterAI pipeline, based on a curated list of topics in Spanish and a collaboration with the Buenos Aires city government that opens public datasets, letting students apply machine learning models on real data.
8
+
9
+ ## 1. Introduction
10
+
11
+ The last decade has experienced a bloom in the machine learning and data science fields, propelled mainly by the improvement of new processors, data availability and new statistical learning algorithms. Additionally, the availability of thousands of new papers, open course materials and new platforms to run and implement learning models allows access to machine learning content for a wide audience of practitioners, researchers and enthusiasts. Despite the fact that the availability of materials to learn machine learning has been increasing during the last years, there is a lack of content in Spanish for the Latin American region. Multiple meetings and workshops have been held in the South America region, such as Khipu (khipu, 2019), the Sao Paulo Advanced School on Learning from Data (sp-sas, 2019) and the Machine Learning Summer School 2018 (MLSS2018, 2018); nevertheless, these events were designed mainly for a semi-senior audience and attendees with at least 1 or 2 years of experience in the machine learning field, such as PhD students.
12
+
13
+ Additionally, we found that there are not enough open courses designed for undergraduate students with initial and basic training in algebra and statistics (such as engineering, biology, social sciences, economics and design students) to let them take their first steps in machine learning techniques and applications. In this work we present the ClusterAI pipeline, held at the Universidad Tecnologica Nacional Buenos Aires, a free and open source machine learning program designed for last-year STEM and social sciences students. The course has multiple objectives: training in the statistical learning approach, using computational tools to run machine learning methods, using real datasets from the Buenos Aires city data portal, and presenting results to a wide audience.
14
+
15
+ ## 2. Course Requirements
16
+
17
+ From a computational point of view, the course does not require students to have previous programming skills. To help students deal with the first steps, a crash-course workshop has been designed to introduce students to the Python and Jupyter notebook framework before starting the proposed ClusterAI course, where libraries such as Numpy, Pandas, Matplotlib and Scipy are introduced for the first steps.
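The crash-course first steps with these libraries might look like the following minimal NumPy sketch. This is a hypothetical illustration written for this paper (assuming NumPy is installed), not an excerpt from the actual workshop notebooks; the temperature data is invented.

```python
import numpy as np

# First steps with NumPy: arrays, summary statistics and boolean indexing,
# the kind of material covered before the main ClusterAI chapters.
temps = np.array([21.5, 23.0, 19.8, 25.1, 22.4])  # e.g. daily temperatures

print(temps.mean())           # average of the array
print(temps.std())            # spread of the values
print(temps[temps > 22.0])    # boolean indexing: keep only warm days

# Vectorised arithmetic replaces explicit Python loops:
fahrenheit = temps * 9 / 5 + 32
print(fahrenheit.round(1))
```

Exercises of this kind let students practice vectorised thinking on small arrays before meeting the larger city datasets.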
18
+
19
+ From a theoretical point of view, the course assumes a student has basic knowledge of algebra, statistics and calculus. It is assumed that incoming students understand the concepts of matrices, vectors, random variables and probability density functions.
20
+
21
+ ### 2.1. Student profile
22
+
23
+ The ClusterAI pipeline initially started as a machine learning course in the Industrial Engineering degree at the Universidad Tecnologica Nacional of Buenos Aires for last-year undergraduate students. Nevertheless, the program has been opened to students from other disciplines such as electronics engineering, computer engineering, biology, economics and political science. Despite this heterogeneity, most of the students share a lack of formal training in any programming language or advanced analytics.
24
+
25
+ ## 3. The ClusterAI pipeline: Contents of the course and learning path
26
+
27
+ The course is divided into 7 chapters. Each chapter is designed as a workshop, and the full course as a sequence of workshops, where each one covers a specific topic. The first six chapters of the course are: exploratory data analysis, supervised learning, unsupervised learning, dimensionality reduction, introduction to natural language processing and neural networks.
28
+
29
+ ---
30
+
31
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
32
+
33
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
34
+
35
+ ---
36
+
37
+ ![01963a93-a75d-752a-8d4f-5af07bb1ce3a_1_312_371_367_555_0.jpg](images/01963a93-a75d-752a-8d4f-5af07bb1ce3a_1_312_371_367_555_0.jpg)
38
+
39
+ Figure 1. Pipeline of topics covered during the ClusterAI course.
40
+
41
+ Chapter 07 corresponds to the final project presented by the students, described in Sections 4 and 6 of this work. The bibliography consists of papers corresponding to each topic and the following books: The Elements of Statistical Learning (Friedman et al., 2001), Pattern Recognition and Machine Learning (Bishop, 2006) and the Deep Learning book (Goodfellow et al., 2016), among others. The following subsections detail the first six chapters of the course.
42
+
43
+ ### 3.1. Exploratory Data Analysis
44
+
45
+ The exploratory data analysis (EDA) part is the initial stage of the course; its objective is to teach students how to handle, pre-process, explore, visualize and describe tabular data. Although Machine Learning is commonly applied to different data modalities such as Natural Language, Images, Time Series and Graphs, we decided to start with the most common and easiest-to-work-with data format: tabular data.
46
+
47
+ The EDA part is composed of three chapters of the course. These first three chapters are more technical than theoretical, since the objective is to introduce the students to the Python stack for data analysis. We assume the learning curve for a student to get used to Numpy, Pandas, Matplotlib and Scipy can take three to four weeks. During these chapters students learn to use Jupyter Notebooks and perform exploratory data analysis on tabular data, mainly .CSV files. The exploratory analysis covers multiple visualization and analytical approaches such as heat-maps, box-plots,
48
+
49
+ scatter-plots, bar-plots and histograms, among others, to describe the data-set visually. To handle tabular data, the Pandas and Numpy libraries are used to filter, concatenate, merge and process data tables. Additionally, pre-processing tasks such as adding dummy variables and cleaning or imputing NaN values are explained. Finally, feature pre-processing methods such as auto-scaling and standardization (van den Berg et al., 2006) and Min-Max normalization (Jain & Bhandare, 2011) are presented to transform real-valued and integer features.
50
+
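The pre-processing steps above (dummy variables, NaN imputation, Min-Max normalization and standardization) can be sketched with Pandas and scikit-learn. The small table below is a hypothetical stand-in for a BA Data export; the column names are illustrative only:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Hypothetical toy table in the spirit of the BA Data portal exercises
df = pd.DataFrame({
    "neighborhood": ["Palermo", "Recoleta", "Palermo", None],
    "price_usd": [250000.0, None, 310000.0, 180000.0],
    "rooms": [3, 2, 4, 2],
})

# Impute missing numeric values with the column median
df["price_usd"] = df["price_usd"].fillna(df["price_usd"].median())
# Drop rows where the categorical key is still missing
df = df.dropna(subset=["neighborhood"])
# One-hot encode (dummy variables) the categorical column
df = pd.get_dummies(df, columns=["neighborhood"])

# Min-Max normalization and standardization of the numeric features
num_cols = ["price_usd", "rooms"]
df_minmax = df.copy()
df_minmax[num_cols] = MinMaxScaler().fit_transform(df[num_cols])
df_std = df.copy()
df_std[num_cols] = StandardScaler().fit_transform(df[num_cols])
```

After Min-Max normalization each numeric column lies in [0, 1], which is the transformation students apply before distance-based methods later in the course.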
51
+ All the previous concepts are explained from a theoretical point of view and simultaneously applied through coding exercises on real data from the Buenos Aires city government open data portal (GCBA, 2019).
52
+
53
+ ### 3.2. Supervised Learning
54
+
55
+ Once the exploratory data analysis part is covered, the first Machine Learning concept taught is the Supervised Learning approach. This part has three chapters on statistical learning theory and applications, covering classification and regression learning tasks.
56
+
57
+ As an introduction to supervised learning, we first explain how samples are drawn from a d-dimensional space $\mathcal{X} \in {R}^{d}$, where $d$ is the number of dimensions or features, and why a multivariate approach is needed to learn from high dimensional data.
58
+
59
+ Then supervised learning is explained by first introducing a hypothesis space of potential functions to be used in the learning task and the role of the loss function to pick the best one within the available space.
60
+
61
+ After defining the differences between regression and classification tasks, linear and nonlinear models are explained, along with how these play a role in the required complexity of the model. Linear decision boundaries are studied for binary and multi-class classification problems, together with the bias-versus-variance concept and how it impacts the model selection task. To deal with the model selection step, cross validation, grid search and model evaluation are studied.
62
+
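The model selection workflow described above (cross validation plus grid search) can be sketched with scikit-learn. The data-set and the hyper-parameter grid below are illustrative choices, not ones fixed by the course:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Standardize features, then search SVM hyper-parameters
# with 5-fold cross validation on the training split only
pipe = make_pipeline(StandardScaler(), SVC())
param_grid = {"svc__C": [0.1, 1, 10], "svc__kernel": ["linear", "rbf"]}
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)

# Evaluate the selected model on the held-out test set
test_acc = grid.score(X_test, y_test)
```

Keeping the scaler inside the pipeline ensures it is refit on each cross-validation fold, avoiding information leakage from validation data, which is one of the points emphasized when teaching model evaluation.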
63
+ The classifiers used in the course are support vector machines, k-nearest neighbors, logistic regression and random forests. For regression tasks, linear regression, polynomial regression, support vector regression and k-nearest neighbors regression are studied.
64
+
65
+ To study the non-linear case of the support vector machines Kernel Methods are presented.
66
+
67
+ ### 3.3. Unsupervised Learning
68
+
69
+ The unsupervised learning part is based mainly on clustering and community detection methods. The idea in this part is to explain the concept of similarity between sample vectors and how different similarity measures can be used to determine whether a pair of samples is similar or not. Clusters are explained, and downstream analysis is studied once the segmentation of samples is obtained.
70
+
71
+ Two popular clustering methods, K-means and Hierarchical clustering, are explained and applied to data-sets of Buenos Aires subway stations or house prices in Buenos Aires City. Evaluation metrics such as the Silhouette Index (Brun et al., 2007) and the Rand Index (Hubert & Arabie, 1985) are studied to let the student decide which number of clusters is best for a given problem.
72
+
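Choosing the number of clusters with the Silhouette Index can be sketched as follows; synthetic blobs stand in for the BA Data features (subway stations, house prices) used in class:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the course's real data-sets:
# four well-separated groups in 2D
X, _ = make_blobs(n_samples=300,
                  centers=[[0, 0], [8, 0], [0, 8], [8, 8]],
                  cluster_std=0.8, random_state=42)

# Fit K-means for several candidate k and keep the silhouette score
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

# Pick the k that maximizes the Silhouette Index
best_k = max(scores, key=scores.get)
```

Plotting `scores` against `k` gives the silhouette curve students inspect before committing to a segmentation.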
73
+ Although dimensionality reduction can be included within unsupervised learning, we decided to study it in a separate chapter since it can be used for both supervised and unsupervised downstream learning tasks.
74
+
75
+ ### 3.4. Dimensionality Reduction
76
+
77
+ In the dimensionality reduction chapter linear and nonlinear unsupervised approaches are studied.
78
+
79
+ The first concept introduced is the curse of dimensionality and why the sample-to-feature ratio is an important aspect to analyze before implementing a machine learning task. Additionally, a section explaining the differences between feature selection and feature extraction methods is included. The three methods studied in this chapter are Principal Component Analysis (PCA), Kernel Principal Component Analysis (kPCA) and T-distributed Stochastic Neighbor Embedding (t-SNE). The idea of this chapter is to teach students to visualize high dimensional data in two dimensions, to reduce the complexity of a learning task through dimension reduction, and to improve the sample-to-feature ratio to avoid the curse of dimensionality.
80
+
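A minimal sketch of the two-dimensional visualization exercise with PCA, using the Iris data-set that also appears elsewhere in the course (the same pattern applies to kPCA and t-SNE by swapping the estimator):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Standardize features so each contributes equally to the components
X_std = StandardScaler().fit_transform(X)

# Project the 4-dimensional data onto its first two principal components
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_std)

# Fraction of total variance captured by the 2D projection
explained = pca.explained_variance_ratio_.sum()
```

Students then scatter-plot `X_2d` colored by `y` and compare how much structure each method preserves in two dimensions.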
81
+ ### 3.5. Introduction to Natural Language Processing
82
+
83
+ The idea of the NLP chapter is to give a brief introduction to data formats beyond the tabular case, such as natural language. In this chapter simple and introductory techniques such as Tokenization, Bag of Words and TF-IDF are studied and applied to toy and real data-sets. We encourage students to build their own data-sets with real data by collecting more than 200 headlines from two newspapers from Argentina: Clarin and Pagina12. Both newspapers are known to represent two opposite ends of the political spectrum, so their economic headlines tend to encode signal according to two classes. Students build data-sets from these newspapers and learn a low dimensional representation of the economic headlines. Then supervised and unsupervised approaches are used to analyze how headlines from each newspaper tend to group.
84
+
85
+ Another application is the pre-processing and classification of positive and negative movie reviews, which is a popular and common teaching example for NLP tasks.
86
+
87
+ ### 3.6. Neural Networks
88
+
89
+ The neural network chapter is the last one before the student project part. To ease the students' first steps, neural networks are trained only on tabular data, since convolutions are more complex and out of scope for the students at this initial stage.
90
+
91
+ Before studying neural nets, the Perceptron model is presented, followed by different activation functions. Then the multilayer perceptron model is presented, along with how architectural choices such as the number of hidden layers or the number of neurons per layer affect the model. Next, loss functions such as Mean Squared Error and Cross Entropy are studied, in addition to the concept of local minima in the loss landscape. Additionally, the gradient descent and backpropagation algorithms are explained. Finally, it is explained how to improve neural network training with regularization terms, learning rate reduction on plateau, Dropout and Batch Normalization. With these concepts, students are encouraged to train neural networks on simple, tabular labeled data-sets such as the Wisconsin Breast Cancer or the Iris data-set and perform classification. The idea is to let students train neural networks on simple problems to understand how the hyper-parameters affect the results. Once basic neural network implementations for supervised learning such as classification problems are introduced, the Autoencoder model is studied as an unsupervised nonlinear dimension reduction method. In this part students benchmark low dimensional visualization with the autoencoder against PCA and Kernel-PCA. The application case involving real data is the gene expression cancer dataset from the International Cancer Genome Consortium, which lets students learn a low dimensional latent space of tumors and perform supervised or unsupervised downstream tasks.
92
+
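The classification exercise on the Iris data-set can be sketched as follows. The course uses Tensorflow Keras; this sketch uses scikit-learn's `MLPClassifier` instead, purely to keep the example compact, and the layer sizes are an illustrative choice students would vary:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Fit the scaler on training data only
scaler = StandardScaler().fit(X_train)

# Two hidden layers; students experiment with layer sizes,
# activations and learning-rate schedules
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), activation="relu",
                    max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)

# Held-out accuracy
acc = mlp.score(scaler.transform(X_test), y_test)
```

Repeating the fit while changing one hyper-parameter at a time (hidden sizes, activation, learning rate) is the kind of controlled experiment the chapter asks students to run.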
93
+ Finally, to show the potential of neural networks to deal with time series and sequences, the Recurrent Neural Network model is studied with an application to time-series classification of signals measured in engines, where each engine has a different manufacturing setup.
94
+
95
+ This topic concludes the methodology and theoretical aspects of the course. Students are encouraged to use the presented methods on real data-sets from the Buenos Aires city open data portal presented in the next section.
96
+
97
+ ## 4. Student Projects
98
+
99
+ One of the main goals for the students in this course is to develop, from the ground up, an applied Machine Learning project aimed at solving real problems or gaining new insights into an existing situation.
100
+
101
+ In groups of three students, each group has the objective of picking a data-set from the Open Data Portal of the Buenos Aires city government (GCBA, 2019), performing an exploratory data analysis, and implementing a supervised or unsupervised learning approach to discover insights from the selected data-set. All implementations are required to be done in a Python framework, as explained in Section 5. Additionally, each group is assigned an alumni student serving as a mentor, helping them progress toward the objective and ensuring a consistent result to be shared at the end of the course. During the course, students have different checkpoints with professors and mentors on how to choose a suitable data-set, working strategies, best-suited algorithms and other technical issues.
102
+
103
+ The final delivery is divided into 3 pillars: first, a Jupyter Notebook with the development of the project; second, a technical report explaining the root of the problem they were trying to solve, how they overcame all difficulties, and the conclusions; and last but not least, a research poster presented to the public in an open event at the university, attended not only by members of the university but also by people from different backgrounds and disciplines.
104
+
105
+ With this methodology, students are exposed to a holistic view, from understanding the problem, building the pipeline and overcoming technical issues, to presenting results in an open Data Science fair with over 100 participants from different backgrounds.
106
+
107
+ ## 5. Technologies used
108
+
109
+ The ClusterAI pipeline is a Python-based class. All classroom exercises and explanations are coded in Jupyter notebooks (Pérez & Granger, 2007), which serve multiple purposes: explaining and visualizing theoretical concepts with toy data, solving simple exercises with popular datasets such as the Wisconsin Breast Cancer or Iris dataset, and coding implementations on real data as case studies class by class.
110
+
111
+ Sklearn and Tensorflow Keras are the main libraries used to implement ML algorithms during the course, along with Pandas, Numpy (Harris et al., 2020) and Matplotlib for data wrangling and visualization.
112
+
113
+ Additionally, all workshop material is stored in a public repository on Github (clusterai, 2018), where notebooks and scripts are built and published by professors in collaboration with mentors. New content is created and updated each year. For communication between mentors, professors and students, the course uses Slack channels (slack, 2021). The tool is useful to communicate the schedule for each class, upload recorded classes and complementary content, create polls and share related news, posts, blogs, etc. Moreover, during the COVID-19 pandemic the channel proved useful as a replacement for the physical room, allowing students to share content and decentralizing the flow of information.
114
+
115
+ As additional content, there is a YouTube channel whose purpose is to host tutorials on common questions among the students, for example installing Python and Anaconda (youtube, 2021).
116
+
117
+ ## 6. Alliance with Buenos Aires Government
118
+
119
+ As explained in Section 4, the last stage of the ClusterAI pipeline asks students for a machine learning application project divided in two parts: an exploratory data analysis and a machine learning application. Through an alliance with the Buenos Aires City Government, the datasets used are obtained from the Buenos Aires Open Data portal data.buenosaires.gob.ar (BA Data). The BA DATA portal is open sourced and its data-sets record approximately 60,000 downloads per month. It showcases 421 data-sets from 31 different organizations within the Buenos Aires City Government, related to 12 central government initiatives: Public Administration, Culture and Tourism, Human Development, Economy and Finances, Education, Gender, Environment, Transportation, Health, Security, Urbanism and Territory, and COVID-19. Since December 2019, the Undersecretary of Evidence-Based Public Policies has been the central team responsible for data and open data management and manages the platform. BA DATA is Buenos Aires City's open data portal, where public data-sets are generated, saved and published. Its goal is to strengthen the city government's transparency, encourage citizen participation, promote data reuse and facilitate innovation from data.
120
+
121
+ Since 2018, ClusterAI students have completed more than 50 projects using datasets from BA Data. Some published projects done by the students include regression models applied to family violence estimation (Bellini, 2019), detection of financial behaviour using machine learning models (Weigandi, 2019), prediction of vegetation type (Tettamanti, 2019), a commercial opportunities map (Libertun, 2019), air quality analysis via machine learning models (Cavallucci, 2019) and car robbery analysis (Carpaneto, 2019), among others.
122
+
123
+ ## 7. Conclusions
124
+
125
+ In this work we presented the open ClusterAI pipeline used to teach machine learning to undergraduate students in Argentina. The proposed pipeline is composed of seven chapters: the first six are dedicated to theory and applications of multiple machine learning methods, while the last chapter is focused on student projects using the Buenos Aires open data portal. Student projects include a wide range of applications using real data from city sensors. Finally, students are encouraged to present their projects via poster sessions, technical reports and Github repositories, in order to promote open developments and to encourage the local community to use public data. Although the ClusterAI pipeline was designed for engineering students, it has also been validated with students from other disciplines such as social science, economics and biology.
126
+
127
+ ## References
128
+
129
+ Bellini. Family violence estimation, 2019. URL https://github.com/MartinBellini/Bellini-Grupo14.
132
+
133
+ Bishop, C. M. Pattern Recognition and Machine Learning. Springer, 2006.
134
+
135
+ Brun, M., Sima, C., Hua, J., Lowey, J., Carroll, B., Suh, E., and Dougherty, E. R. Model-based evaluation of clustering validation measures. Pattern recognition, 40 (3):807-824, 2007.
136
+
137
+ Carpaneto. Car robbery analysis, 2019. URL https://github.com/ilcarbo/data_science_clusterai_carpaneto.
138
+
139
+ Cavallucci. Air quality analysis, 2019. URL https://github.com/Aldicavallucci/data_science_clusterai_cavallucci.
140
+
141
+ clusterai. Clusterai machine learning course, 2018. URL https://github.com/clusterai/clusterai_2021.
142
+
143
+ Friedman, J., Hastie, T., Tibshirani, R., et al. The elements of statistical learning, volume 1. Springer series in statistics New York, 2001.
144
+
145
+ GCBA. Buenos aires data, 2019. URL https://data.buenosaires.gob.ar/.
146
+
147
+ Goodfellow, I., Bengio, Y., Courville, A., and Bengio, Y. Deep learning, volume 1. MIT press Cambridge, 2016.
148
+
149
+ Harris, C. R., Millman, K. J., van der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N. J., et al. Array programming with numpy. Nature, 585(7825):357-362, 2020.
150
+
151
+ Hubert, L. and Arabie, P. Comparing partitions. Journal of classification, 2(1):193-218, 1985.
152
+
153
+ Jain, Y. K. and Bhandare, S. K. Min max normalization based data perturbation method for privacy protection. International Journal of Computer & Communication Technology, 2(8):45-50, 2011.
154
+
155
+ khipu. Khipu, 2019. URL https://khipu.ai/.
156
+
157
+ Libertun. Commercial opportunity map, 2019. URL https://github.com/FedericoLibertun/TP-Mapa-de-Oportunidades-Comerciales.
158
+
159
+ MLSS2018. Machine learning summer school 2018, 2018. URL https://mlss2018.net.ar/.
160
+
161
+ Pérez, F. and Granger, B. E. IPython: a system for interactive scientific computing. Computing in Science and Engineering, 9(3):21-29, May 2007. ISSN 1521-9615. doi: 10.1109/MCSE.2007.53. URL https://ipython.org.
162
+
163
+ slack. Slack technologies, 2021. URL https://slack.com/.
166
+
167
+ spsas. Sao paulo school of advanced science on learning from data, 2019. URL https://sites.usp.br/datascience/spsas-learning-from-data/.
168
+
169
+ Tettamanti. Plant type prediction, 2019. URL https://github.com/atettamanti/TP-grupo-0.
170
+
171
+ van den Berg, R. A., Hoefsloot, H. C., Westerhuis, J. A., Smilde, A. K., and van der Werf, M. J. Centering, scaling, and transformations: improving the biological information content of metabolomics data. BMC genomics, 7(1):1-15, 2006.
172
+
173
+ Weigandi. Financial behaviour, 2019. URL https://github.com/iweigandi/data_science_clusterai_weigandi.
174
+
175
+ youtube. Youtube clusterai course, 2021. URL https://www.youtube.com/channel/UC3AG3wNJDoMbWpwEGtNO0wQ/featured.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/Qf691VTesDa/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,121 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § TEACHING MACHINE LEARNING IN ARGENTINA: THE CLUSTERAI PIPELINE
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Teaching machine learning has been a growing activity in almost any educational establishment. Despite the high availability of study materials, the Latin American region has seen a lack of educational programs focused on machine learning. Additionally, the majority of educational materials are available only in English. In this work we propose the ClusterAI pipeline, based on a curated list of topics in Spanish and a collaboration with the Buenos Aires city government that opens public data-sets, letting students apply machine learning models on real data.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ The last decade has experienced a bloom in the machine learning and data science fields, propelled mainly by improvements in processors, data availability and new statistical learning algorithms. Additionally, the availability of thousands of new papers, open course materials and new platforms to run and implement learning models allows access to machine learning content for a wide audience of practitioners, researchers and enthusiasts. Although the availability of materials to learn machine learning has been increasing in recent years, there is a lack of content in Spanish for the Latin American region. Multiple meetings and workshops have been organized in South America, such as Khipu (khipu, 2019), the Sao Paulo Advanced School on Learning from Data (sp-sas, 2019) and the Machine Learning Summer School 2018 (MLSS2018, 2018); nevertheless, these events were designed mainly for a semi-senior audience with at least 1 or 2 years of experience in the machine learning field, such as PhD students.
12
+
13
+ Additionally, we found that there are not enough open courses designed for undergraduate students with only initial and basic training in algebra and statistics, such as engineering, biology, social sciences, economics and design students, to let them take their first steps in Machine Learning techniques and applications. In this work we present the ClusterAI pipeline held at the Universidad Tecnologica Nacional Buenos Aires, a free and open source Machine Learning program designed for last-year STEM and Social Sciences students. The course has multiple objectives: training in the statistical learning approach, using computational tools to run machine learning methods, using real data-sets from the Buenos Aires city data portal and presenting results to a wide audience.
14
+
15
+ § 2. COURSE REQUIREMENTS
16
+
17
+ From a computational point of view the course does not require previous programming skills. To help students with the first steps, a crash-course workshop has been designed to introduce them to the Python and Jupyter notebook framework before starting the proposed ClusterAI course, where libraries such as Numpy, Pandas, Matplotlib and Scipy are introduced.
18
+
19
+ From a theoretical point of view the course assumes a student has basic knowledge of algebra, statistics and calculus. It is assumed that incoming students understand the concepts of matrices, vectors, random variables and probability density functions.
20
+
21
+ § 2.1. STUDENT PROFILE
22
+
23
+ The ClusterAI pipeline initially started as a machine learning course in the Industrial Engineering degree at the Universidad Tecnologica Nacional of Buenos Aires for last-year undergraduate students. Nevertheless, the program has been opened to students from other disciplines such as electronics engineering, computer engineering, biology, economics and political science. Despite this heterogeneity, most of the students share a lack of formal training in any programming language or advanced analytics.
24
+
25
+ § 3. THE CLUSTERAI PIPELINE: CONTENTS OF THE COURSE AND LEARNING PATH
26
+
27
+ The course is divided into 7 chapters. Each chapter is designed as a workshop, and the full course as a sequence of workshops, where each one covers a specific topic. The first six chapters of the course are: exploratory data analysis, supervised learning, unsupervised learning, dimensionality reduction, introduction to natural language processing and neural networks.
28
+
29
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
30
+
31
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
32
+
33
+ < g r a p h i c s >
34
+
35
+ Figure 1. Pipeline of topics covered during the ClusterAI course.
36
+
37
+ Chapter 07 corresponds to the final project presented by the students, described in Sections 4 and 6 of this work. The bibliography consists of papers corresponding to each topic and the following books: The Elements of Statistical Learning (Friedman et al., 2001), Pattern Recognition and Machine Learning (Bishop, 2006) and the Deep Learning book (Goodfellow et al., 2016), among others. The following subsections detail the first six chapters of the course.
38
+
39
+ § 3.1. EXPLORATORY DATA ANALYSIS
40
+
41
+ The exploratory data analysis (EDA) part is the initial stage of the course; its objective is to teach students how to handle, pre-process, explore, visualize and describe tabular data. Although Machine Learning is commonly applied to different data modalities such as Natural Language, Images, Time Series and Graphs, we decided to start with the most common and easiest-to-work-with data format: tabular data.
42
+
43
+ The EDA part is composed of three chapters of the course. These first three chapters are more technical than theoretical, since the objective is to introduce the students to the Python stack for data analysis. We assume the learning curve for a student to get used to Numpy, Pandas, Matplotlib and Scipy can take three to four weeks. During these chapters students learn to use Jupyter Notebooks and perform exploratory data analysis on tabular data, mainly .CSV files. The exploratory analysis covers multiple visualization and analytical approaches such as heat-maps, box-plots,
44
+
45
+ scatter-plots, bar-plots and histograms, among others, to describe the data-set visually. To handle tabular data, the Pandas and Numpy libraries are used to filter, concatenate, merge and process data tables. Additionally, pre-processing tasks such as adding dummy variables and cleaning or imputing NaN values are explained. Finally, feature pre-processing methods such as auto-scaling and standardization (van den Berg et al., 2006) and Min-Max normalization (Jain & Bhandare, 2011) are presented to transform real-valued and integer features.
46
+
47
+ All the previous concepts are explained from a theoretical point of view and simultaneously applied through coding exercises on real data from the Buenos Aires city government open data portal (GCBA, 2019).
48
+
49
+ § 3.2. SUPERVISED LEARNING
50
+
51
+ Once the exploratory data analysis part is covered, the first Machine Learning concept taught is the Supervised Learning approach. This part has three chapters on statistical learning theory and applications, covering classification and regression learning tasks.
52
+
53
+ As an introduction to supervised learning, we first explain how samples are drawn from a d-dimensional space $\mathcal{X} \in {R}^{d}$, where $d$ is the number of dimensions or features, and why a multivariate approach is needed to learn from high dimensional data.
54
+
55
+ Then supervised learning is explained by first introducing a hypothesis space of potential functions to be used in the learning task and the role of the loss function to pick the best one within the available space.
56
+
57
+ After defining the differences between regression and classification tasks, linear and nonlinear models are explained, along with how these play a role in the required complexity of the model. Linear decision boundaries are studied for binary and multi-class classification problems, together with the bias-versus-variance concept and how it impacts the model selection task. To deal with the model selection step, cross validation, grid search and model evaluation are studied.
58
+
59
+ The classifiers used in the course are support vector machines, k-nearest neighbors, logistic regression and random forests. For regression tasks, linear regression, polynomial regression, support vector regression and k-nearest neighbors regression are studied.
60
+
61
+ To study the non-linear case of the support vector machines Kernel Methods are presented.
62
+
63
+ § 3.3. UNSUPERVISED LEARNING
64
+
65
+ The unsupervised learning part is based mainly on clustering and community detection methods. The idea in this part is to explain the concept of similarity between sample vectors and how different similarity measures can be used to determine whether a pair of samples is similar or not. Clusters are explained, and downstream analysis is studied once the segmentation of samples is obtained.
66
+
67
+ Two popular clustering methods, K-means and Hierarchical clustering, are explained and applied to data-sets of Buenos Aires subway stations or house prices in Buenos Aires City. Evaluation metrics such as the Silhouette Index (Brun et al., 2007) and the Rand Index (Hubert & Arabie, 1985) are studied to let the student decide which number of clusters is best for a given problem.
68
+
69
+ Although dimensionality reduction can be included within unsupervised learning, we decided to study it in a separate chapter since it can be used for both supervised and unsupervised downstream learning tasks.
70
+
71
+ § 3.4. DIMENSIONALITY REDUCTION
72
+
73
+ In the dimensionality reduction chapter linear and nonlinear unsupervised approaches are studied.
74
+
75
+ The first concept introduced is the curse of dimensionality and why the sample-to-feature ratio is an important aspect to analyze before implementing a machine learning task. Additionally, a section explaining the differences between feature selection and feature extraction methods is included. The three methods studied in this chapter are Principal Component Analysis (PCA), Kernel Principal Component Analysis (kPCA) and T-distributed Stochastic Neighbor Embedding (t-SNE). The idea of this chapter is to teach students to visualize high dimensional data in two dimensions, to reduce the complexity of a learning task through dimension reduction, and to improve the sample-to-feature ratio to avoid the curse of dimensionality.
76
+
77
+ § 3.5. INTRODUCTION TO NATURAL LANGUAGE PROCESSING
78
+
79
+ The idea of the NLP chapter is to give a brief introduction to data formats beyond the tabular case, such as natural language. In this chapter simple and introductory techniques such as Tokenization, Bag of Words and TF-IDF are studied and applied to toy and real data-sets. We encourage students to build their own data-sets with real data by collecting more than 200 headlines from two newspapers from Argentina: Clarin and Pagina12. Both newspapers are known to represent two opposite ends of the political spectrum, so their economic headlines tend to encode signal according to two classes. Students build data-sets from these newspapers and learn a low dimensional representation of the economic headlines. Then supervised and unsupervised approaches are used to analyze how headlines from each newspaper tend to group.
80
+
81
+ Another application is the pre-processing and classification of positive and negative movie reviews, which is a popular and common teaching example for NLP tasks.
82
+
83
+ § 3.6. NEURAL NETWORKS
84
+
85
+ The neural network chapter is the last one before the student project part. To ease the students' first steps, neural networks are trained only on tabular data, since convolutions are more complex and out of scope at this initial stage.
86
+
87
+ Before studying neural networks, the Perceptron model is presented, followed by different activation functions. Then the multilayer perceptron model is introduced, together with architectural choices such as the number of hidden layers and the number of neurons per layer. Loss functions such as Mean Squared Error and Cross Entropy are studied, in addition to the concept of local minima in the loss landscape. The gradient descent and backpropagation algorithms are also explained. Finally, it is shown how to improve neural network training with regularization terms, reducing the learning rate on plateaus, Dropout and Batch Normalization. With these concepts, students are encouraged to train neural networks for classification on simple tabular labeled data-sets such as the Wisconsin Breast Cancer or the Iris data-set. The idea is to let students train neural networks on simple problems to understand how all the hyper-parameters affect the results. Once basic supervised implementations such as classification are covered, the Autoencoder model is studied as an unsupervised nonlinear dimension reduction method. In this part students benchmark low-dimensional visualization with the autoencoder against PCA and Kernel PCA. The application case involving real data is the gene expression cancer dataset from the International Cancer Genome Consortium, where students learn a low-dimensional latent space of tumors and perform supervised or unsupervised downstream tasks.
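One possible sketch of such a tabular exercise, using sklearn's `MLPClassifier` in place of an equivalent Keras model for brevity; the hidden layer sizes and other hyper-parameters below are illustrative choices for students to vary, not the course's prescribed settings:

```python
# A small multilayer perceptron on the Wisconsin Breast Cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Standardize features; fit the scaler on training data only.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Two hidden layers; students can change sizes, activation, alpha and
# learning rate to see how each hyper-parameter affects the result.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    alpha=1e-4, max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```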
88
+
89
+ Finally, to show the potential of neural networks to deal with time-series and sequences, the Recurrent Neural Network model is studied with an application to time-series classification from signals measured in engines, where each engine has a different manufacturing setup.
90
+
91
+ This topic concludes the methodology and theoretical aspects of the course. Students are encouraged to use the presented methods on real data-sets from the Buenos Aires city open data portal presented in the next section.
92
+
93
+ § 4. STUDENT PROJECTS
94
+
95
+ One of the main goals for the students in this course is to develop, from the ground up, an applied machine learning project aimed at solving real problems or gaining new insights into an existing situation.
96
+
97
+ Working in groups of three, each group has the objective of picking a data-set from the Open Data Portal of the Buenos Aires city government (GCBA, 2019), performing an exploratory data analysis and implementing a supervised or unsupervised learning approach to discover insights from the selected data-set. All implementations are required to be done in a Python framework as explained in section 5. Additionally, each group is assigned an alumni student serving as a mentor, helping them stay on track and ensuring a consistent result to be shared at the end of the course. During the course, students have different checkpoints with professors and mentors on how to choose a profitable data-set, working strategies, best-suited algorithms and other technical issues.
98
+
99
+ The final delivery is divided into three pillars: first, a Jupyter Notebook with the development of the project; second, a technical report explaining the root of the problem they were trying to solve, how they overcame all difficulties, and the conclusions; and last but not least, a research poster presented to the public in an open event at the university, attended not only by members of the university but also by people from different backgrounds and disciplines.
100
+
101
+ With this methodology, students are exposed to a holistic view: understanding the problem, building the pipeline, overcoming technical issues and presenting results in an open Data Science fair with over 100 participants from different backgrounds.
102
+
103
+ § 5. TECHNOLOGIES USED
104
+
105
+ The ClusterAI pipeline is a Python-based course. All classroom exercises and explanations are coded in Jupyter notebooks (Pérez & Granger, 2007), which serve multiple purposes: explaining and visualizing theoretical concepts with toy data, solving simple exercises with popular datasets such as the Wisconsin Breast Cancer or Iris dataset, and coding implementations on real data as case studies class by class.
106
+
107
+ Sklearn and TensorFlow Keras are the main libraries used to implement ML algorithms during the course, while Pandas, Numpy (Harris et al., 2020) and Matplotlib are used for data wrangling and visualization.
108
+
109
+ Additionally, all workshop material is stored in a public GitHub repository (clusterai, 2018) where notebooks and scripts are built and published by professors in collaboration with mentors; new content is created and updated each year. For communication between mentors, professors and students the course uses Slack channels (slack, 2021). The tool is useful to communicate the schedule for each class, upload recorded classes and complementary content, create polls and share related news, posts, blogs, etc. Moreover, during the COVID-19 pandemic the channel proved useful as a replacement for the physical classroom, allowing students to share content and decentralizing the flow of information.
110
+
111
+ As additional content, there is a YouTube channel where the intention is to upload tutorials on common questions among the students, for example, installing Python and Anaconda (youtube, 2021).
112
+
113
+ § 6. ALLIANCE WITH BUENOS AIRES GOVERNMENT
114
+
115
+ As explained in section 4, the last stage of the ClusterAI pipeline requires students to complete a machine learning application project divided in two parts: an exploratory data analysis and a machine learning application. Through an alliance with the Buenos Aires City Government, the datasets used are obtained from the Buenos Aires Open Data portal data.buenosaires.gob.ar (BA Data). The BA Data portal is open sourced and its data-sets record approximately 60,000 downloads per month. It showcases 421 data-sets from 31 different organizations within the Buenos Aires City Government, related to 12 central government initiatives: Public Administration, Culture and Tourism, Human Development, Economy and Finances, Education, Gender, Environment, Transportation, Health, Security, Urbanism and Territory, and COVID-19. Since December 2019, the Undersecretary of Evidence-Based Public Policies is the central team responsible for data and open data management and manages the platform. BA Data is Buenos Aires City's open data portal, where public data-sets are generated, saved and published. Its goal is to strengthen the city government's transparency, encourage citizen participation, promote data reuse and facilitate innovation from data.
116
+
117
+ Since 2018 the ClusterAI students have completed more than 50 projects using datasets from BA Data. Published projects by students include regression models applied to estimating family violence (Bellini, 2019), detection of financial behaviour using machine learning models (Weigandi, 2019), prediction of vegetation type (Tettamanti, 2019), a commercial opportunities map (Libertun, 2019), air quality analysis via machine learning models (Cavallucci, 2019) and car robbery analysis (Carpaneto, 2019), among others.
118
+
119
+ § 7. CONCLUSIONS
120
+
121
+ In this work we present the open ClusterAI pipeline used to teach machine learning to undergraduate students in Argentina. The proposed pipeline is composed of seven chapters: the first six are dedicated to theory and applications of multiple machine learning methods, while the last chapter is focused on student projects using the Buenos Aires open data portal. Student projects include a wide range of applications using real data from city sensors. Finally, students are encouraged to present their projects via poster sessions, technical reports and GitHub repositories, in order to promote open developments and to encourage the local community to use public data. Although the ClusterAI pipeline was designed for engineering students, it has also been validated with students from other disciplines such as social science, economics and biology.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/QoB8QGu5ZSL/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,213 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Teaching Uncertainty Quantification in Machine Learning through Use Cases
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Uncertainty in machine learning is not generally taught as general knowledge in Machine Learning course curricula. In this paper we propose a short curriculum for a course about uncertainty in machine learning, and complement the course with a selection of use cases, aimed to trigger discussion and let students play with the concepts of uncertainty in a programming setting. Our use cases cover the concept of output uncertainty, Bayesian neural networks and weight distributions, sources of uncertainty, and out of distribution detection. We expect that this curriculum and set of use cases motivates the community to adopt these important concepts for safety in AI.
8
+
9
+ ## 1. Introduction
10
+
11
+ Neural networks and machine learning models are ubiquitous in real-world applications, but in general model and data uncertainty are not well explored, and this propagates on how machine learning is taught at different levels. Uncertainty is an important concept that should be taught to all students interested in machine learning.
12
+
13
+ Overall, uncertainty quantification of machine learning models is not part of standard curricula at the undergraduate or graduate level; it is mostly present in advanced summer schools (like MLSS, EEML, DeepLearn, SMILES, etc.), with some exceptions in graduate courses aimed mostly at the theory of Bayesian NNs.
14
+
15
+ In this paper we aim to develop a concept for teaching uncertainty quantification in machine learning, first with a short curriculum, and then through different use cases, starting from why we need models with uncertainty and ending at out of distribution detection. We hope that this material can be used for easier planning of future courses. Teaching with clear use cases can be beneficial for students' learning (Lynn Jr, 1999), especially when they are combined with practical experience.
16
+
17
+ Uncertainty in ML is a subject that is heavy on probability and statistics, and this is a topic that might not be easy for some students. We believe that having clear use cases for this purpose can help students learn and to clarify concepts. These use cases can be implemented in code using standard machine learning frameworks like Keras, TensorFlow, and PyTorch.
18
+
19
+ ## 2. Curricula for UQ in ML
20
+
21
+ We first introduce a short curriculum template for an uncertainty in machine learning course. This could be a graduate-level course, requiring students to know basic neural networks, machine learning theory, and probability and statistics, as well as having appropriate coding skills in a programming language in order to understand and implement the use cases in a framework of their choice.
22
+
23
+ The overall curriculum is presented in Table 1. Any teacher should of course adapt this course to their institution or student body, and we encourage the teacher to also include seminar-style discussions including state of the art research in BNNs and uncertainty in ML, as this is still a very research heavy field.
24
+
25
+ The ultimate goal of this course is to enable students to perform research in this field, and to apply this knowledge in neighboring fields like Computer Vision, Reinforcement Learning, or Robotics.
26
+
27
+ ## 3. Use Cases
28
+
29
+ In this section we present a selection of use cases to teach concepts of uncertainty in machine learning settings. These represent what we think are the most difficult concepts for students to grasp, which motivate the application of use cases as teaching methodology.
30
+
31
+ ### 3.1. Output Uncertainty
32
+
33
+ The best use case to teach the concept of uncertainty at the output of a machine learning model is in a simple regression setting, as the output mean can be associated to the output of a classical model (without uncertainty), and the standard deviation of the output can be directly associated with the uncertainty in the output. In a classification setting with probabilities associated to each class, it is more difficult to directly see the effect of uncertainty in the model.
34
+
35
+ ---
36
+
37
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
38
+
39
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
40
+
41
+ ---
42
+
43
+ <table><tr><td>Unit</td><td>Content</td></tr><tr><td>Introduction to UQ</td><td>Point-wise outputs versus distribution outputs in ML models. Sources of uncertainty. Representations of output uncertainty. Applications and possible legal requirements. Relationship to Explainable AI. Connections to safety and trustworthiness in AI.</td></tr><tr><td>Bayesian NNs</td><td>Distribution over weights. Predictive posterior distribution. Inference using Bayes Rule.</td></tr><tr><td>Methods for UQ</td><td>Deep Ensembles (Lakshminarayanan et al., 2016), Monte Carlo methods like Dropout (Gal & Ghahramani, 2016) and DropConnect (Mobiny et al., 2019). For advanced courses, Gaussian Processes and Markov Chain Monte Carlo methods can also be included.</td></tr><tr><td>Metrics and Evaluation</td><td>Losses with uncertainty, entropy, calibration, reliability plots, and related calibration metrics (Guo et al., 2017).</td></tr><tr><td>Out of Distribution Detection</td><td>In distribution and Out of distribution data. Evaluation protocol with standard datasets (CIFAR10 vs SVHN, MNIST vs Fashion MNIST). Evaluation using histograms and ROC curves.</td></tr><tr><td>Challenges and Future Work</td><td>Scalability of BNNs, generalization of out of distribution detection, computational performance, datasets with uncertainty, and real-world applications (Valdenegro-Toro, 2021).</td></tr></table>
44
+
45
+ Table 1. Curriculum for a graduate course in Uncertainty Quantification in Machine Learning
46
+
47
+ Learning Objective. Students will learn about the difference between a classical machine learning model and one with output uncertainty.
48
+
49
+ Use Case. Students will implement a standard neural network using a framework of their or the teacher's choice. Students will generate data by sampling the following function:
50
+
51
+ $$
52
+ f\left( x\right) = \sin \left( x\right) + \epsilon \tag{1}
53
+ $$
54
+
55
+ $$
56
+ \epsilon \sim \mathcal{N}\left( {0,\sigma \left( x\right) }\right) \tag{2}
57
+ $$
58
+
59
+ $$
60
+ \sigma \left( x\right) = {0.15}{\left( 1 + {e}^{-x}\right) }^{-1} \tag{3}
61
+ $$
62
+
63
+ For the range $x \in \left\lbrack {-\pi ,\pi }\right\rbrack$. Two neural network models can be used: a standard neural network, and an ensemble of 5 neural networks (Lakshminarayanan et al., 2016), which is a simple method to estimate uncertainty. An example of this setting can be seen in Figure 1, where output uncertainty is represented as confidence intervals. Students can then visually compare their results, and reflect on how the standard neural network does not model the variation in the training data points, while the neural network with uncertainty does. This is especially noticeable as the standard deviation of the noise is variable, which is not captured by a standard
64
+
65
+ neural network.
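The data generation of Eqs. 1-3 and the ensemble's spread can be sketched as follows; sklearn's `MLPRegressor` is used here as a stand-in for the networks, and note that the full deep ensemble method of Lakshminarayanan et al. additionally predicts a variance per member, which this sketch omits:

```python
# Ensemble of 5 small regressors on f(x) = sin(x) + eps with
# input-dependent noise; member disagreement estimates uncertainty.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 400)
sigma = 0.15 / (1.0 + np.exp(-x))          # Eq. 3
y = np.sin(x) + rng.normal(0.0, sigma)     # Eqs. 1-2

ensemble = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                 random_state=s).fit(x.reshape(-1, 1), y)
    for s in range(5)
]

x_test = np.linspace(-np.pi, np.pi, 50).reshape(-1, 1)
preds = np.stack([m.predict(x_test) for m in ensemble])  # (5, 50)
mean, std = preds.mean(axis=0), preds.std(axis=0)
print(mean.shape, std.shape)  # (50,) (50,)
```

Plotting `mean` with a band of `mean ± 2 * std` reproduces the kind of confidence-interval figure described above.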
66
+
67
+ ### 3.2. Bayesian Neural Networks
68
+
69
+ Bayesian Neural Networks are difficult to understand conceptually since the formulation is heavy in probability, and weights are replaced by probability distributions. In this use case we simplify the concept for easy understanding.
70
+
71
+ Learning Objective. Students will learn the conceptual differences between a standard and a Bayesian neural network, and how these relate to producing uncertainty at the model output.
72
+
73
+ Use Case. Students will implement the forward pass of a simple neural network using numpy or a similar linear algebra framework. For a standard neural network, scalar or point-wise weights are used, and for a BNN, weights will be drawn from a given Gaussian distribution (the actual weight values for this use case do not matter). Sampling can be used to produce predictions from a BNN, by sampling a set of weights and producing a forward pass with a given input.
74
+
75
+ Students will compare the outputs given random weights for each of their networks, and observe that the BNN is a stochastic model: predictions vary for a given input, as different weights are sampled and propagated through the network to produce different outputs. These predictions are not completely random, however; they are samples of the predictive posterior distribution.
76
+
77
+ In comparison, the standard neural network has fixed predictions with a given input and weights, which cannot model uncertainty.
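A minimal numpy sketch of this comparison; as in the use case, the weight values are arbitrary, and the sizes and Gaussian parameters below are illustrative:

```python
# Forward pass with fixed point weights vs. weights sampled from Gaussians.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

def forward(x, W1, W2):
    return relu(x @ W1) @ W2

# Standard network: fixed weight matrices.
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

# BNN: each weight has a mean and standard deviation; one prediction
# samples a concrete set of weights and runs a normal forward pass.
mu1, mu2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
sd1, sd2 = 0.1 * np.ones((4, 8)), 0.1 * np.ones((8, 1))

def bnn_sample(x):
    return forward(x, rng.normal(mu1, sd1), rng.normal(mu2, sd2))

x = rng.normal(size=(1, 4))
det = [forward(x, W1, W2) for _ in range(3)]   # identical every time
sto = [bnn_sample(x) for _ in range(3)]        # varies per sample
print([d.item() for d in det], [s.item() for s in sto])
```

Repeating `bnn_sample` many times and averaging approximates the predictive posterior mean.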
78
+
79
+ ![01963a94-1a29-761d-a102-cea6a37a63dc_2_197_195_1288_357_0.jpg](images/01963a94-1a29-761d-a102-cea6a37a63dc_2_197_195_1288_357_0.jpg)
80
+
81
+ Figure 1. Comparison between classic and neural networks with output uncertainty for regression of $f\left( x\right) = \sin \left( x\right) + \epsilon$
82
+
83
+ ### 3.3. Bayesian NN Intractability
84
+
85
+ Connecting to the previous use case, it is well known that inference in BNNs is intractable, due to the high computational complexity required to estimate weight distributions, particularly for highly parameterized neural networks. In this use case we wish the student to form an intuition on why this is the case.
86
+
87
+ Learning Objective. Students will gain an intuition on why BNNs are intractable through a thought experiment, and validate it with a code implementation.
88
+
89
+ Use Case. As a thought experiment, students should think about the predictive posterior distribution (Eq 4), which integrates a term over the weights of the network to produce a distribution output.
90
+
91
+ $$
92
+ P\left( {y \mid x}\right) = {\int }_{w}P\left( {y \mid x, w}\right) P\left( w\right) {dw} \tag{4}
93
+ $$
94
+
95
+ For the experimental setting, students should implement a simple BNN using numpy, with randomly initialized weight distributions (Gaussian distributions can be used for simplicity), and then produce predictions with random data (similarly to the previous use case). Students are then asked to vary their network architectures, increasing depth from a few layers to over 50 layers, and to estimate and plot the computation time as network depth and the number of samples are varied. Students then analyze their results and comment on the applicability of BNNs in real-world applications, considering their computational costs.
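The timing experiment can be sketched as follows; width, depth values and sample counts are illustrative, and weights are sampled fresh at every layer to mimic Monte Carlo prediction in a BNN:

```python
# Measure sampling-based BNN prediction time as network depth grows.
import time
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

def sample_forward(x, depth, width=64, n_samples=50):
    """One MC estimate: sample every layer's weights, then propagate."""
    outs = []
    for _ in range(n_samples):
        h = x
        for _ in range(depth):
            W = rng.normal(0.0, 0.1, size=(h.shape[1], width))
            h = relu(h @ W)
        outs.append(h.mean())
    return np.mean(outs)

x = rng.normal(size=(1, 64))
times = {}
for depth in (2, 10, 50):
    t0 = time.perf_counter()
    sample_forward(x, depth)
    times[depth] = time.perf_counter() - t0
print(times)  # cost grows roughly linearly with depth
```

Students can extend the loop over `n_samples` as well to see the multiplicative cost of sampling.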
96
+
97
+ ### 3.4. Aleatoric vs Epistemic Uncertainty
98
+
99
+ Different sources of uncertainty are not always easy to see and learn intuitively. This use case tries to show the difference with a practical example in a regression setting.
100
+
101
+ Learning Objective. Students will learn the difference between aleatoric and epistemic uncertainty through a simple regression problem, and how different parts of the model contribute to these sources of uncertainty.
102
+
103
+ Use Case. We will use the same setting as the output uncertainty use case (Sec 3.1), but only a model with uncertainty. Since an ensemble is used to estimate uncertainty in this case, we will use the negative log-likelihood loss formulation to estimate epistemic uncertainty:
104
+
105
+ $$
106
+ L\left( {{y}_{n},{\mathbf{x}}_{n}}\right) = \frac{\log {\sigma }_{i}^{2}\left( {\mathbf{x}}_{n}\right) }{2} + \frac{{\left( {\mu }_{i}\left( {\mathbf{x}}_{n}\right) - {y}_{n}\right) }^{2}}{2{\sigma }_{i}^{2}\left( {\mathbf{x}}_{n}\right) } \tag{5}
107
+ $$
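The per-sample loss of Eq. 5 can be written directly in numpy; here `var` is the network's predicted variance $\sigma_i^2(\mathbf{x}_n)$ and `mu` its predicted mean:

```python
# Gaussian negative log-likelihood, up to an additive constant (Eq. 5).
import numpy as np

def gaussian_nll(y, mu, var):
    return 0.5 * np.log(var) + 0.5 * (mu - y) ** 2 / var

# A perfect mean with unit variance gives zero loss (constant dropped);
# being confident (small variance) while wrong is penalized heavily.
print(gaussian_nll(1.0, 1.0, 1.0))   # 0.0
print(gaussian_nll(1.0, 0.0, 0.01))  # large
```

In practice the network outputs $\log \sigma^2$ for numerical stability, but the penalty structure is the same.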
108
+
109
+ Students should train an ensemble of 5 networks, and then plot the predictions separately. First students plot the predictions of the mean output of each ensemble member (which produces epistemic uncertainty), and then separately plot the standard deviation outputs of each ensemble member (which estimate aleatoric uncertainty). This concept is shown in Figure 2.
110
+
111
+ Students then compare both kinds of predictions and try to explain the differences, and how they relate to the epistemic and aleatoric sources of uncertainty. A plot of the training data might also help students visualize aleatoric uncertainty.
112
+
113
+ ### 3.5. Out of Distribution Detection
114
+
115
+ Out of distribution detection entails detecting input samples outside of the training set distribution, through output uncertainty or other confidence measures. In this setting we present two use cases.
116
+
117
+ Learning Objective. Students will learn how to perform and evaluate out of distribution detection using standard image classification datasets and a regression toy example, and will gain intuition on how uncertainty enables the out of distribution detection task.
118
+
119
+ Classification Use Case. Using an appropriate neural network framework, students will implement and train a BNN (or an approximation) on the SVHN dataset (ID, in-distribution), and evaluate it on the train and test splits. Then students are asked to make predictions using their model on the CIFAR10 test set (OOD, out-of-distribution) and to look at the class probabilities that their model produces. Entropy can be used to obtain a single measure for each sample,
120
+
+
123
+ ![01963a94-1a29-761d-a102-cea6a37a63dc_3_153_195_631_708_0.jpg](images/01963a94-1a29-761d-a102-cea6a37a63dc_3_153_195_631_708_0.jpg)
124
+
125
+ Figure 2. Comparison between Epistemic and Aleatoric Uncertainty in the Toy Regression example
126
+
161
+ and then compare the ID vs OOD entropy values using a histogram. The use case can be completed by obtaining a threshold between ID and OOD entropy distributions using an ROC curve, in order to perform out of distribution detection in the wild.
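The entropy-and-threshold evaluation can be sketched with synthetic predictions; the arrays below stand in for a trained model's softmax outputs on SVHN (ID) and CIFAR10 (OOD), so no actual training is performed:

```python
# Entropy histograms and ROC-based thresholding for OOD detection.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1)

# Synthetic stand-ins: ID predictions are sharper (larger logits) than OOD.
p_id = softmax(rng.normal(0.0, 4.0, size=(500, 10)))
p_ood = softmax(rng.normal(0.0, 1.0, size=(500, 10)))

h = np.concatenate([entropy(p_id), entropy(p_ood)])
labels = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = OOD

auc = roc_auc_score(labels, h)
fpr, tpr, thresholds = roc_curve(labels, h)
best = thresholds[np.argmax(tpr - fpr)]  # Youden's J statistic
print(f"AUC = {auc:.2f}, entropy threshold = {best:.2f}")
```

With a real model, `p_id` and `p_ood` are simply replaced by the averaged MC predictions on each test set.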
162
+
163
+ Regression Use Case. For a toy example in regression, we use the same setting as Sec 3.1, keeping the training set $x \in$ $\left\lbrack {-\pi ,\pi }\right\rbrack$ , and introducing an OOD set of $x \in \left\lbrack {-{2\pi }, - \pi }\right\rbrack \cup$ $\left\lbrack {\pi ,{2\pi }}\right\rbrack$ . Students then plot the predictions from their model, noting the values on the two datasets (ID and OOD). A sample result can be seen in Figure 3.
164
+
165
+ Students should compare the output standard deviation produced by their model in the ID and OOD datasets. They should observe that uncertainty as predicted by the output standard deviation should be higher in the OOD data than in the ID data, which indicates that the model is extrapolating. Students can add additional evidence of extrapolation by plotting the function $f\left( x\right) = \sin \left( x\right)$ which is the true function that generated the training data, and confirm that the model predictions in the OOD data are very incorrect when compared to the true function, while predictions in the ID data are correct inside the training range $\left( \left\lbrack {-\pi ,\pi }\right\rbrack \right)$ . Error metrics like mean absolute error can be used to confirm this difference.
166
+
167
+ The teacher can also show that uncertainty in the OOD set should be proportional to the distance (in input space) from
168
+
169
+ the sample to the edge of the OOD set, and that this proportionality is expected for proper uncertainty quantification.
170
+
171
+ ![01963a94-1a29-761d-a102-cea6a37a63dc_3_900_196_621_302_0.jpg](images/01963a94-1a29-761d-a102-cea6a37a63dc_3_900_196_621_302_0.jpg)
172
+
173
+ Figure 3. Out of Distribution Detection in the Toy Regression Example. Values $x > \pi$ and $x < - \pi$ are out of distribution in this example, triggering large epistemic uncertainty that can be used to detect this condition.
174
+
175
+ Misconceptions. Students might be confused that some OOD examples have low uncertainty and are easily confused with ID examples.
176
+
177
+ ## 4. Conclusions and Future Work
178
+
179
+ In this paper we have presented a small course curriculum and a selection of use cases to teach students about uncertainty quantification in machine learning models. We hope that this work can convince the community of the importance of teaching uncertainty quantification and BNNs to students learning about machine learning, and of how it relates to the concept of safety in artificial intelligence.
180
+
181
+ Future course contents and use cases can be centered on specific applications of machine learning and artificial intelligence, such as Computer Vision, Robotics, or Autonomous Systems. There is strong demand for connecting theoretical fields (BNNs in particular) to practical applications as a way to lead future research.
182
+
183
+ ## References
184
+
185
+ Gal, Y. and Ghahramani, Z. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050-1059. PMLR, 2016.
186
+
187
+ Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
188
+
189
+ Lakshminarayanan, B., Pritzel, A., and Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474, 2016.
190
+
191
+ Lynn Jr, L. E. Teaching and learning with cases: A guidebook. CQ Press, 1999.
192
+
193
+ Mobiny, A., Nguyen, H. V., Moulik, S., Garg, N., and Wu, C. C. Dropconnect is effective in modeling uncertainty of bayesian deep networks. arXiv preprint arXiv:1906.04569, 2019.
196
+
197
+ Valdenegro-Toro, M. I find your lack of uncertainty in computer vision disturbing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 1263-1272, June 2021.
198
+
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/QoB8QGu5ZSL/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,199 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § TEACHING UNCERTAINTY QUANTIFICATION IN MACHINE LEARNING THROUGH USE CASES
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Uncertainty in machine learning is not generally taught as general knowledge in Machine Learning course curricula. In this paper we propose a short curriculum for a course about uncertainty in machine learning, and complement the course with a selection of use cases, aimed to trigger discussion and let students play with the concepts of uncertainty in a programming setting. Our use cases cover the concept of output uncertainty, Bayesian neural networks and weight distributions, sources of uncertainty, and out of distribution detection. We expect that this curriculum and set of use cases motivates the community to adopt these important concepts for safety in AI.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ Neural networks and machine learning models are ubiquitous in real-world applications, but in general model and data uncertainty are not well explored, and this propagates on how machine learning is taught at different levels. Uncertainty is an important concept that should be taught to all students interested in machine learning.
12
+
13
+ Overall, uncertainty quantification of machine learning models is not part of standard curricula at the undergraduate or graduate level; it is mostly present in advanced summer schools (like MLSS, EEML, DeepLearn, SMILES, etc.), with some exceptions in graduate courses aimed mostly at the theory of Bayesian NNs.
14
+
15
+ In this paper we aim to develop a concept for teaching uncertainty quantification in machine learning, first with a short curriculum, and then through different use cases, starting from why we need models with uncertainty and ending at out of distribution detection. We hope that this material can be used for easier planning of future courses. Teaching with clear use cases can be beneficial for students' learning (Lynn Jr, 1999), especially when they are combined with practical experience.
16
+
17
+ Uncertainty in ML is a subject that is heavy on probability and statistics, and this is a topic that might not be easy for some students. We believe that having clear use cases for this purpose can help students learn and to clarify concepts. These use cases can be implemented in code using standard machine learning frameworks like Keras, TensorFlow, and PyTorch.
18
+
19
+ § 2. CURRICULA FOR UQ IN ML
20
+
21
+ We first introduce a short curriculum template for an uncertainty in machine learning course. This could be a graduate-level course, requiring students to know basic neural networks, machine learning theory, and probability and statistics, as well as having appropriate coding skills in a programming language in order to understand and implement the use cases in a framework of their choice.
22
+
23
+ The overall curriculum is presented in Table 1. Any teacher should of course adapt this course to their institution or student body, and we encourage the teacher to also include seminar-style discussions including state of the art research in BNNs and uncertainty in ML, as this is still a very research heavy field.
24
+
25
+ The ultimate goal of this course is to enable students to perform research in this field, and to apply this knowledge in neighboring fields like Computer Vision, Reinforcement Learning, or Robotics.
26
+
27
+ § 3. USE CASES
28
+
29
+ In this section we present a selection of use cases to teach concepts of uncertainty in machine learning settings. These represent what we think are the most difficult concepts for students to grasp, which motivate the application of use cases as teaching methodology.
30
+
31
+ § 3.1. OUTPUT UNCERTAINTY
32
+
33
+ The best use case to teach the concept of uncertainty at the output of a machine learning model is in a simple regression setting, as the output mean can be associated to the output of a classical model (without uncertainty), and the standard deviation of the output can be directly associated with the uncertainty in the output. In a classification setting with probabilities associated to each class, it is more difficult to directly see the effect of uncertainty in the model.
34
+
35
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
36
+
37
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
38
+
39
+ | Unit | Content |
+ | --- | --- |
+ | Introduction to UQ | Point-wise outputs versus distribution outputs in ML models. Sources of uncertainty. Representations of output uncertainty. Applications and possible legal requirements. Relationship to Explainable AI. Connections to safety and trustworthiness in AI. |
+ | Bayesian NNs | Distribution over weights. Predictive posterior distribution. Inference using Bayes Rule. |
+ | Methods for UQ | Deep Ensembles (Lakshminarayanan et al., 2016), Monte Carlo methods like Dropout (Gal & Ghahramani, 2016) and DropConnect (Mobiny et al., 2019). For advanced courses, Gaussian Processes and Markov Chain Monte Carlo methods can also be included. |
+ | Metrics and Evaluation | Losses with uncertainty, entropy, calibration, reliability plots, and related calibration metrics (Guo et al., 2017). |
+ | Out of Distribution Detection | In-distribution and out-of-distribution data. Evaluation protocol with standard datasets (CIFAR10 vs SVHN, MNIST vs Fashion MNIST). Evaluation using histograms and ROC curves. |
+ | Challenges and Future Work | Scalability of BNNs, generalization of out of distribution detection, computational performance, datasets with uncertainty, and real-world applications (Valdenegro-Toro, 2021). |
+
+ Table 1. Curriculum for a graduate course in Uncertainty Quantification in Machine Learning
64
+
65
+ Learning Objective. Students will learn about the difference between a classical machine learning model and one with output uncertainty.
66
+
67
+ Use Case. Students will implement a standard neural network using a framework of their or the teacher's choice. Students will generate data by sampling the following function:
68
+
69
+ $$
70
+ f\left( x\right) = \sin \left( x\right) + \epsilon \tag{1}
71
+ $$
72
+
73
+ $$
74
+ \epsilon \sim \mathcal{N}\left( {0,\sigma \left( x\right) }\right) \tag{2}
75
+ $$
76
+
77
+ $$
78
+ \sigma \left( x\right) = {0.15}{\left( 1 + {e}^{-x}\right) }^{-1} \tag{3}
79
+ $$
80
+
81
+ For the range $x \in \left\lbrack {-\pi ,\pi }\right\rbrack$. Two neural network models can be used: one is a standard neural network and the other is an ensemble of 5 neural networks (Lakshminarayanan et al., 2016), which is a simple method to estimate uncertainty. An example of this setting can be seen in Figure 1, where output uncertainty is represented as confidence intervals. Students can then visually compare their results, and reflect on how the standard neural network does not model the variation in the training data points, while the neural network with uncertainty does. This is especially noticeable as the standard deviation of the noise is variable, which is not captured by a standard neural network.
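As a sketch of this setup, the data-generating equations (1)-(3) take only a few lines of numpy. For brevity, an ensemble of degree-5 polynomial regressors fit on bootstrap resamples stands in here for the 5 neural networks (an illustrative substitution, not the paper's exact method); the aggregation of member predictions into a mean and a standard deviation is the same either way:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(x):
    # Eq. (3): input-dependent noise standard deviation
    return 0.15 / (1.0 + np.exp(-x))

def sample_data(n=200):
    # Eqs. (1)-(2): y = sin(x) + eps, eps ~ N(0, sigma(x))
    x = rng.uniform(-np.pi, np.pi, size=n)
    return x, np.sin(x) + rng.normal(0.0, sigma(x))

x, y = sample_data()

# Ensemble of 5 members, each fit on a bootstrap resample of the data.
members = []
for _ in range(5):
    idx = rng.integers(0, len(x), size=len(x))
    members.append(np.polyfit(x[idx], y[idx], 5))

x_test = np.linspace(-np.pi, np.pi, 50)
preds = np.stack([np.polyval(c, x_test) for c in members])
mean, std = preds.mean(axis=0), preds.std(axis=0)  # prediction and uncertainty
```

Plotting `mean` with a band of `mean ± 2 * std` reproduces the confidence-interval visualization described above.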
84
+
85
+ § 3.2. BAYESIAN NEURAL NETWORKS
86
+
87
+ Bayesian Neural Networks are difficult to understand conceptually since the formulation is heavy in probability, and weights are replaced by probability distributions. In this use case we simplify the concept for easier understanding.
88
+
89
+ Learning Objective. Students will learn the conceptual differences between a standard and a Bayesian neural network, and how these relate to producing uncertainty at the model output.
90
+
91
+ Use Case. Students will implement the forward pass of a simple neural network using numpy or a similar linear algebra framework. For a standard neural network, scalar or point-wise weights are used, and for a BNN, weights will be drawn from a given Gaussian distribution (the actual weight values for this use case do not matter). Sampling can be used to produce predictions from a BNN, by sampling a set of weights and producing a forward pass with a given input.
92
+
93
+ Students will compare the outputs given random weights for each of their networks, and observe how the BNN is a stochastic model, meaning that predictions vary for a given input, as different weights are sampled and propagated through the network to produce different outputs; these predictions are not completely random, but are samples of the predictive posterior distribution.
94
+
95
+ In comparison, the standard neural network has fixed predictions with a given input and weights, which cannot model uncertainty.
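A minimal numpy sketch of this use case might look as follows; the architecture (two layers, ReLU) and the weight scale are arbitrary illustrative choices, since, as noted above, the actual weight values do not matter here:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(x, weights):
    # Two-layer forward pass: h = relu(W1 x), y = W2 h
    w1, w2 = weights
    h = np.maximum(0.0, w1 @ x)
    return w2 @ h

x = rng.normal(size=4)

# Standard network: fixed point-wise weights -> deterministic output.
fixed = (rng.normal(size=(8, 4)), rng.normal(size=(1, 8)))

# "BNN": each weight has a Gaussian distribution with mean mu and std s.
mu1, mu2 = rng.normal(size=(8, 4)), rng.normal(size=(1, 8))
s = 0.1

def sample_bnn_prediction():
    # Sample one concrete set of weights, then do an ordinary forward pass.
    w1 = mu1 + s * rng.normal(size=mu1.shape)
    w2 = mu2 + s * rng.normal(size=mu2.shape)
    return forward(x, (w1, w2))

# Repeated predictions differ: they are samples of the predictive posterior.
samples = np.array([sample_bnn_prediction() for _ in range(100)])
print(samples.mean(), samples.std())  # predictive mean and spread
```

Running `forward(x, fixed)` twice returns the identical value, while `sample_bnn_prediction()` varies from call to call, which is exactly the contrast the use case asks students to observe.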
96
+
97
98
+
99
+ Figure 1. Comparison between classic and neural networks with output uncertainty for regression of $f\left( x\right) = \sin \left( x\right) + \epsilon$
100
+
101
+ § 3.3. BAYESIAN NN INTRACTABILITY
102
+
103
+ Connecting to the previous use case, it is well known that inference in BNNs is intractable, due to the high computational complexity required to estimate weight distributions, particularly for highly parameterized neural networks. In this use case we wish the student to form an intuition on why this is the case.
104
+
105
+ Learning Objective. Students will build an intuition for why BNNs are intractable through a thought experiment, and validate it with a code implementation.
106
+
107
+ Use Case. As a thought experiment, students should think about the predictive posterior distribution (Eq 4), which integrates over the weights of the network to produce a distribution output.
108
+
109
+ $$
110
+ P\left( {y \mid x}\right) = {\int }_{w}P\left( {y \mid x,w}\right) P\left( w\right) {dw} \tag{4}
111
+ $$
112
+
113
+ For the experimental setting, students should implement a simple BNN using numpy, with randomly initialized weight distributions (Gaussian distributions can be used for simplicity), and then produce predictions with random data (similarly to the previous use case). Students are then asked to vary their network architectures, increasing depth from a few layers to over 50 layers, and to estimate and plot the computation time as network depth and number of samples are varied. Students then analyze their results and comment on the applicability of BNNs for real-world applications, considering their computational costs.
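One possible numpy sketch of this timing experiment is below. The layer width of 64 and the He-style weight scaling are added assumptions (the scaling only keeps activations finite at depth 50); the point is that sampling-based prediction cost grows with both depth and the number of weight samples:

```python
import time
import numpy as np

rng = np.random.default_rng(2)

def bnn_forward(x, mus, sigma=0.1, n_samples=10):
    # Monte Carlo predictive distribution: sample weights, propagate input.
    outs = []
    for _ in range(n_samples):
        h = x
        for mu in mus:
            w = mu + sigma * rng.normal(size=mu.shape)
            h = np.maximum(0.0, w @ h)
        outs.append(h)
    return np.stack(outs)

x = rng.normal(size=64)
times = {}
for depth in (2, 10, 50):
    # He-style scaling so deep random ReLU stacks stay numerically stable.
    mus = [rng.normal(size=(64, 64)) * np.sqrt(2 / 64) for _ in range(depth)]
    t0 = time.perf_counter()
    out = bnn_forward(x, mus)
    times[depth] = time.perf_counter() - t0
    print(depth, times[depth])  # computation time grows with depth
```

The same loop can be repeated with `n_samples` in (10, 100, 1000) to show the multiplicative cost of Monte Carlo sampling on top of depth.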
114
+
115
+ § 3.4. ALEATORIC VS EPISTEMIC UNCERTAINTY
116
+
117
+ Different sources of uncertainty are not always easy to see and learn intuitively. This use case tries to show the difference with a practical example in a regression setting.
118
+
119
+ Learning Objective. Students will learn the difference between aleatoric and epistemic uncertainty through a simple regression problem, and how different parts of the model contribute to these sources of uncertainty.
120
+
121
+ Use Case. We will use the same setting as the output uncertainty use case (Sec 3.1), but using only a model with uncertainty. Since an ensemble is used to estimate uncertainty in this case, we will use the negative log-likelihood loss formulation to estimate epistemic uncertainty:
122
+
123
+ $$
124
+ L\left( {{y}_{n},{\mathbf{x}}_{n}}\right) = \frac{\log {\sigma }_{i}^{2}\left( {\mathbf{x}}_{n}\right) }{2} + \frac{{\left( {\mu }_{i}\left( {\mathbf{x}}_{n}\right) - {y}_{n}\right) }^{2}}{2{\sigma }_{i}^{2}\left( {\mathbf{x}}_{n}\right) } \tag{5}
125
+ $$
126
+
127
+ Students should train an ensemble of 5 networks, and then plot the predictions separately. First students plot the predictions of the mean output of each ensemble member (which produces epistemic uncertainty), and then separately plot the standard deviation outputs of each ensemble member (which estimate aleatoric uncertainty). This concept is shown in Figure 2.
128
+
129
+ Students then compare both kinds of predictions and try to explain the differences, and how they relate to the epistemic and aleatoric sources of uncertainty. A plot of the training data might also help students visualize aleatoric uncertainty.
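Eq. (5) and the epistemic/aleatoric split can be sketched as follows. The log-variance parameterization (a common numerical-stability trick) and the mock ensemble outputs are illustrative assumptions standing in for trained networks, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(3)

def nll(y, mu, log_var):
    # Eq. (5), with sigma^2 parameterized as exp(log_var) for stability
    return 0.5 * log_var + 0.5 * (mu - y) ** 2 / np.exp(log_var)

x = np.linspace(-np.pi, np.pi, 50)
# Mock outputs of a 5-member ensemble: each member predicts a mean
# mu_i(x) and a standard deviation sigma_i(x) at every input.
mus = np.sin(x) + 0.05 * rng.normal(size=(5, 50))
sigmas = 0.1 + 0.02 * rng.random(size=(5, 50))

epistemic = mus.std(axis=0)                      # disagreement between members
aleatoric = np.sqrt((sigmas ** 2).mean(axis=0))  # average predicted noise

print(nll(np.sin(x), mus[0], np.log(sigmas[0] ** 2)).mean())
```

Plotting each row of `mus` separately (and then each row of `sigmas`) reproduces the two comparisons the use case asks students to make.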
130
+
131
+ § 3.5. OUT OF DISTRIBUTION DETECTION
132
+
133
+ Out of distribution detection entails detecting input samples outside of the training set distribution, through output uncertainty or other confidence measures. In this setting we present two use cases.
134
+
135
+ Learning Objective. Students will learn how to perform and evaluate out of distribution detection using standard image classification datasets and a toy regression example, and gain intuition about how uncertainty enables the out of distribution detection task.
136
+
137
+ Classification Use Case. Using an appropriate neural network framework, students will implement and train a BNN (or an approximation) on the SVHN dataset (ID, in-distribution), and evaluate on the train and test splits. Then students are asked to make predictions with their model on the CIFAR10 test set (OOD, out-of-distribution) and to look at the class probabilities that their model produces. Entropy can be used to obtain a single measure for each sample,
138
+
139
+ Figure 2. Comparison between Epistemic and Aleatoric Uncertainty in the Toy Regression example
+
+ and then compare the ID vs OOD entropy values using a histogram. The use case can be completed by obtaining a threshold between the ID and OOD entropy distributions using an ROC curve, in order to perform out of distribution detection in the wild.
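The entropy and thresholding steps can be sketched with mock logits standing in for a trained model's outputs (an illustrative assumption); the AUC of the resulting ROC curve is computed via the rank statistic so no extra libraries are needed:

```python
import numpy as np

rng = np.random.default_rng(4)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(probs):
    # Shannon entropy of predicted class probabilities, per sample
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

# Mock logits: ID samples are confident (one peaked logit), OOD are not.
id_logits = 5.0 * np.eye(10)[rng.integers(0, 10, 500)] + rng.normal(size=(500, 10))
ood_logits = rng.normal(size=(500, 10))

h_id, h_ood = entropy(softmax(id_logits)), entropy(softmax(ood_logits))
# Histograms of h_id and h_ood visualize the separation between the sets.

# AUC of "entropy predicts OOD" via the Mann-Whitney rank statistic.
scores = np.concatenate([h_id, h_ood])
labels = np.concatenate([np.zeros(500), np.ones(500)])
order = scores.argsort()
ranks = np.empty_like(order, dtype=float)
ranks[order] = np.arange(1, len(scores) + 1)
auc = (ranks[labels == 1].sum() - 500 * 501 / 2) / (500 * 500)
print(auc)  # high values mean entropy separates ID from OOD
```

A detection threshold can then be read off the ROC curve, e.g. the entropy value that balances true and false positive rates for the intended application.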
180
+
181
+ Regression Use Case. For a toy example in regression, we use the same setting as Sec 3.1, keeping the training set $x \in$ $\left\lbrack {-\pi ,\pi }\right\rbrack$ , and introducing an OOD set of $x \in \left\lbrack {-{2\pi }, - \pi }\right\rbrack \cup$ $\left\lbrack {\pi ,{2\pi }}\right\rbrack$ . Students then plot the predictions from their model, noting the values on the two datasets (ID and OOD). A sample result can be seen in Figure 3.
182
+
183
+ Students should compare the output standard deviation produced by their model in the ID and OOD datasets. They should observe that uncertainty as predicted by the output standard deviation should be higher in the OOD data than in the ID data, which indicates that the model is extrapolating. Students can add additional evidence of extrapolation by plotting the function $f\left( x\right) = \sin \left( x\right)$ which is the true function that generated the training data, and confirm that the model predictions in the OOD data are very incorrect when compared to the true function, while predictions in the ID data are correct inside the training range $\left( \left\lbrack {-\pi ,\pi }\right\rbrack \right)$ . Error metrics like mean absolute error can be used to confirm this difference.
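A minimal sketch of this ID/OOD error comparison, with a degree-5 polynomial fit on the ID range standing in for the trained model (an illustrative substitution; any regressor trained only on $[-\pi, \pi]$ shows the same effect):

```python
import numpy as np

rng = np.random.default_rng(5)

# Train only on the ID range [-pi, pi], with mild observation noise.
x_id = rng.uniform(-np.pi, np.pi, 200)
coef = np.polyfit(x_id, np.sin(x_id) + 0.05 * rng.normal(size=200), 5)

x_in = np.linspace(-np.pi, np.pi, 100)
x_out = np.concatenate([np.linspace(-2 * np.pi, -np.pi, 50),
                        np.linspace(np.pi, 2 * np.pi, 50)])

# Mean absolute error against the true function f(x) = sin(x).
mae_in = np.mean(np.abs(np.polyval(coef, x_in) - np.sin(x_in)))
mae_out = np.mean(np.abs(np.polyval(coef, x_out) - np.sin(x_out)))
print(mae_in, mae_out)  # extrapolation error dwarfs in-distribution error
```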
184
+
185
+ The teacher can also show that uncertainty in the OOD set should be proportional to the distance (in input space) from the sample to the edge of the OOD set, and that this proportionality is expected for proper uncertainty quantification.
188
+
189
190
+
191
+ Figure 3. Out of Distribution Detection in the Toy Regression Example. Values $x > \pi$ and $x < - \pi$ are out of distribution in this example, which triggers large epistemic uncertainty which can be used to detect this condition.
192
+
193
+ Misconceptions. Students might be confused by the fact that some OOD examples have low uncertainty and are easily mistaken for ID examples. This
194
+
195
+ § 4. CONCLUSIONS AND FUTURE WORK
196
+
197
+ In this paper we have presented a small course curriculum and a selection of use cases to teach students about uncertainty quantification in machine learning models. We hope that this work can convince the community of the importance of teaching uncertainty quantification and BNNs to students learning about machine learning, and of how it relates to the concept of safety in artificial intelligence.
198
+
199
+ Future course contents and use cases can be centered on specific applications of machine learning and artificial intelligence, such as Computer Vision, Robotics, or Autonomous Systems. There is strong demand to connect theoretical fields (BNNs in particular) to practical applications as a way to lead future research.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/WXdE6lC7-n/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,127 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Feedbacks on a Machine Learning Curriculum for an International Audience
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Teaching Machine Learning (ML) can be challenging due to the breadth of the subject and the diversity of audience that might be interested in it. In this article, we collectively place ourselves both at the curriculum level and at the lesson level, describing existing practices and sketching directions for improvement. We first describe a curriculum for ML practitioners that involves different constraints including the variety of audience, both in terms of background and in terms of learning goals. We also explain how some of the lessons relate to the teaching principles that are pushed forward for example by The Carpentries and how the future of the curriculum could be reshaped.
8
+
9
+ ## 1. Introduction
10
+
11
+ With the maturation of the field, learning fundamentals of Machine Learning (ML) can become a key differentiator for learners and researchers of all domains. Indeed, many application domains and research domains are being structurally transformed by the availability of data and the use of ML. In this context, effectively teaching ML becomes of utmost importance so that ML practitioners understand the tools they manipulate and are able to creatively and robustly use them. Teaching ML, however, presents many challenges due to this pervasive applicability.
12
+
13
+ Probably one of the biggest challenges is the heterogeneity of the possible audience. This compounds with the fact that ML is at the crossroads of computer science, maths and statistics. As in the traditional lessons from The Carpentries, a part of the target audience might have no or very limited programming background. The same goes for mathematical background, where math anxiety is a common issue across disciplines. As far as statistics are concerned, almost all scientists would benefit from a better statistical background. In Section 2, presenting an existing curriculum, we describe the target audience(s) we consider and how we have mitigated the differences in background.
14
+
15
+ Another challenge is that of deciding which topics should be taught. While a mostly practical approach can be tempting, having only a shallow understanding of ML methods often leads to unsatisfying results, wasted time, and an inability to innovate. Indeed, in addition to practice, the learners must be able to analyze and understand why, when, and how the methods they use work. In Section 2, we additionally present our target goals and guidelines in the curriculum. We concisely focus, in Section 3, on some illustrative practices in our curriculum, and group in Section 4 the results of our reflection on the evolution of the curriculum.
16
+
17
+ ## 2. A Modular Machine Learning Curriculum
18
+
19
+ In this section, we sketch an existing 3-semester ML curriculum in a university context. We focus on the core curriculum but also present how parts of it are adapted and fine-tuned to different audiences with different goals. While it surely can be improved a lot, this curriculum can be used as a basis for reasoning about the topics of interest in an ML curriculum.
20
+
21
+ ### 2.1. Audience
22
+
23
+ The core curriculum has ML at its heart (together with symbolic Artificial Intelligence and Data Mining). Examining hundreds of applications every year, we see more and more applicants who have already followed some introductory MOOC on Machine Learning. Despite this, while learners are supposed to be at ease with programming, their actual programming level shows great discrepancies. Similarly, an introductory-level statistical background is presupposed but, in practice, its mastery is variable.
24
+
25
+ The audience is also international, with people coming from tens of different countries; all lessons are in English, which is not the mother tongue of most of the audience, adding an additional difficulty to the understanding. This difficulty is outweighed by its benefits. The diversity of linguistic, cultural and work experience ${}^{1}$ backgrounds makes the learning experience richer and more open in many respects.
26
+
27
+ ---
28
+
29
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
30
+
31
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
32
+
33
+ ${}^{1}$ Applicants range from students to people having worked for a few years in various domains to people coming back to studies after decades of work in the industry.
34
+
35
+ ---
36
+
37
+ Secondary audience(s). In addition to the main audience, other audiences reuse the presented lessons. Some lessons are directly shared (mixing learners and thus increasing the benefits of heterogeneity), some are tuned to the audience to cope with background discrepancies. These audiences include learners that aspire to become software engineers with a data science background, but also learners from other disciplines (e.g., Optics, Physics) that want to learn ML.
38
+
39
+ ### 2.2. General Goals and Guiding Principles
40
+
41
+ We aim to prepare the learners for ML research and innovation while acquiring the necessary skills to solve data-driven problems. So, the ultimate goal is to provide a strong background on the entire ML pipeline, from the raw data to the final ML software and its impact on the application domain.
44
+
45
+ Preparing for research and innovation. To be able to make advances in research or to push the bleeding edge of innovation, superficial knowledge is insufficient. We aim at making our learners able to imagine/visualize the data, to understand how each type of model processes these data, and to understand the dynamics of the training process of these models. Such deep understanding makes it possible to diagnose unexpected problems and limitations of the models in particular situations. Most importantly, we believe it is mandatory in order to improve existing methods, propose novel models, or formalize a new problem to be solved.
46
+
47
+ Learning to carry out a data project. All ML methods are driven by the data and the task at hand. One of the objectives is to make the learners aware that raw data is by definition not smooth and that analyzing, understanding and pre-processing the data are mandatory tasks. Given the data (and their characteristics), the learners have to be able to mathematically formalize the issue to propose or to design ML solutions. A key point is the ability to perform experiments to validate the methods (and the results) and to design a relevant and rigorous experimental protocol.
48
+
49
+ Acquiring transversal skills. At any stage, three crosscutting aspects are very important for our learners:
50
+
51
+ - working in teams especially when software implementation plays a central role.
52
+
53
+ - presenting their work orally or in writing.
54
+
55
+ - understanding the broader impact of ML design choices, like fairness, privacy or environmental impact.
56
+
57
+ ### 2.3. ML Curriculum: a quick tour
58
+
59
+ The current embodiment of the curriculum is sketched in Fig. 1. With the increase in applications (and thus their diversity) and the maturation of the domain, the curriculum must continuously evolve. This evolution is driven by systematic feedback gathering from learners and teachers, but also from the research labs and companies that hire our learners. The curriculum aims at achieving the presented goals, mainly by covering a broad variety of machine learning topics in great detail and by including a lot of projects. Some lessons are more targeted at strengthening the computer science background, while others focus on the transversal skills.
62
+
63
+ ![01963a95-f16a-77dd-9468-657e09d32b87_1_896_188_701_949_0.jpg](images/01963a95-f16a-77dd-9468-657e09d32b87_1_896_188_701_949_0.jpg)
64
+
65
+ Figure 1. Summarizing view of an existing 3-semester university curriculum for Machine Learning, the fourth semester being dedicated to an internship (in a research facility or in a company).
66
+
67
+ ## 3. Selected Lessons and Practices
68
+
69
+ In this section, we focus on some elements of the current curriculum, starting from the most general ones to the ones that are most specific to Machine Learning (ML).
70
+
71
+ Distributed learning. We intentionally cover some topics repeatedly across the curriculum, be they fundamental skills like probability, optimization, and algebra, or more specific ML models (such as SVMs). Such an approach helps consolidate the understanding of these concepts. At a finer scale, many lessons spread a subject over a semester, first presenting a theoretical concept that is soon used in practical sessions and later in a project.
72
+
73
+ Progressive unscaffolding. Scaffolding, which lowers the cognitive load by providing learners with a structure for a solution to a problem, has been shown to be effective. We follow a progressive unscaffolding principle in the learners' challenges: at first, most of the code is provided, either in the form of jupyter notebooks (with varying amounts left to complete) or of code to copy and adapt; then, we might ask learners to implement or study a given algorithm; finally, we end up with very open projects where learners need to make most of the decisions, starting from scratch.
74
+
75
+ Heterogeneity and team work. One specificity of our program is the heterogeneity of the audience, which is an interesting playground for developing specific skills. Indeed, in their future professional life, learners will not always work alone, or with competent and friendly colleagues whom they know well, or on pleasant tasks started from scratch. Part of the instruction must therefore prepare learners for the real-life situations they may encounter in their professional lives. In many lessons, learners are thus asked to work in larger or smaller groups, on more or less complex projects, of longer or shorter duration, in more or less complex environments. The interesting skills worked on here are: the ability to carry out several projects at once, to plan each of them, to build an efficient project team with colleagues whom they do not necessarily know and whose level they do not necessarily know at the beginning, etc. Other skills that are more related to interpersonal skills are also developed: the ability to adapt to the working methods of colleagues, to work under pressure, to communicate with others to avoid clashes, etc.
76
+
77
+ Problem based learning. During the curriculum, several practical projects are proposed. An important one is proposed in the last semester, in which the learners participate in a national data challenge proposed by industrial partners and open to different universities. The topic and the type of data change each year: anomaly detection from sensor data provided by Airbus, temperature prediction and uncertainty estimation, or identifying the presence of a wind turbine from satellite images, to name a few. These projects last one semester, and allow learners to discover methods or applied problems that are not necessarily covered in other courses. They participate in teams (of 2 to 4) and are also encouraged to discuss with other teams. The output obtained by learners on a test set can be submitted online at any time during the semester, and the results are displayed and updated on a public leaderboard at every new submission. At the end of the challenge, 3 teams from the top 10 are invited to give a presentation during a day dedicated to the results summary and open to all participants. We also ask the learners to write a small scientific article explaining how they obtained their results (state of the art, methods tested, chosen solution and experimental details).
78
+
79
+ To model.train( )... and beyond! Most ML courses aim at making learners able to train a prediction model in a Jupyter notebook (and later at submitting jobs to a GPU cluster). By doing this, learners feel the need for early data analysis and data wrangling, and then get first-hand experience with various ML techniques. There is however a gap between training a model in a notebook interface and making it available in a corporate software infrastructure. We therefore include in our curriculum wider projects in which the learners learn how to glue together the different pieces needed to exploit a model in a distributed software architecture. This usually includes extracting datasets from a Hadoop cluster, wrangling data on the fly, storing the processed data in a NoSQL database, and then using their prediction model, published as a microservice, which stores its output in the NoSQL database as well. We believe that such projects let the learners rehearse several aspects of machine learning and data mining oriented software development, as well as giving them the opportunity to think through and create complete software architectures and data pipelines which include machine learning. This part of the curriculum is of the utmost practical interest for those who will likely serve as research engineers later.
80
+
81
+ Towards responsible ML. We believe that it is important to teach learners that they have to think in a responsible manner about the societal impacts of what they will do as future scientists. To do so, we teach them how to be transparent and honest by designing reproducible methods (building proper pipelines). This honesty is achieved in part by learning ML models properly: building an effective ML model relies heavily on selecting its parameters, and teaching the correct way to validate them is a key part of the ML pipeline (e.g., cross-validation). Moreover, the learners need to be aware that biases in ML can have multiple sources: the data obviously, but also the model at hand. These biases reflect our human perception of the world and our societies, therefore they are likely not totally avoidable, but being able to study the societal impacts of ML methods is key. Raising these issues in the curriculum should not be an option, and they should be taught as a common thread within the various courses, through explainability, fairness, privacy, environmental impact, etc.
82
+
83
+ ## 4. Moving Forward with the Curriculum
84
+
85
+ From toy dataset to real-life/industrial dataset. To learn the principles of ML methods, toy datasets (e.g., 2D datasets, MNIST ${}^{2}$ or UCI ${}^{3}$ datasets) are often used. On the one hand, with this kind of data it is "easy" to visualize and understand the basic behavior of the methods. On the other hand, this kind of data is "too simple/too clean" to make learners aware of the need for a thorough study of the data, to understand its characteristics and to build the intuition of which method(s) to use (either for pre-processing the data or for the learning phase itself). Indeed, when the learner is confronted with real/industrial data, he/she will face different types of problems, among which noise and/or outliers, the amount of data available, unbalanced data, distribution drift, etc. While these notions are explained during the curriculum and can be applied by the learners, in particular during the challenge proposed at the end of the curriculum, we think that this only allows them to glimpse the tip of the iceberg. Obviously, in an academic training context it is not possible to train learners on all the issues they may face during their career. However, it would be appropriate to offer projects with increasingly complex data during the curriculum, and to adapt the follow-up so that learners can have more frequent discussions with the teachers, in order to highlight the problems they face and discuss the solutions implemented.
86
+
87
+ ---
88
+
89
+ ${}^{2}$ http://yann.lecun.com/exdb/mnist/
90
+
91
+ ${}^{3}$ http://archive.ics.uci.edu/ml
92
+
93
+ ---
94
+
95
+ Faster learner/teacher feedback loops. Currently, learners mostly receive feedback at the end of semesters, with final exams and project defenses. Similarly, the teachers only get feedback at the end of each semester, through a (long) form that each learner has to fill out; the heterogeneity of the audience clearly makes such a form not informative enough. We can imagine tightening this feedback loop by having periodic discussions between the learners and teachers, in a similar way to what is done in software development with agile methods and stand-up meetings every Monday morning. This method would have the advantage of (i) allowing teachers to quickly adapt the course to the heterogeneity of the learners and (ii) better catching up on a course where the learners have difficulties.
96
+
97
+ Enforcing prerequisites, embracing diversity. An approach to align the curriculum and the learner population is to enforce prerequisites regarding academic background. Providing a list of prerequisites so that learners can self-assess their background is often too imprecise to be sufficient. One solution, which is very costly in human resources, is to require applicants to solve and hand in a project and also to conduct interviews. A promising intermediate solution that we would like to explore consists in suggesting online resources to acquire the necessary prerequisites, and providing an automated platform on which applicants should validate exercises.
98
+
99
+ Convenient as it may be, enforcing strong uniformity is not in the spirit of our international program. We strongly believe that systemic diversity is desirable: it develops learners' openness to different cultures, encourages peer-based learning and promotes group cohesion.
100
+
101
+ Generalizing helping visualizations. "A picture is worth a thousand words", and an interactive animation is worth a thousand pictures. As an illustration of this, the addition of many pictures to the "Version Control with Git" lesson from The Carpentries ${}^{4}$ pushed this course to a new level and, in our experience, made it a lot more accessible for newcomers. Beyond providing sound mathematical formulations, we want to actively increase ${}^{5}$ the amount of visualizations we provide to our learners. With the advance of technologies like dynamic websites and notebooks, one can even create interactive illustrations, i.e., playgrounds, in which the learners can play with a model to better understand how it reacts to various inputs and/or parameters. In addition to pointing our learners towards broadly known interactive animations, e.g., for neural nets (MLP ${}^{6}$ , ConvNets ${}^{789}$ ) or the very well polished Distill ${}^{10}$ platform, we are committed to continuously creating new visualizations and interactive codes that favor deeper understanding.
102
+
103
+ Carpentries-style homogeneous lesson format. While our current curriculum is roughly split into lessons (red boxes in Fig. 1), uniformizing the format/template and explicitly structuring the lessons using the notion of episodes (an existing example is given in light grey in Fig. 1) could be interesting. Indeed, having the equivalent of the Carpentries' workshop schedule page and reference pages is very useful, and could advantageously replace the course description sheets. This would also allow recombining the lessons to more easily create new curricula for different audiences. An important caveat is to carry out this uniformization while allowing for originality and easy experimentation with new content or practices.
104
+
105
+ A condensed ML-carpentry for all researchers. While our curriculum aims at giving learners strong ML foundations over the course of 2 years at the master level, we also aim at reusing its lessons and episodes. We would like to author and propose a doctoral-level course on ML for Ph.D. students and researchers from different domains. The format could be very close to the workshops from The Carpentries, but spread over a few weeks to leverage distributed practice. More modest learning goals would be targeted: understanding ML fundamentals, practicing one framework and deeply understanding at least one method. Several follow-up "specializations" could be proposed, e.g., some on specific approaches (e.g., deep learning) and one focusing on the data pipeline.
106
+
107
+ ## 5. Conclusion
108
+
109
+ We presented an existing ML curriculum, highlighting some features and possible improvements. We hope this presentation can help fuel discussion around Teaching ML.
110
+
111
+ ---
112
+
113
+ ${}^{4}$ https://swcarpentry.github.io/git-novice/
114
+
115
+ ${}^{5}$ To preserve anonymity, none of our visualizations are used as examples.
116
+
117
+ ${}^{6}$ https://playground.tensorflow.org/
118
+
119
+ ${}^{7}$ https://convnetplayground.fastforwardlabs.com
120
+
121
+ ${}^{8}$ https://poloclub.github.io/cnn-explainer/
122
+
123
+ ${}^{9}$ https://www.cs.ryerson.ca/~aharley/vis/conv/
124
+
125
+ ${}^{10}$ https://distill.pub
126
+
127
+ ---
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/WXdE6lC7-n/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,115 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § FEEDBACK ON A MACHINE LEARNING CURRICULUM FOR AN INTERNATIONAL AUDIENCE
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Teaching Machine Learning (ML) can be challenging due to the breadth of the subject and the diversity of audience that might be interested in it. In this article, we collectively place ourselves both at the curriculum level and at the lesson level, describing existing practices and sketching directions for improvement. We first describe a curriculum for ML practitioners that involves different constraints including the variety of audience, both in terms of background and in terms of learning goals. We also explain how some of the lessons relate to the teaching principles that are pushed forward for example by The Carpentries and how the future of the curriculum could be reshaped.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ With the maturation of the field, learning fundamentals of Machine Learning (ML) can become a key differentiator for learners and researchers of all domains. Indeed, many application domains and research domains are being structurally transformed by the availability of data and the use of ML. In this context, effectively teaching ML becomes of utmost importance so that ML practitioners understand the tools they manipulate and are able to creatively and robustly use them. Teaching ML, however, presents many challenges due to this pervasive applicability.
12
+
13
+ Probably one of the biggest challenges is the heterogeneity of the possible audience. This compounds with the fact that ML is at the crossroads of computer science, maths and statistics. As in the traditional lessons from The Carpentries, a part of the target audience might have no or very limited programming background. The same goes for mathematical background, where math anxiety is a common issue across disciplines. As far as statistics are concerned, almost all scientists would benefit from a better statistical background. In Section 2, presenting an existing curriculum, we describe the target audience(s) we consider and how we have mitigated the differences in background.
14
+
15
+ Another challenge is that of deciding what topics should be taught. While a mostly practical approach can be tempting, having only a shallow understanding of the ML methods often leads to unsatisfying results, wasted time and an incapacity to innovate. Indeed, in addition to practice, the learners must be able to analyze and understand why, when, and how the methods they use work. In Section 2, we additionally present our target goals and guidelines in the curriculum. We concisely focus, in Section 3, on some illustrative practices in our curriculum, and group in Section 4 the results of our reflection on the evolutions of the curriculum.
16
+
17
+ § 2. A MODULAR MACHINE LEARNING CURRICULUM
18
+
19
+ In this section, we sketch an existing ML curriculum that has a 3-semester duration, in a university context. We focus on the core curriculum but also present how part of it is addressed and fine-tuned to different audiences with different goals. While it surely can be improved a lot, this curriculum can be used as a basis for reasoning about the topics of interest in an ML curriculum.
20
+
21
+ § 2.1. AUDIENCE
22
+
23
+ The core curriculum has ML at its core (together with symbolic Artificial Intelligence and Data Mining). Examining hundreds of applications every year, we see that it is more and more common for applicants to have already followed some introductory MOOC on Machine Learning. Despite this, while learners are supposed to be at ease with programming, their actual programming level varies greatly. Similarly, an introductory-level statistical background is presupposed but, in practice, its mastery is variable.
24
+
25
+ The audience is also international, with people coming from tens of different countries; all lessons are in English, which is not the mother tongue of most of the audience, adding an additional difficulty to the understanding. This difficulty is outweighed by its benefits. The diversity of linguistic, cultural and work experience ${}^{1}$ backgrounds makes the learning experience richer and more open in many respects.
26
+
27
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
28
+
29
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
30
+
31
+ ${}^{1}$ Applicants range from students to people having worked for a few years in various domains to people coming back to studies after decades of work in the industry.
32
+
33
+ Secondary audience(s). In addition to the main audience, other audiences reuse the presented lessons. Some lessons are directly shared (mixing learners and thus increasing the benefits of heterogeneity), while others are tuned to the audience to cope with background discrepancies. These audiences include learners that aspire to become software engineers with a data science background, but also learners from other disciplines (e.g., Optics, Physics) that want to learn ML.
34
+
35
+ § 2.2. GENERAL GOALS AND GUIDING PRINCIPLES
36
+
37
+ We aim to prepare the learners for ML research and innovation while acquiring the necessary skills to solve data-driven problems. So, the ultimate goal is to provide a strong background on the entire ML pipeline, from the raw data to the final ML software and its impact on the application domain.
40
+
41
+ Preparing for research and innovation. To be able to make advances in research or to push the bleeding edge of innovation, a superficial knowledge is insufficient. We aim at making our learners able to imagine/visualize the data, to understand how each type of model processes these data and to understand the dynamics of the training process of these models. Such deep understanding makes it possible to diagnose unexpected problems and limitations of the models in particular situations. Most importantly, we believe it is mandatory to be able to improve existing methods, propose novel models, or formalize a new problem to be solved.
42
+
43
+ Learning to carry out a data project. All ML methods are driven by the data and the task at hand. One of the objectives is to make the learners aware that raw data is by definition not clean, and that analyzing, understanding and pre-processing the data are mandatory tasks. Given the data (and their characteristics), the learners have to be able to mathematically formalize the problem in order to propose or design ML solutions. A key point is the ability to perform experiments to validate the methods (and the results) and to design a relevant and rigorous experimental protocol.
44
+
45
+ Acquiring transversal skills. At any stage, three crosscutting aspects are very important for our learners:
46
+
47
+ * working in teams, especially when software implementation plays a central role.
48
+
49
+ * presenting their work orally or in writing.
50
+
51
+ * understanding the broader impact of ML design choices, like fairness, privacy or environmental impact.
52
+
53
+ § 2.3. ML CURRICULUM: A QUICK TOUR
54
+
55
+ The current embodiment of the curriculum is sketched in Fig. 1. With the increase in the number (and thus the diversity) of applications and the maturation of the domain, the curriculum must continuously evolve. This evolution is driven by systematic feedback gathering from learners and teachers, but also from the research labs and companies that hire our learners. The curriculum aims at achieving the presented goals, mainly by covering a broad variety of machine learning topics in great detail and by including a lot of projects. Some lessons are more targeted at strengthening the computer science background, while others focus on the transversal skills.
58
+
61
+ Figure 1. Summarizing view of an existing 3-semester university curriculum for Machine Learning, the fourth semester being dedicated to an internship (in a research facility or in a company).
62
+
63
+ § 3. SELECTED LESSONS AND PRACTICES
64
+
65
+ In this section, we focus on some elements of the current curriculum, starting from the most general ones to the ones that are most specific to Machine Learning (ML).
66
+
67
+ Distributed learning. We intentionally cover some topics repeatedly across the curriculum, be they fundamental skills like probability, optimization and algebra, or more specific ML models (such as SVMs). Such an approach helps consolidate the understanding of these concepts. At a finer scale, many lessons spread a subject over a semester, first presenting a theoretical concept that is soon used in practical sessions and later in a project.
68
+
69
+ Progressive unscaffolding. Scaffolding, which lowers the cognitive load by providing learners with a structure for a solution to a problem, has been shown to be effective. We follow a progressive unscaffolding principle in the learners' challenges: at first, most of the code is provided, either in the form of Jupyter notebooks (with varying amounts of code to complete) or of code to copy and adapt; then, we might ask learners to implement or study a given algorithm; and finally, we end up with very open projects where learners need to make most of the decisions, starting from scratch.
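At the most scaffolded end, a notebook cell can provide everything but one line, together with an automatic check that gives learners instant feedback. A hypothetical sketch of such a fill-in exercise (the TODO line is shown already completed with a reference solution):

```python
import numpy as np

def euclidean(a, b):
    """Exercise: complete the body so it returns the Euclidean distance."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    # TODO(learner): replace the line below with your own implementation
    return float(np.sqrt(np.sum((a - b) ** 2)))  # reference solution

# Self-check cell shipped with the exercise:
assert np.isclose(euclidean([0, 0], [3, 4]), 5.0)
print("exercise passed")
```

Later challenges simply ship fewer of these pre-filled lines, until learners write the whole notebook themselves.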
70
+
71
+ Heterogeneity and team work. One specificity of our program is the heterogeneity of the audience, which is an interesting playground for developing specific skills. Indeed, in their future professional life, learners will not always work alone, or with competent and friendly colleagues whom they know well, on pleasant tasks started from scratch. Part of the instruction must therefore prepare learners for the real-life situations they may encounter in their professional lives. In many lessons, learners are thus asked to work in larger or smaller groups, on more or less complex projects, of longer or shorter duration, in more or less complex environments. The skills worked on here include: the ability to carry out several projects at once, to plan each of them, to build an efficient project team with colleagues whom they do not necessarily know and whose level they do not necessarily know at the beginning, etc. Other skills that are more related to interpersonal skills are also developed: the ability to adapt to the working methods of colleagues, to work under pressure, to communicate with others to avoid clashes, etc.
72
+
73
+ Problem-based learning. During the curriculum, several practical projects are proposed. An important one takes place in the last semester, in which the learners participate in a national data challenge proposed by industrial partners and open to different universities. The topic and the type of data change each year: anomaly detection from sensor data provided by Airbus, temperature prediction and uncertainty estimation, or identifying the presence of a wind turbine from satellite images, to name a few. These projects last one semester, and allow learners to discover methods or applied problems that are not necessarily covered during other courses. They participate in teams (of 2 to 4) and are also encouraged to discuss with other teams. The output obtained by learners on a test set can be submitted online at any time during the semester, and the results are displayed and updated on a public leaderboard at every new submission. At the end of the challenge, 3 teams from the top 10 are invited for a presentation during a day dedicated to the results summary and open to all participants. We also ask the learners to write a small scientific article explaining how they obtained their results (state of the art, methods tested, chosen solution and experimental details).
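The submission mechanics can be illustrated with a toy scorer that, like the public leaderboard, keeps each team's best score (a hypothetical sketch; the actual challenge platform and per-task metrics differ):

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions on the held-out test set."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def submit(leaderboard, team, y_true, y_pred):
    """Score one submission, keep the team's best result so far,
    and return the leaderboard ranked best-first."""
    score = accuracy(y_true, y_pred)
    leaderboard[team] = max(score, leaderboard.get(team, 0.0))
    return sorted(leaderboard.items(), key=lambda kv: -kv[1])

board = {}
y_true = [0, 1, 1, 0]
submit(board, "team_a", y_true, [0, 1, 0, 0])            # 3/4 correct
ranking = submit(board, "team_b", y_true, [0, 1, 1, 0])  # 4/4 correct
```

Because only the best score per team is retained, repeated submissions encourage iteration without penalising early, weaker attempts.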
74
+
75
+ To model.train()... and beyond! Most ML courses aim at making learners able to train a prediction model in a Jupyter notebook (and later submit jobs to a GPU cluster). By doing this, learners feel the need for early data analysis and data wrangling, and then have a first-hand experience with various ML techniques. There is however a gap between training a model in a notebook interface and making it available in a corporate software infrastructure. We therefore include in some of our curriculum wider learner projects in which the learners learn how to glue together the different aspects needed to exploit a model in a distributed software architecture. This usually includes extracting datasets from a Hadoop cluster, wrangling data on the fly and storing the processed data in a NoSQL database, and then using their prediction model published as a microservice which will store its output in the NoSQL database as well. We believe that such projects let the learners rehearse several aspects of machine learning and data mining oriented software development, as well as giving them the opportunity to think about and create complete software architectures and data pipelines which include machine learning. This part of the curriculum is of the utmost practical interest for those who will likely serve as research engineers later.
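As a classroom-sized illustration of the "publish the model as a microservice" step, the sketch below wraps a stand-in predictor behind an HTTP endpoint using only the Python standard library (a hypothetical toy; the real projects surround this with the Hadoop/NoSQL stack described above):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def predict(features):
    # Stand-in for a trained model; a real service would load the
    # estimator and also write its output to the NoSQL store.
    return {"prediction": sum(features)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), PredictHandler)  # ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as a client of our own service:
url = "http://127.0.0.1:%d" % server.server_address[1]
req = urllib.request.Request(url, data=json.dumps({"features": [1, 2, 3]}).encode(),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
```

Even this toy version surfaces the questions the projects are built around: request formats, error handling, and where the predictions should be persisted.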
76
+
77
+ Towards responsible ML. We believe that it is important to teach learners that they have to think in a responsible manner about the societal impacts of what they will do as future scientists. To do so, we teach them how to be transparent and honest by designing reproducible methods (building proper pipelines). This honesty is achieved in part by learning ML models properly: building an effective ML model relies heavily on selecting its parameters, and teaching the correct way to validate them is a key part of the ML pipeline (e.g., cross-validation). Moreover, the learners need to be aware that biases in ML can have multiple sources: the data obviously, but also the model at hand. Those biases reflect our human perception of the world and our societies, therefore they are likely not totally avoidable, but being able to study the societal impacts of ML methods is key. Raising these issues in the curriculum should not be optional, and they should be taught as a common thread within the various courses, through explainability, fairness, privacy, environmental impact, etc.
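That "correct way to validate" fits in a few lines: below, a ridge regularization strength is chosen by k-fold cross-validation on the training data only, so the test set is never touched during model selection (a minimal numpy sketch of our own, not the course's actual material):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_mse(X, y, lam, k=5):
    """Mean validation MSE of ridge over k folds -- the score used to
    pick lam without ever looking at the held-out test set."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8))
y = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(60)
candidates = [0.01, 0.1, 1.0, 10.0]
best_lam = min(candidates, key=lambda lam: cv_mse(X, y, lam))
```

The same selection loop, run instead on the test set, is precisely the leak this part of the curriculum warns against.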
78
+
79
+ § 4. MOVING FORWARD WITH THE CURRICULUM
80
+
81
+ From toy datasets to real-life/industrial datasets. To learn the principles of ML methods, toy datasets (e.g., 2D datasets, MNIST ${}^{2}$ or UCI ${}^{3}$ datasets) are often used. On the one hand, with this kind of data it is "easy" to visualize and understand the basic behavior of the methods. On the other hand, this kind of data is "too simple/too clean" to make learners aware of the need for a thorough study of the data, to understand its characteristics and to build the intuition of which method(s) to use (either for pre-processing the data or for the learning phase itself). Indeed, when the learner is confronted with real/industrial data, he/she will be faced with different types of problems. Among these, we can mention noise and/or outliers, the amount of data available, unbalanced data, distribution drift, etc. While these notions are explained during the curriculum and can be applied by the learners, in particular during the challenge proposed at the end of the curriculum, we think that this only offers a glimpse of the tip of the iceberg. Obviously, in an academic training context it is not possible to train learners on all the issues they may face during their career. However, it would be appropriate to offer projects with increasingly complex data during the curriculum, and to adapt the follow-up so that learners can discuss with the teachers more frequently in order to highlight the problems they face and discuss the solutions implemented.
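One cheap way to ramp up complexity is to degrade a clean toy dataset on purpose; a hypothetical numpy sketch that injects class imbalance, label noise and gross outliers into balanced 2D data:

```python
import numpy as np

def roughen(X, y, minority=1, keep=0.1, label_noise=0.05, n_outliers=5, seed=0):
    """Turn a clean, balanced binary toy dataset into something closer
    to real data: subsample one class, flip a few labels, add outliers."""
    rng = np.random.default_rng(seed)
    keep_mask = (y != minority) | (rng.random(len(y)) < keep)  # class imbalance
    X, y = X[keep_mask], y[keep_mask]
    flip = rng.random(len(y)) < label_noise                    # label noise
    y = np.where(flip, 1 - y, y)
    outliers = 10.0 * rng.standard_normal((n_outliers, X.shape[1]))  # outliers
    X = np.vstack([X, outliers])
    y = np.concatenate([y, rng.integers(0, 2, n_outliers)])
    return X, y

X_clean = np.random.default_rng(1).standard_normal((200, 2))
y_clean = np.repeat([0, 1], 100)
Xr, yr = roughen(X_clean, y_clean)
```

Running the same method on the clean and roughened versions makes the effect of each corruption directly observable, without needing an industrial dataset at hand.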
82
+
83
+ ${}^{2}$ http://yann.lecun.com/exdb/mnist/
84
+
85
+ ${}^{3}$ http://archive.ics.uci.edu/ml
86
+
87
+ Faster learner/teacher feedback loops. Currently, learners mostly receive feedback at the end of semesters, with final exams and project defenses. Similarly, the teachers only receive feedback at the end of each semester through a (long) form that each learner has to fill out; the heterogeneity of the audience clearly makes such a form insufficiently informative. We could tighten this feedback loop by having periodic discussions between the learners and teachers, in a similar way to what is done in software development with agile methods and stand-up meetings every Monday morning. This method would have the advantage of (i) allowing teachers to quickly adapt the course to the heterogeneity of the learners and (ii) better catching up on a course where the learners have difficulties.
88
+
89
+ Enforcing prerequisites, embracing diversity. An approach to align the curriculum and the learner population is to enforce prerequisites regarding academic background. Providing a list of prerequisites so that learners can self-assess their background is often too imprecise to be sufficient. One solution, which is very costly in human resources, is to require applicants to solve and hand in a project, and also to conduct interviews with them. A promising intermediate solution that we would like to explore consists in suggesting online resources to acquire the necessary prerequisites, and providing an automated platform on which applicants should validate exercises.
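The core of such an automated platform could be as simple as pairing each prerequisite exercise with hidden test cases; a hypothetical sketch (exercise name, prompt and tests are all made up for illustration):

```python
# Each prerequisite exercise pairs a prompt with hidden test cases;
# a submitted function must pass all of them to validate the exercise.
EXERCISES = {
    "mean": {
        "prompt": "Write mean(xs) returning the average of a non-empty list.",
        "tests": [(([1, 2, 3],), 2.0), (([4],), 4.0), (([1, 3],), 2.0)],
    },
}

def validate(name, func, tol=1e-9):
    """Return True iff func passes every hidden test of the exercise."""
    return all(abs(func(*args) - expected) <= tol
               for args, expected in EXERCISES[name]["tests"])

passed = validate("mean", lambda xs: sum(xs) / len(xs))  # an applicant's submission
```

Existing auto-grading tools (e.g., the nbgrader ecosystem) follow the same pattern at scale, but even this core already turns self-assessment into something verifiable.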
90
+
91
+ Convenient as it may be, enforcing strong uniformity is not in the spirit of our international program. We strongly believe that systemic diversity is desirable: it develops learners' openness to different cultures, encourages peer-based learning and promotes group cohesion.
92
+
93
+ Generalizing helping visualizations. "A picture is worth a thousand words" and an interactive animation is worth thousands of pictures. As an illustration of this, the addition of many pictures to the "Version Control with Git" lesson from The Carpentries${}^{4}$ pushed this course to a new level and, in our experience, made it a lot more accessible for newcomers. Beyond providing sound mathematical formulations, we want to actively increase${}^{5}$ the amount of visualizations we provide to our learners. With the advance of technologies like dynamic websites and notebooks, one can even create interactive illustrations, i.e., playgrounds, in which the learners can play with a model to better understand how it reacts to various inputs and/or parameters. In addition to pointing our learners towards broadly known interactive animations, e.g., for neural nets (MLP${}^{6}$, ConvNets${}^{7,8,9}$) or the very well polished Distill${}^{10}$ platform, we are committed to continuously creating new visualizations and interactive code that favor deeper understanding.
94
+
95
+ Carpentries-style homogeneous lesson format. While our current curriculum is roughly split into lessons (red boxes in Fig. 1), uniformizing the format/template and explicitly structuring the lessons using the notion of episodes (an existing example is given in light grey in Fig. 1) could be beneficial. Indeed, having the equivalent of the Carpentries' workshop schedule pages and reference pages is very useful, and could advantageously replace the course description sheets. This would also allow recombining the lessons to more easily create new curricula for different audiences. An important caveat is to carry out this uniformization while still allowing for originality and easy experimentation with new content or practices.
96
+
97
+ A condensed ML-carpentry for all researchers. While our curriculum aims at giving learners strong ML foundations over the course of 2 years at the master level, we also aim at reusing its lessons and episodes. We would like to author and propose a doctoral-level course on ML for Ph.D. students and researchers from different domains. The format could be very close to the workshops from The Carpentries, but spread over a few weeks to leverage distributed practice. More modest learning goals would be targeted: understanding ML fundamentals, practicing one framework and deeply understanding at least one method. Several follow-up "specializations" could be proposed, e.g., some on specific approaches (e.g., deep learning) and one focusing on the data pipeline.
98
+
99
+ § 5. CONCLUSION
100
+
101
+ We presented an existing ML curriculum, highlighting some features and possible improvements. We hope this presentation can help fuel discussion around Teaching ML.
102
+
103
+ ${}^{4}$ https://swcarpentry.github.io/git-novice/
104
+
105
+ ${}^{5}$ To preserve anonymity, none of our visualizations are used as examples.
106
+
107
+ ${}^{6}$ https://playground.tensorflow.org/
108
+
109
+ ${}^{7}$ https://convnetplayground.fastforwardlabs.com
110
+
111
+ ${}^{8}$ https://poloclub.github.io/cnn-explainer/
112
+
113
+ ${}^{9}$ https://www.cs.ryerson.ca/~aharley/vis/conv/
114
+
115
+ ${}^{10}$ https://distill.pub
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/Z1kgcLVfha/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,143 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Deep Learning Projects from a Regional Council: An Experience Report
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Due to the impact of Deep Learning both in industry and academia, there is a growing demand for graduates with skills in this field, and universities are starting to offer courses that include Deep Learning subjects. Hands-on assignments that teach students how to tackle Deep Learning tasks are an instrumental part of those courses. However, most Deep Learning assignments have two main drawbacks. First, they use either toy datasets, which are useful to teach concepts but whose solutions do not generalise to real problems, or datasets that require specialised knowledge to fully understand the problem. Second, most Deep Learning assignments are focused on training a model, and do not take into account other stages of the Deep Learning pipeline, such as data cleaning or model deployment. In this work, we present an experience in an Artificial Intelligence course where we have tackled the aforementioned drawbacks by using datasets from the regional council where our University is located. Namely, the students of the course have developed several computer vision and natural language processing projects; for instance, a news classifier and an application to colourise historical images. We share the workflow followed to organise this experience, several lessons that we have learned, and challenges that may be faced by other instructors that try to conduct a similar initiative.
8
+
9
+ ## 1. Introduction
10
+
11
+ Deep Learning (DL) techniques have become the state-of-the-art approach to tackle problems in several domains such as computer vision (He et al., 2015), natural language processing (Devlin et al., 2019), recommender systems (Zhang et al., 2019), bioinformatics (Li et al., 2019) or games (Silver et al., 2018). Due to this success, there is a growing interest in and demand for experts in this subfield of Machine Learning, and several universities are incorporating DL subjects into their courses.
12
+
13
+ In the intersecting field of Data Science, studies on the design of Data Science courses have emphasised the importance of practical and application-based teaching (Song & Zhu, 2016; Ramamurthy, 2016), and this can be extrapolated to DL courses. In most DL courses today, this is achieved through assignments or projects where students have to train a DL model using a dataset acquired from public dataset portals (such as Kaggle, Amazon Datasets, or Google Dataset Search). However, the construction of a model is just one of the steps in the pipeline to tackle a project using DL - such a pipeline is summarised in Figure 1 - and the rest of the steps are not taken into account in most assignments. For instance, data acquisition, cleaning and labelling are not usually conducted by students, since datasets from public portals are usually preprocessed and ready to be employed; and models are not deployed, since they are only evaluated by instructors and are not actually used outside the classroom. In addition, most publicly available datasets are either toy examples that have a limited interest for students, or require some knowledge of the field where the data was acquired. These problems can be approached by using open urban and regional data.
14
+
15
+ Many cities and councils around the world are investing a considerable amount of resources to publicly release their data (Silva et al., 2018). This opens the door to the application of Machine Learning to answer several questions of interest for citizens, administrators, businesses, and researchers. Building models using urban or regional data is close to a real project, since such data usually must be cleaned and, to be useful to society, the models must be deployed. Moreover, students are familiar with the context of the data. In this paper, we present an initiative in an Artificial Intelligence course where students have developed, from end to end and using DL techniques, several computer vision and natural language processing projects suggested by a regional council. In addition, we introduce the tools and workflow employed to organise such an initiative. Finally, we conclude by presenting the challenges faced and the lessons learned during this experience.
16
+
17
+ ---
18
+
19
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
20
+
21
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
22
+
23
+ ---
24
+
31
+ Problem definition → Data acquisition → Data cleaning → Data labeling → Model training → Model evaluation → Model deployment
32
+
33
+ Figure 1. Pipeline of a Supervised DL project. After defining the problem to be solved, data must be acquired and cleaned. Subsequently, the data must be annotated to train a supervised prediction model. Once the model is constructed it must be evaluated using an independent test set that was not employed for training. After the model is evaluated, it must be deployed to facilitate its usage.
34
+
36
+
37
+ ## 2. Experience description
38
+
39
+ This initiative was conducted in an undergraduate course on Artificial Intelligence. The goal of this experience was the development of DL-based solutions to projects suggested by the regional council where our University is located. The course involved 25 students, with either a Computer Science or Mathematics background, that worked in teams of 2 to 4 on a total of 9 projects, summarised in Table 1.
40
+
41
+ In the first step of this experience, instructors together with members of the regional council identified several tasks in a series of meetings. Those meetings were instrumental to find a set of sensible problems that could be handled in the context of the course; namely, we focused on projects with enough data that was, at least, partially annotated. Subsequently, a summary of the available projects was provided to the students, who formed teams and selected a project. After that, instructors created an assignment through GitHub Classroom ${}^{1}$ that gave each team access to a private GitHub repository. Each repository only contained a README file that explained how to access the data for each project, and some ideas and links explaining how to tackle the project.
42
+
43
+ The first task that the teams faced was the process of acquiring, cleaning and annotating the data. All the datasets were available through public APIs; however, the associated annotation was not usually in the same place. Hence, students had to match several sources of information. Moreover, most datasets were partially annotated; so, students had to clean them.
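A hypothetical sketch of that matching step: image records from one API are joined with annotations from a separate catalogue on a shared id, and partially annotated entries are dropped (all ids, URLs and field names below are made up for illustration):

```python
images = [
    {"id": "img-01", "url": "https://data.example/img-01.jpg"},
    {"id": "img-02", "url": "https://data.example/img-02.jpg"},
    {"id": "img-03", "url": "https://data.example/img-03.jpg"},
]
catalogue = {"img-01": {"year": 1932}, "img-02": {"year": None}}  # img-03 missing

def build_dataset(images, catalogue):
    """Keep only records that can be matched to a complete annotation."""
    merged = []
    for record in images:
        annotation = catalogue.get(record["id"])
        if annotation and annotation.get("year") is not None:
            merged.append({**record, **annotation})
    return merged

dataset = build_dataset(images, catalogue)  # only img-01 is fully annotated
```

In the students' projects the same join-and-filter logic applied, only with real public APIs on one side and partially filled catalogues on the other.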
44
+
45
+ Once these preprocessing steps were conducted, students trained and evaluated their models using Python as the programming language, and DL libraries like Keras (Chollet et al., 2015) or FastAI (Howard & Thomas, 2019). As a development environment, they used Jupyter notebooks (Kluyver et al., 2016), an open-source and browser-based tool that allows the combination of text and code. Jupyter notebooks can be run locally; however, in order to train DL models, it is necessary to use special-purpose hardware like GPUs or TPUs, and most students do not have access to those resources. Hence, Jupyter notebooks were run in Google Colaboratory (Colab) (Nelson & Hoover, 2020), a pre-configured environment with the essential Machine Learning and DL libraries that provides access to free GPUs and TPUs through a Google account. It is worth mentioning that Colab can be linked with a GitHub repository; hence, students were able to easily save changes in their repositories. All the aforementioned tools were previously introduced to students during the course. The only feature that was unknown to them was the team access to a shared GitHub repository, since the rest of the assignments in the course were individual.
+
+ <table><tr><td>Project name</td><td>#members</td><td>Tasks</td><td>Libraries</td><td>Deployment</td></tr><tr><td>People image retrieval</td><td>4</td><td>Face recognition</td><td>Keras</td><td>Desktop app</td></tr><tr><td>Colourising historical images</td><td>3</td><td>Image colourisation</td><td>FastAI</td><td>Colab</td></tr><tr><td>Colourising old aerial images</td><td>3</td><td>Image colourisation</td><td>FastAI</td><td>Colab</td></tr><tr><td>House detection in aerial images</td><td>3</td><td>Image segmentation</td><td>FastAI</td><td>Colab</td></tr><tr><td>News classifier</td><td>3</td><td>Text classification</td><td>FastAI</td><td>Colab</td></tr><tr><td>Classification of historical images</td><td>2</td><td>Image classification</td><td>FastAI</td><td>Colab</td></tr><tr><td>Dating historical images</td><td>2</td><td>Image classification</td><td>FastAI</td><td>Binder</td></tr><tr><td>Dating museum pieces from images</td><td>2</td><td>Image classification</td><td>FastAI</td><td>Web app</td></tr><tr><td>Classification of museum pieces</td><td>3</td><td>Image classification</td><td>FastAI</td><td>Colab</td></tr></table>
+
+ Table 1. List of projects conducted during the experience.
+
+ ---
+
+ ${}^{1}$ http://classroom.github.com
+
+ ---
52
+
53
+ After training their models, students evaluated them using an independent testing set that was not used for training the models. All models, except for the model in charge of colourising historical images, achieved over a ${90}\%$ accuracy in their corresponding task - note that different metrics were employed for the different tasks. Hence, even if there is room for improvement, they can be considered a success. Finally, students had to deploy their models in a way that made them easy to invoke. Most teams, 6 out of 9, decided to use forms in Colab; 1 team created a Desktop application, 1 created a web application; and the last team employed Binder ${}^{2}$ .
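As a minimal illustration of this evaluation step, accuracy over a held-out test set can be computed as follows. The labels and predictions below are invented for illustration, not the students' actual data, and accuracy is only one of the metrics the teams used:

```python
def accuracy(y_true, y_pred):
    """Fraction of test examples whose prediction matches the label."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Invented held-out labels and model predictions:
y_test = ["church", "bridge", "church", "house", "house"]
y_pred = ["church", "bridge", "house", "house", "house"]

print(accuracy(y_test, y_pred))  # 0.8
```

The key point is that `y_test` comes from data the model never saw during training; otherwise the score overestimates real performance.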
54
+
55
+ In order to evaluate the assignments, we asked the students to document the whole process using Jupyter notebooks that should be stored in the teams' GitHub repository. Moreover, students had to present their work in a public presentation, and produce a 2-minute video where they explained, in a non-technical manner, their work.
56
+
57
+ ---
58
+
59
+ ${}^{2}$ https://mybinder.org/
60
+
61
+ ---
62
+
63
+ ## 3. Lessons learned
64
+
65
+ In this section, we present the lessons that we have learned about how to organise similar experiences, and report the challenges that we faced and that should be taken into consideration when designing this kind of assignment. These recommendations are not only based on the instructors' opinion, but are also founded on the students' satisfaction with the experience. To capture the students' satisfaction, we conducted an anonymous and non-compulsory survey developed with Google Forms. The survey consisted of 4 sections (valuation of GitHub, valuation of Jupyter notebooks and Colab, valuation of the projects, and comments). The valuation sections consisted of a list of questions following a 4-point Likert scale ranging from 1 (strongly disagree) to 4 (strongly agree), and the comments section allowed the students to introduce additional comments about the experience. The survey was answered by 15 out of the 25 students. We first describe the lessons learned from an organisational point of view.
66
+
67
+ Regional data. As suggested by previous work (Zeng et al., 2018), when considering assignments for undergraduate students, it is of great benefit to involve them in projects that are closely related to their lives; hence, using regional data is a perfect match for this task. This fact can be seen in the video presentations where, for instance, the members of the team in charge of colourising historical aerial images showed the colourised image of the villages where their families live; or in the people image retrieval project, where the members of the team searched for themselves in the news of the regional council. In the anonymous survey, all the students claimed that they enjoyed working with real data from the regional council. In fact, one of the students wrote the following comment: "These assignments are great, they help you to apply what we have seen during the course in a real problem".
68
+
69
+ Meetings with members of the regional council. In the long term, organising meetings with the members of the regional council has proven to be one of the best decisions taken for this experience. These meetings helped us to frame a set of problems that fit with the subjects explained in the course, solving the problem of finding enough and interesting data in urban datasets (Pineau & Bacon, 2015). In these meetings, instructors had to explain what could be done to the members of the council, and also establish some limits to their expectations - all the projects were proofs of concept that had to be developed in a limited time, approximately 25 hours. In the following years, we plan to incorporate students into the loop of selecting the projects with the regional council.
70
+
71
+ The README file. Initially, each team had access to a GitHub repository with only a README file. In that file, instructors explained how to access the data, and provided some documentation explaining how to tackle the project. Among the documentation, we included links to open-source repositories containing Jupyter notebooks that tackle similar projects, and our first recommendation for the teams was to re-run those notebooks and use them as a basis for their projects. In this way, they had a starting point to know how to organise their datasets and train their models. Moreover, the README file also contained pointers to additional research-oriented tasks; for instance, exploring several state-of-the-art architectures for image classification, or the application of ensemble methods. These research topics allowed some of the teams to dive deeper into the DL techniques.
72
+
73
+ Videos. We asked students to prepare short and non-technical videos explaining their work. The aim was twofold. First, students had to make an effort to summarise their work in an appealing manner. And, secondly, these videos served to show the members of the regional council the kind of tasks that can be solved using DL techniques. This is important to continue this experience in the future with new projects.
74
+
75
+ Now, we focus on the lessons learned from a technical point of view. These lessons can be applied not only to projects based on open-data but also to any DL project.
76
+
77
+ Transfer learning. Training a DL model from scratch is a time-consuming task that requires lots of data and computational resources; and, in most cases, it is not feasible to access such resources. This problem can be faced by applying a widely used technique known as transfer learning (Razavian et al., 2014); a method that reuses a model trained on a source task for a new target task. This considerably reduces the amount of data and time that is needed to construct an accurate DL model. This technique can be employed in almost any DL project; hence, it is important to introduce it during the course so that students can employ it in their assignments.
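The principle can be sketched without any DL framework: below, a frozen "pretrained" feature extractor is reused unchanged, and only a tiny head (here, a nearest-centroid classifier) is fitted on the target task. All functions, names, and data are invented for illustration; in the actual assignments this role was played by pretrained networks loaded through Keras or FastAI:

```python
# Toy illustration of transfer learning, with no DL framework involved.

def pretrained_features(x):
    """Stand-in for a frozen, pretrained network: maps a raw input
    to a small feature vector and is never modified ("frozen")."""
    return (sum(x) / len(x), max(x) - min(x))

def train_head(examples):
    """Train only the 'head': a nearest-centroid classifier fitted
    on the frozen features of the target-task examples."""
    by_label = {}
    for x, label in examples:
        by_label.setdefault(label, []).append(pretrained_features(x))
    return {label: tuple(sum(c) / len(c) for c in zip(*feats))
            for label, feats in by_label.items()}

def predict(centroids, x):
    """Classify a new input by its nearest centroid in feature space."""
    f = pretrained_features(x)
    return min(centroids,
               key=lambda lab: sum((u - v) ** 2
                                   for u, v in zip(centroids[lab], f)))

# Tiny "target task": only two labelled examples, yet the reused
# features are already informative enough to separate the classes.
train = [([0, 1, 0], "dark"), ([9, 8, 9], "bright")]
head = train_head(train)
print(predict(head, [8, 9, 9]))  # bright
```

Because the extractor is fixed, only the small head needs data and compute, which mirrors why transfer learning makes the assignments feasible on modest resources.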
78
+
79
+ GitHub and GitHub classroom. In this experience, all the code and models generated from the assignments were stored in GitHub repositories. Namely, all the repositories belong to an "organization" managed by the instructors of the course. In this way, students only have access to their team's repositories, and instructors have access to all the students' repositories. This approach not only simplifies the collaboration of team members, but also improves the communication with instructors, since they can directly access the code of students when they need feedback. In addition, instructors can follow the progress of students since it is possible to monitor the commits and who made them. The usage of GitHub received positive valuations from the students: most thought that GitHub helped them to better manage the code of their assignments, and considered that it was easy to use and facilitated working in teams.
80
+
81
+ Colab. This environment has proven to be enough to train DL models for the assignments. In addition, all of the surveyed students were satisfied with the use of Colab, and considered that it was well integrated with GitHub. An issue that arose when using Colab is that it has a 12-hour limitation (that is, after 12 hours, the environment halts); hence, students have to save intermediate checkpoints to avoid losing their work, since in some cases 12 hours were not enough to complete the training process. It would be possible to train for longer using cloud environments like Amazon AWS or Google Cloud, but their configuration and management are more difficult, require adding billing information, and are out of the scope of the course.
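A minimal sketch of the checkpointing pattern, using a made-up training loop and Python's `pickle` module; the teams used their DL library's own save/load utilities instead, but the resume-from-last-checkpoint logic is the same:

```python
import os
import pickle

CHECKPOINT = "checkpoint.pkl"

def load_checkpoint():
    """Resume from the last saved state if one exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"epoch": 0, "weights": [0.0]}  # fresh start

def save_checkpoint(state):
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)

state = load_checkpoint()
for epoch in range(state["epoch"], 10):
    # Stand-in for one epoch of training:
    state["weights"] = [w + 0.1 for w in state["weights"]]
    state["epoch"] = epoch + 1
    # Saving after every epoch means a halted session loses at
    # most the epoch that was in progress.
    save_checkpoint(state)

print(state["epoch"])  # 10
```

If the environment halts mid-run, re-executing the same notebook picks up from the stored epoch instead of restarting from zero.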
84
+
85
+ Updates of libraries. DL is a field in constant evolution, and there is a continuous update of libraries implementing DL methods. This can lead to code that works for one version of a DL library, but fails when the library is updated. This can be solved locally by using virtual environments where the versions of the libraries are fixed by the user. However, Colab regularly updates the versions of the libraries included in the underlying virtual machine; and this is a problem for two reasons. First, students found out that their code stopped working from one day to the next; and, second, the code from the online tutorials that were used as a basis raised errors when executed. In order to tackle this challenge, we provided the students with a requirements file that can be used to install concrete versions of a set of libraries in Colab.
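For instance, a requirements file with pinned versions can be installed in the first cell of each notebook. The version numbers below are illustrative placeholders, not the ones we actually distributed; inside a Colab cell the install command is prefixed with `!`:

```shell
# requirements.txt (illustrative version pins):
#   fastai==2.2.5
#   torch==1.7.1
#   torchvision==0.8.2

# First cell of every notebook (prefix with '!' inside Colab):
pip install -r requirements.txt
```

Pinning exact versions means the notebooks keep working even after Colab's preinstalled libraries move on.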
86
+
87
+ FastAI as DL library. Nowadays, there are two main DL frameworks employed both in industry and academia: TensorFlow and PyTorch (He, 2019). However, for projects like the ones presented in this experience, it is better to use a library that provides straightforward access to different kinds of models, and also implements best practices. During the course, we presented two of those libraries: Keras (Chollet et al., 2015) and FastAI (Howard & Thomas, 2019). For the assignments, teams could choose the library to employ, and all but one used FastAI. This was mainly due to the user-friendly API of this library, which allows users to train state-of-the-art models with just a few lines of code.
88
+
89
+ Data management. The last lesson is related to how teams managed their data. In the Colab environment, data is only kept during the session; hence, teams had to upload their data every time. To deal with this issue, we asked students to create a Jupyter notebook devoted solely to downloading and cleaning the data; once the data was processed, they had to upload it to a file hosting system. In this way, both the members of the team and the instructors could easily access the clean version of the data.
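The core of such a cleaning notebook can be sketched as follows. The records, field names, and URLs are invented for illustration; the real notebooks fetched data from the council's public APIs:

```python
import csv
import io

# Invented raw export, as it might arrive from a public API;
# some records are missing their annotation.
raw = """id,image_url,year
1,http://example.org/a.jpg,1950
2,http://example.org/b.jpg,
3,,1962
4,http://example.org/d.jpg,1971
"""

def clean(rows):
    """Keep only fully annotated records, with the year parsed."""
    for row in rows:
        if row["image_url"] and row["year"]:
            yield {"id": int(row["id"]),
                   "image_url": row["image_url"],
                   "year": int(row["year"])}

records = list(clean(csv.DictReader(io.StringIO(raw))))
print(len(records))  # 2 fully annotated records survive
```

The cleaned output is then uploaded once to a file hosting system, so every team member (and the instructors) works from the same file instead of re-running the download each session.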
90
+
91
+ We focus now on the challenges that should be taken into account when this kind of experience is conducted.
96
+
97
+ Instructors' load. One of the main challenges of this experience was the load for instructors, both in terms of organisation of the assignments and supervision of the teams. Therefore, instructors must have the time to organise the assignments (that is, contact and meet with the members of the regional council, and prepare the materials to guide the students) and also provide individualised advice to each project team. In addition, in this kind of assignment, it is difficult to foresee the difficulties that students will face; and this means an additional load in terms of supervision.
98
+
99
+ Reproducing errors. A challenge related to the supervision is how to reproduce errors found by the teams. One of the greatest advantages of GitHub repositories and Colab is the chance of accessing teams' code; and, therefore, it is easy for instructors to inspect the error messages. However, just inspecting the error messages might not be informative enough, and it might be necessary to re-run the code. This might be challenging since, in some cases, it requires running a lot of code before reaching the point where students encountered the problem; and, in addition, Jupyter notebooks' hidden state makes it difficult to reproduce the exact conditions under which students had their problems (Grus, 2018). In order to minimise this challenge, our recommendation for students was to create a dedicated notebook for each training experiment; in this way, it was easier to provide them with feedback to tackle their problems.
100
+
101
+ Methods beyond the curriculum. 4 out of the 9 projects required students to apply DL methods that were not explained during the course; and the other 5 projects included additional tasks for diving deeper into the topics of image and text classification. To deal with this issue, we pointed the students to materials with examples and explanations of the techniques to employ. However, a problem that we found with this approach is that students focused on just training the models and did not understand the underlying methods. A solution to tackle this problem is based on the work of the team in charge of segmenting houses from aerial images. This team created a notebook explaining the U-net segmentation architecture (Ronneberger et al., 2015), and provided a toy example explaining how to use it. In the future, we will explore this approach.
102
+
103
+ ## 4. Conclusions
104
+
105
+ We have presented an initiative to apply DL methods to create solutions to regional projects. In this experience, students have worked with open data that is familiar to them, and they were involved in all the stages of the pipeline to develop a DL project. The goal of this paper was to present the lessons learned and the challenges faced, so as to help organise similar initiatives. Among those lessons, it is worth highlighting the benefits of involving members of the regional council to select a set of projects that motivate the students.
106
+
107
+ References
108
+
109
+ Chollet, F. et al. Keras. https://keras.io, 2015.
110
+
111
+ Devlin, J., Chang, M.-W., Lee, K., et al. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pp. 4171-4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/N19-1423.
112
+
113
+ Grus, J. I don't like notebooks. In The official Jupyter conference, 2018.
114
+
115
+ He, H. The state of machine learning frameworks in 2019. The Gradient, 2019.
116
+
117
+ He, K., Zhang, X., Ren, S., et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034. IEEE, 2015. doi: 10.1109/ICCV.2015.123.
118
+
119
+ Howard, J. and Thomas, R. Practical deep learning for coders. https://course.fast.ai/, 2019.
120
+
121
+ Kluyver, T., Ragan-Kelley, B., Perez, F., et al. Jupyter notebooks - a publishing format for reproducible computational workflows. In Positioning and Power in Academic Publishing: Players, Agents and Agendas, pp. 87-90. IOS Press, 2016. doi: 10.3233/978-1-61499-649-1-87.
122
+
123
+ Li, Y., Huang, C., Ding, L., et al. Deep learning in bioinformatics: Introduction, application, and perspective in the big data era. Methods, 166:4-21, 2019. doi: 10.1016/j.ymeth.2019.04.008.
124
+
125
+ Nelson, M. J. and Hoover, A. K. Notes on using google colaboratory in ai education. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, pp. 533-534. ACM, 2020. doi: 10.1145/3341525.3393997.
126
+
127
+ Pineau, J. and Bacon, P.-L. Analyzing open data from the city of montreal. In Proceedings of the 2nd International Workshop on Mining Urban Data, pp. 11-16, 2015.
128
+
129
+ Ramamurthy, B. A practical and sustainable model for learning and teaching data science. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education, pp. 169-174. ACM, 2016. doi: 10.1145/ 2839509.2844603.
130
+
131
+ Razavian, A. S., Azizpour, H., Sullivan, J., et al. CNN features off-the-shelf: An astounding baseline for recognition. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW'14, pp. 512- 519, 2014.
132
+
133
+ Ronneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, volume 9351 of Lecture Notes in Computer Science, pp. 234-241, 2015. doi: 10.1007/978-3-319-24574-4_28.
134
+
135
+ Silva, B. N., Khan, M., and Han, K. Towards sustainable smart cities: A review of trends, architectures, components, and open challenges in smart cities. Sustainable Cities and Society, 38:697-713, 2018. doi: 10.1016/j.scs.2018.01.053.
136
+
137
+ Silver, D., Hubert, T., Schrittwieser, J., et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419):1140-1144, 2018. doi: 10.1126/science.aar6404.
138
+
139
+ Song, I.-Y. and Zhu, Y. Big data and data science: what should we teach? Expert Systems, 33(4):364-373, 2016. doi: 10.1111/exsy.12130.
140
+
141
+ Zeng, K., Li, Y., Xu, Y., et al. Introducing ai to undergraduate students via computer vision projects. In Proceedings of the Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, pp. 7956-7957, 2018.
142
+
143
+ Zhang, S., Yao, L., Sun, A., et al. Deep Learning Based Recommender System: A Survey and New Perspectives. Comput. Surveys, 52(1):5, 2019. doi: 10.1145/3285029.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/Z1kgcLVfha/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,128 @@
1
+ § DEEP LEARNING PROJECTS FROM A REGIONAL COUNCIL: AN EXPERIENCE REPORT
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Due to the impact of Deep Learning both in industry and academia, there is a growing demand for graduates with skills in this field, and Universities are starting to offer courses that include Deep Learning subjects. Hands-on assignments that teach students how to tackle Deep Learning tasks are an instrumental part of those courses. However, most Deep Learning assignments have two main drawbacks. First, they use either toy datasets, which are useful to teach concepts but whose solutions do not generalise to real problems, or datasets that require specialised knowledge to fully understand the problem. Secondly, most Deep Learning assignments are focused on training a model, and do not take into account other stages of the Deep Learning pipeline, such as data cleaning or model deployment. In this work, we present an experience in an Artificial Intelligence course where we have tackled the aforementioned drawbacks by using datasets from the regional council where our University is located. Namely, the students of the course have developed several computer vision and natural language processing projects; for instance, a news classifier or an application to colourise historical images. We share the workflow followed to organise this experience, several lessons that we have learned, and challenges that can be faced by other instructors that try to conduct a similar initiative.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ Deep Learning (DL) techniques have become the state-of-the-art approach to tackle problems in several domains such as computer vision (He et al., 2015), natural language processing (Devlin et al., 2019), recommender systems (Zhang et al., 2019), bioinformatics (Li et al., 2019) or games (Silver et al., 2018). Due to its success, there is a growing interest in and demand for experts in this subfield of Machine Learning; and several Universities are incorporating DL subjects in their courses.
12
+
13
+ In the intersected field of Data Science, studies on the design of Data Science courses have emphasised the importance of practical and application-based teaching (Song & Zhu, 2016; Ramamurthy, 2016), and this can be extrapolated to DL courses. In most DL courses today, this is achieved through assignments or projects where students have to train a DL model using a dataset acquired from public dataset portals (such as Kaggle, Amazon Datasets, or Google Dataset Search). However, the construction of a model is just one of the steps in the pipeline to tackle a project using DL - such a pipeline is summarised in Figure 1 - and the rest of the steps are not taken into account in most assignments. For instance, data acquisition, cleaning and labelling are not usually conducted by students since datasets from public portals are usually preprocessed and ready to be employed; or, models are not deployed since they are only evaluated by instructors, and they are not actually used outside the classroom. In addition, most publicly available datasets are either toy examples that have a limited interest for students, or require some knowledge of the field where the data was acquired. These problems can be approached by using open urban and regional data.
14
+
15
+ Many cities and councils around the world are investing a considerable amount of resources to publicly release their data (Silva et al., 2018). This opens the door to the application of Machine Learning to answer several questions of interest for citizens, administrators, businesses, and researchers. Building models using urban or regional data is close to a real project since such data must usually be cleaned; and, to be useful for society, models must be deployed. Moreover, students are familiar with the context of the data. In this paper, we present an initiative in an Artificial Intelligence course where students have developed, from end to end and using DL techniques, several computer vision and natural language processing projects suggested by a regional council. In addition, we introduce the tools and workflow employed to organise such an initiative. Finally, we conclude by presenting the challenges faced and the lessons learned during this experience.
16
+
17
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
18
+
19
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
20
+
27
+ Problem definition → Data acquisition → Data cleaning → Data labeling → Model training → Model evaluation → Model deployment
28
+
29
+ Figure 1. Pipeline of a Supervised DL project. After defining the problem to be solved, data must be acquired and cleaned. Subsequently, the data must be annotated to train a supervised prediction model. Once the model is constructed it must be evaluated using an independent test set that was not employed for training. After the model is evaluated, it must be deployed to facilitate its usage.
30
+
33
+ § 2. EXPERIENCE DESCRIPTION
34
+
35
+ This initiative was conducted in an undergraduate course on Artificial Intelligence. The goal of this experience was the development of DL-based solutions to projects suggested by the regional council where our University is located. The course involved 25 students, with either a Computer Science or Mathematics background, who worked in teams of 2 to 4 members on a total of 9 projects, summarised in Table 1.
36
+
37
+ In the first step of this experience, instructors together with members of the regional council identified several tasks in a series of meetings. Those meetings were instrumental to find a set of sensible problems that could be handled in the context of the course; namely, we focused on projects with enough data that was, at least, partially annotated. Subsequently, a summary of the available projects was provided to the students, who formed teams and selected a project. After that, instructors created an assignment through GitHub classroom ${}^{1}$ that gave each team access to a private GitHub repository. Each repository only contained a README file that explained how to access the data for each project, and some ideas and links explaining how to tackle the project.
38
+
39
+ The first task that the teams faced was the process of acquiring, cleaning and annotating the data. All the datasets were available through public APIs; however, the associated annotation was not usually in the same place. Hence, students had to match several sources of information. Moreover, most datasets were only partially annotated, so students had to clean them.
40
+
41
+ Once these preprocessing steps were conducted, students
42
+
43
+ http://classroom.github.com
44
+
45
+ <table><tr><td>Project name</td><td>#members</td><td>Tasks</td><td>Libraries</td><td>Deployment</td></tr><tr><td>People image retrieval</td><td>4</td><td>Face recognition</td><td>Keras</td><td>Desktop app</td></tr><tr><td>Colourising historical images</td><td>3</td><td>Image colourisation</td><td>FastAI</td><td>Colab</td></tr><tr><td>Colourising old aerial images</td><td>3</td><td>Image colourisation</td><td>FastAI</td><td>Colab</td></tr><tr><td>House detection in aerial images</td><td>3</td><td>Image segmentation</td><td>FastAI</td><td>Colab</td></tr><tr><td>News classifier</td><td>3</td><td>Text classification</td><td>FastAI</td><td>Colab</td></tr><tr><td>Classification of historical images</td><td>2</td><td>Image classification</td><td>FastAI</td><td>Colab</td></tr><tr><td>Dating historical images</td><td>2</td><td>Image classification</td><td>FastAI</td><td>Binder</td></tr><tr><td>Dating museum pieces from images</td><td>2</td><td>Image classification</td><td>FastAI</td><td>Web app</td></tr><tr><td>Classification of museum pieces</td><td>3</td><td>Image classification</td><td>FastAI</td><td>Colab</td></tr></table>
78
+ Table 1. List of projects conducted during the experience.
+
+ trained and evaluated their models using Python as the programming language, and DL libraries like Keras (Chollet et al., 2015) or FastAI (Howard & Thomas, 2019). As development environment, they used Jupyter notebooks (Kluyver et al., 2016), an open-source and browser-based tool that allows the combination of text and code. Jupyter notebooks can be run locally; however, in order to train DL models, it is necessary to use special-purpose hardware like GPUs or TPUs, and most students do not have access to those resources. Hence, Jupyter notebooks were run in Google Colaboratory (Colab) (Nelson & Hoover, 2020), a pre-configured environment with the essential Machine Learning and DL libraries that provides access to free GPUs and TPUs through a Google account. It is worth mentioning that Colab can be linked with a GitHub repository; hence, students were able to easily save changes in their repositories. All the aforementioned tools were previously introduced to students during the course. The only feature that was unknown to them was the access of teams to a GitHub repository, since the rest of the assignments in the course were individual.
79
+
80
+ After training their models, students evaluated them using an independent testing set that was not used for training the models. All models, except for the model in charge of colourising historical images, achieved over a ${90}\%$ accuracy in their corresponding task - note that different metrics were employed for the different tasks. Hence, even if there is room for improvement, they can be considered a success. Finally, students had to deploy their models in a way that made them easy to invoke. Most teams, 6 out of 9, decided to use forms in Colab; 1 team created a Desktop application, 1 created a web application; and the last team employed Binder ${}^{2}$ .
81
+
82
+ In order to evaluate the assignments, we asked the students to document the whole process using Jupyter notebooks that should be stored in the teams' GitHub repository. Moreover, students had to present their work in a public presentation, and produce a 2-minute video where they explained, in a non-technical manner, their work.
83
+
84
+ ${}^{2}$ https://mybinder.org/
85
+
86
+ § 3. LESSONS LEARNED
87
+
88
+ In this section, we present the lessons that we have learned about how to organise similar experiences, and report the challenges that we faced and that should be taken into consideration when designing this kind of assignment. These recommendations are not only based on the instructors' opinion, but are also founded on the students' satisfaction with the experience. To capture the students' satisfaction, we conducted an anonymous and non-compulsory survey developed with Google Forms. The survey consisted of 4 sections (valuation of GitHub, valuation of Jupyter notebooks and Colab, valuation of the projects, and comments). The valuation sections consisted of a list of questions following a 4-point Likert scale ranging from 1 (strongly disagree) to 4 (strongly agree), and the comments section allowed the students to introduce additional comments about the experience. The survey was answered by 15 out of the 25 students. We first describe the lessons learned from an organisational point of view.
89
+
90
+ Regional data. As suggested by previous work (Zeng et al., 2018), when considering assignments for undergraduate students, it is of great benefit to involve them in projects that are closely related to their lives; hence, using regional data is a perfect match for this task. This fact can be seen in the video presentations where, for instance, the members of the team in charge of colourising historical aerial images showed the colourised image of the villages where their families live; or in the people image retrieval project, where the members of the team searched for themselves in the news of the regional council. In the anonymous survey, all the students claimed that they enjoyed working with real data from the regional council. In fact, one of the students wrote the following comment: "These assignments are great, they help you to apply what we have seen during the course in a real problem".
91
+
92
+ Meetings with members of the regional council. In the long term, organising meetings with the members of the regional council has proven to be one of the best decisions taken for this experience. These meetings helped us to frame a set of problems that fit with the subjects explained in the course, solving the problem of finding enough and interesting data in urban datasets (Pineau & Bacon, 2015). In these meetings, instructors had to explain what could be done to the members of the council, and also establish some limits to their expectations - all the projects were proofs of concept that had to be developed in a limited time, approximately 25 hours. In the following years, we plan to incorporate students into the loop of selecting the projects with the regional council.
93
+
94
+ The README file. Initially, each team had access to a GitHub repository with only a README file. In that file, instructors explained how to access the data, and provided some documentation explaining how to tackle the project. Among the documentation, we included links to open-source repositories containing Jupyter notebooks that tackle similar projects, and our first recommendation for the teams was to re-run those notebooks and use them as a basis for their projects. In this way, they had a starting point to know how to organise their datasets and train their models. Moreover, the README file also contained pointers to additional research-oriented tasks; for instance, exploring several state-of-the-art architectures for image classification, or the application of ensemble methods. These research topics allowed some of the teams to dive deeper into the DL techniques.
95
+
96
+ Videos. We asked students to prepare short and non-technical videos explaining their work. The aim was twofold. First, students had to make an effort to summarise their work in an appealing manner. And, secondly, these videos served to show the members of the regional council the kind of tasks that can be solved using DL techniques. This is important to continue this experience in the future with new projects.
97
+
98
+ Now, we focus on the lessons learned from a technical point of view. These lessons can be applied not only to projects based on open-data but also to any DL project.
99
+
100
+ Transfer learning. Training a DL model from scratch is a time-consuming task that requires lots of data and computational resources; and, in most cases, it is not feasible to access such resources. This problem can be faced by applying a widely used technique known as transfer learning (Razavian et al., 2014); a method that reuses a model trained on a source task for a new target task. This considerably reduces the amount of data and time that is needed to construct an accurate DL model. This technique can be employed in almost any DL project; hence, it is important to introduce it during the course so that students can employ it in their assignments.
101
+
102
+ GitHub and GitHub Classroom. In this experience, all the code and models generated for the assignments were stored in GitHub repositories. All the repositories belong to an "organization" managed by the instructors of the course; in this way, students only have access to their team's repositories, while instructors have access to all of them. This approach not only simplifies collaboration among team members, but also improves communication with instructors, since they can directly access the students' code when giving feedback. In addition, instructors can follow the progress of students, since it is possible to monitor the commits and who made them. The usage of GitHub was valued positively by students: most thought that GitHub helped them manage the code of their assignments better, and considered that it was easy to use and facilitated teamwork.
103
+
104
+ Colab. This environment has proven to be enough to train
105
+
106
+ DL models for the assignments. In addition, all of the surveyed students were satisfied with the use of Colab, and considered that it was well integrated with GitHub. An issue that arose when using Colab is its 12-hour limit (that is, after 12 hours, the environment halts); hence, students had to save intermediate checkpoints to avoid losing their work, since in some cases 12 hours were not enough to complete the training process. It would be possible to train for longer using cloud environments like Amazon AWS or Google Cloud, but their configuration and management are more difficult, require adding billing information, and are out of the scope of the course.
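The checkpointing pattern can be sketched as follows; this is a minimal stand-in using a JSON file (the file name and the one-number "model" are invented for illustration; students saved framework-specific checkpoints, typically to mounted Google Drive):

```python
import json
import os

CKPT = "checkpoint.json"  # on Colab this would live on mounted Google Drive

def load_checkpoint():
    """Return the last saved training state, or a fresh one."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"epoch": 0, "weights": [0.0]}

def save_checkpoint(state):
    with open(CKPT, "w") as f:
        json.dump(state, f)

def train(max_epochs, stop_after=None):
    """Resume from the last checkpoint; `stop_after` simulates the 12-hour halt."""
    state = load_checkpoint()
    for epoch in range(state["epoch"], max_epochs):
        state["weights"] = [w + 1.0 for w in state["weights"]]  # stand-in update
        state["epoch"] = epoch + 1
        save_checkpoint(state)  # checkpoint after every epoch
        if stop_after is not None and state["epoch"] >= stop_after:
            break  # the environment halted mid-training
    return state

train(10, stop_after=4)                  # first session is cut short at epoch 4
final = train(10)                        # a later session resumes from the checkpoint
print(final["epoch"], final["weights"])  # 10 [10.0]
os.remove(CKPT)                          # clean up the demo checkpoint
```

Because the state is reloaded at the start of every session, an interrupted run loses at most one epoch of work.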
107
+
108
+ Updates of libraries. DL is a field in constant evolution, and the libraries implementing DL methods are continuously updated. This can lead to code that works for one version of a DL library but fails when the library is updated. Locally, this can be solved by using virtual environments where the versions of the libraries are fixed by the user. However, Colab regularly updates the versions of the libraries included in the underlying virtual machine, and this is a problem for two reasons. First, students found that their code stopped working from one day to the next; and, second, the code from the online tutorials that were used as a basis raised errors when executed. In order to tackle this challenge, we provided the students with a requirements file that can be used to install concrete versions of a set of libraries in Colab.
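Such a requirements file might look as follows (the version numbers are illustrative, not the exact ones we distributed); in a Colab cell it is installed with `!pip install -r requirements.txt`:

```text
# requirements.txt -- pin the library versions the course notebooks were tested with
torch==1.4.0
torchvision==0.5.0
fastai==1.0.61
```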
109
+
110
+ FastAI as DL library. Nowadays, there are two main DL frameworks employed both in industry and academia: Tensorflow and Pytorch (He, 2019). However, for projects like the ones presented in this experience, it is better to use a library that provides straightforward access to different kinds of models, and also implements best practices. During the course, we presented two of those libraries: Keras (Chollet et al., 2015) and FastAI (Howard & Thomas, 2019). For the assignments, teams could choose the library to employ, and all but one used FastAI. This was mainly due to the user-friendly API of this library, which allows users to train state-of-the-art models with just a few lines of code.
111
+
112
+ Data management. The last lesson is related to how teams managed their data. In the Colab environment, data is only kept during the session; hence, teams had to upload their data every time. To deal with this issue, we asked students to create a Jupyter notebook devoted solely to downloading and cleaning the data; once the data was processed, they had to upload it to a file hosting service. In this way, both the members of the team and the instructors could easily access the clean version of the data.
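A minimal sketch of such a data-preparation notebook (the file names, columns, and cleaning rules here are invented for illustration; the real notebooks downloaded data from the open-data portal and uploaded the cleaned result to a file hosting service):

```python
import csv

RAW = "raw_data.csv"      # stand-in for the file downloaded from the open-data portal
CLEAN = "clean_data.csv"  # the version the team uploads to the shared file host

# Simulate the raw download: blank labels and inconsistent formatting.
with open(RAW, "w", newline="") as f:
    f.write("image,label\nimg1.jpg,Cat\nimg2.jpg,\nimg3.jpg, dog \n")

with open(RAW) as f:
    rows = list(csv.DictReader(f))

# Cleaning step: drop rows with missing labels and normalise label text.
clean_rows = [
    {"image": r["image"], "label": r["label"].strip().lower()}
    for r in rows
    if r["label"].strip()
]

with open(CLEAN, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image", "label"])
    writer.writeheader()
    writer.writerows(clean_rows)

print(clean_rows)
```

Keeping this step in its own notebook means the slow download and cleaning run once, and every other notebook starts from the already-clean file.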
113
+
114
+ We focus now on the challenges that should be taken into
115
+
116
+ account when this kind of experience is conducted.
117
+
118
+
119
+
120
+ Instructors' load. One of the main challenges of this experience was the load on instructors, both in terms of organising the assignments and supervising the teams. Instructors must have the time to organise the assignments (that is, to contact and meet with the members of the regional council, and to prepare the materials that guide the students) and also to provide individualised advice to each project team. In addition, in this kind of assignment it is difficult to foresee the difficulties that students will face, which means an additional load in terms of supervision.
121
+
122
+ Reproducing errors. A challenge related to supervision is how to reproduce errors found by the teams. One of the greatest advantages of GitHub repositories and Colab is the chance of accessing the teams' code; therefore, it is easy for instructors to inspect the error messages. However, just inspecting the error messages might not be informative enough, and it might be necessary to re-run the code. This can be challenging since, in some cases, it requires running a lot of code before reaching the point where students encountered the problem; in addition, the hidden state of Jupyter notebooks makes it difficult to reproduce the exact conditions under which students had their problems (Grus, 2018). In order to minimise this challenge, our recommendation for students was to create a dedicated notebook for each training experiment; in this way, it was easier to provide them with feedback to tackle their problems.
123
+
124
+ Methods beyond the curriculum. 4 out of the 9 projects required students to apply DL methods that were not explained during the course, and the other 5 projects included additional tasks for diving deeper into the topics of image and text classification. To deal with this issue, we pointed the students to materials with examples and explanations of the techniques to employ. However, a problem we found with this approach is that students focused on just training the models and did not understand the underlying methods. A solution to this problem is based on the work of the team in charge of segmenting houses from aerial images: this team created a notebook explaining the U-net segmentation architecture (Ronneberger et al., 2015), and provided a toy example showing how to use it. In the future, we will explore this approach.
125
+
126
+ § 4. CONCLUSIONS
127
+
128
+ We have presented an initiative to apply DL methods to create solutions for regional projects. In this experience, students worked with open data that is familiar to them, and they were involved in all the stages of the pipeline required to develop a DL project. The goal of this paper was to present the lessons learned and the challenges involved in organising similar initiatives. Among those lessons, it is worth highlighting the benefits of involving members of the regional council to select a set of projects that motivate the students.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/dNh4TiaOLy_/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,153 @@
1
+ # Putting the "Machine" Back in Machine Learning for Engineering Students
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Computer hardware architecture has played an important role in the recent advances made in deep learning and associated applications. However, effective teaching strategies for hardware architectures for machine learning require a different structure and technical background than classic machine learning. More specifically, not only does the material need to convey necessary machine learning concepts to students, but it must also cover the hardware and software infrastructure concepts required for supporting machine learning systems. In this paper, we describe our approach to designing the course materials along with student assessment and evaluation for the "Hardware Architectures for Machine Learning" course targeting Electrical and Computer Engineering graduate students.
8
+
9
+ ## 1. Introduction
10
+
11
+ With the recent advances in deep learning, machine learning has gained tremendous attention from students and young academic trainees. According to the 2021 AI Index Report from Stanford HAI (Zhang et al., 2021), "The number of courses that teach students the skills necessary to build or deploy a practical AI model on the undergraduate and graduate levels has increased by ${102.9}\%$ and ${41.7}\%$ , respectively, in the last four academic years." While introductory courses for machine learning are ubiquitous in academic institutions, the hardware aspect of machine learning receives much less attention despite its crucial role in advancing deep learning.
12
+
13
+ Understanding the hardware aspect of machine learning is critical for making machine learning models work for various hardware platforms, which can in turn democratize machine learning to a wider audience. As an example, there are around 250 billion devices based on microcontrollers (mic) while there are merely 80 million personal computers in the world (pc). On the other hand, building suitable hardware for machine learning can be critical for the energy efficiency of the machine learning systems used for large scale training (Patterson et al., 2021). As a result, it is critical to have courses teaching the hardware aspect of machine learning.
14
+
15
+ We would be remiss if we did not mention that perhaps the main reason why teaching hardware architectures for machine learning deployment is crucial lies in the deep learning revolution that has taken over the world in the last decade: indeed, none of it would have been possible without the advent of hardware platforms exhibiting large scale parallelism that have enabled the exponential growth in development and deployment of machine learning systems.
16
+
17
+ In this paper, we present our experience in designing the graduate-level course materials along with student assessment and evaluation for the "Hardware Architectures for Machine Learning" class for students majoring in Electrical and Computer Engineering. Compared to an introductory machine learning course, we focus on teaching students to analyze the hardware-related metrics of machine learning algorithms, with a clear focus on deep learning. To achieve this goal, we have had to introduce not only the machine learning but also the hardware concepts, which has inevitably required us to be thoughtful about what material from a standard machine learning course should be included, and what can remain optional. In the following sections, we describe the topic selection process, homework and project design strategy, student feedback from the course, and finally the conclusion.
18
+
19
+ ## 2. Class structure and topics coverage
20
+
21
+ The "Hardware Architectures for Machine Learning" class was designed as graduate-level class, intended for first-year graduate students or advanced senior undergraduate students. The idea of offering the class came during summer of 2018 after the instructor had already run a pilot of a few lectures and homework assignments using machine learning as an application in a separate graduate-level class on energy efficient hardware design. The feedback from students was very positive, and with the help of several enthusiastic teaching assistants (all of whom were Ph.D. students in their second through fourth year of doctoral studies) the class came to fruition in Fall 2018 as a first iteration, and offered
22
+
23
+ ---
24
+
25
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
26
+
27
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
28
+
29
+ ---
30
+
31
+ again in its final form in Spring 2019.
32
+
33
+ The "Hardware Architectures for Machine Learning" class requires as pre-requisites "Introduction to Machine Learning" (either undergraduate or graduate level) and one of "Hardware Arithmetic for Machine Learning," an undergraduate class focusing largely on computer arithmetic used in
34
+
35
+ data intensive hardware architectures, or "Introduction to Computer Architecture," an upper level undergraduate class that most students interested in computer hardware take. The reason for allowing two different computer hardware pathways to enter the class is to encourage participation from students who may have an interest in circuits or logic
36
+
37
+ design vs. computer architecture or system design. Some of
38
+
39
+ the Electrical and Computer Engineering students interested in enrolling had not already taken the "Introduction to Machine Learning" class, so depending on their level of interest or background they were allowed to take it simultaneously while being enrolled in our class. This worked out well, and based on their input, it allowed for better understanding of topics covered in both classes.
40
+
41
+ This course provides an overview of current advances in hardware architectures that can enable fast and energy efficient machine learning applications from the edge to the cloud. Topics include hardware accelerators, hardware-software co-design, and general or application specific system design and resource management for machine learning applications. The course requires no textbook and relies only on publicly available technical papers. The grading scheme included three components to assess student learning and evaluate progress: homework (30%), paper presentation and discussion (15%), and project (55%). To encourage discussion during paper presentations, we allocate 5% of the 15% to discussion participation. As for the project, ${15}\%$ is allocated to each of the first two reports and the remaining ${25}\%$ to the final report.
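The weighting can be sanity-checked with a small computation (the component names below are our own shorthand for the components just described):

```python
# Component weights as stated above: homework 30%, presentation 15%
# (5% of which is discussion), and project 55% (15 + 15 + 25).
weights = {
    "homework": 0.30,
    "presentation": 0.10,
    "discussion": 0.05,
    "project_report_1": 0.15,
    "project_report_2": 0.15,
    "project_final": 0.25,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # components cover the full grade

def final_grade(scores):
    """Weighted sum of per-component scores on a 0-100 scale."""
    return sum(weights[k] * scores[k] for k in weights)

print(final_grade({k: 90 for k in weights}))  # a uniform 90 yields 90.0
```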
42
+
43
+ The primary purpose of the homework assignments was to help students master the material and prepare for the projects. We encouraged students to work together with their classmates to help them understand the basic concepts. However, students were required to do their own homework, unless teaming was explicitly allowed in certain assignments. Homework assignments were due in the evening of the due date. No late homework assignments were accepted.
44
+
45
+ The project was designed to: (i) help students understand and synthesize all of the course concepts; (ii) demonstrate their ability to correctly state and implement the project's goals; and (iii) demonstrate their ability to explore and incorporate good engineering trade-offs in a system/subsystem implementation. All project components should have clearly identified the individual contributions of each team member. Any project proposal, report or presentation
46
+
47
+ could have been submitted up to 5 days late, but subject to a ${10}\%$ per day late penalty.
48
+
49
+ The topics covered in the course were chosen to cover all aspects of basic supervised and unsupervised machine learning and their hardware implementation implications. The lectures were designed to cover material discussed and presented in recently published technical papers in the area, while assignments, paper presentations, and projects were designed to reinforce concepts and enable hands-on learning. A typical schedule of lecture topics, homework assignments, paper presentations, and project reports is shown in Table 1. In this course, we mainly focus on supervised convolutional neural networks, with one lecture on classic supervised learning approaches such as linear and logistic regression and support vector machines, and one lecture on unsupervised learning that focuses on the K-means algorithm. For each machine learning topic, we discuss the corresponding hardware architectures in the literature.
50
+
51
+ ## 3. Homework design
52
+
53
+ We have designed a total of six homework assignments for the course, each related to the material of the ongoing lectures. We split the six assignments into three paper-reading and three implementation assignments. The inclusion of paper reading has enabled coverage of broad topics in hardware architectures for machine learning and hardware-aware machine learning, both of which are active fields of research. Through the reading assignments, students were exposed to state-of-the-art methods and were able to absorb new knowledge from papers.
54
+
55
+ ### 3.1. Design of Implementation-based Homework Assignments
56
+
57
+ The goal of implementation-based homework assignments is to strengthen students' capabilities for implementing modern machine learning models, as well as help students learn the tools to explore the hardware support for machine learning models. We gradually guided the students to learn to implement Convolutional Neural Networks (CNNs) in PyTorch, use hardware architecture models, and finally optimize both the hardware and the model to achieve the best overall performance. To facilitate students' understanding, one of the key learning strategies we found useful for our students was visualizing the empirical data obtained from each of the assignments. Visualization can aid students' understanding by having them reason and explain why certain plots look the way they do and what general conclusions can be drawn from those behaviors.
58
+
59
+ CNN Implementation The first homework assignment involved the implementation of the well-known LeNet network (LeCun et al., 1998) with the MNIST dataset (LeCun
60
+
61
+ Table 1. Course schedule. We start with supervised learning, dive into deep learning, and close with unsupervised learning. HW: Hardware; DNN: Deep Neural Networks; FPGA: Field-programmable gate array.
62
+
63
+ <table><tr><td>LECTURE</td><td>HOMEWORK</td><td>Project</td><td>PAPER PRES.</td></tr><tr><td>INTRO TO HW ARCHITECTURES FOR ML</td><td/><td/><td/></tr><tr><td>HW ARCHITECTURES FOR SUPERVISED LEARNING</td><td/><td/><td/></tr><tr><td>TOOLS FOR DEEP LEARNING (DL)</td><td>1 OUT</td><td>PROJECT TOPICS OUT</td><td/></tr><tr><td>DNN OVERVIEW AND IMPLICATIONS IN HW</td><td/><td/><td/></tr><tr><td>PAPER PRESENTATIONS</td><td>1 DUE/2 OUT</td><td/><td>SESSION I</td></tr><tr><td>DNN LATENCY: WHERE DO THE CYCLES GO? DNN ENERGY EFFICIENCY: WHERE DO THE JOULES GO?</td><td/><td>PROJECT SELECTION DUE</td><td/></tr><tr><td>PAPER PRESENTATIONS</td><td/><td/><td>SESSION II</td></tr><tr><td>CUSTOM HARDWARE ARCHITECTURES FOR DNNS</td><td>2 DUE/3 OUT</td><td/><td/></tr><tr><td>DNNS COMPRESSION FOR EFFICIENT HW IMPLEMENTATION</td><td/><td/><td/></tr><tr><td>Project presentations I</td><td/><td>GROUP 1 - 1ST REPORT DUE</td><td/></tr><tr><td>Project presentations II</td><td/><td>GROUP 2 - 1ST REPORT DUE</td><td/></tr><tr><td>PAPER PRESENTATIONS</td><td/><td/><td>SESSION III</td></tr><tr><td>LOW & VARIABLE PRECISION ARCHITECTURES FOR DNNS</td><td>3 DUE/4 OUT</td><td/><td/></tr><tr><td>FPGA-BASED ARCHITECTURES FOR DNNS</td><td/><td/><td/></tr><tr><td>PAPER PRESENTATIONS</td><td/><td/><td>SESSION IV</td></tr><tr><td>HARDWARE ARCHITECTURE-DNN MODEL CO-DESIGN</td><td/><td/><td/></tr><tr><td>STORAGE EFFICIENT ARCHITECTURES FOR DNNS</td><td>4 DUE/5 OUT</td><td/><td/></tr><tr><td>Project Presentation I</td><td/><td>GROUP 2 - 2ND REPORT DUE</td><td/></tr><tr><td>Project Presentation II</td><td/><td>Group 1 - 2ND REPORT DUE</td><td/></tr><tr><td>DNNS ON MOBILE ARCHITECTURES</td><td/><td/><td/></tr><tr><td>PAPER PRESENTATIONS</td><td/><td/><td>SESSION V</td></tr><tr><td>EDGE-SERVER SOLUTIONS FOR DNNS</td><td>5 DUE/6 OUT</td><td/><td/></tr><tr><td>HW ARCHS FOR DISTRIBUTED, FEDERATED LEARNING</td><td/><td/><td/></tr><tr><td>HW ARCHS FOR UNSUPERVISED LEARNING</td><td/><td/><td/></tr><tr><td>PAPER 
PRESENTATIONS</td><td/><td/><td>SESSION VI</td></tr><tr><td>FINAL PROJECT POSTERS AND DEMOS</td><td>6 DUE</td><td>FINAL REPORT DUE</td><td/></tr></table>
64
+
+ & Cortes, 2010) using PyTorch (Paszke et al., 2019). As a starting point for students, we provided boilerplate code for training a standard LeNet on MNIST using PyTorch. In the homework questions included to assess learning, we asked students to try various hyperparameters involved in training as a hands-on experience for understanding the sources of randomness in modern machine learning systems. Furthermore, we asked students to identify the number of floating point operations (FLOP) needed to carry out an inference, which was the first step in building up students' awareness that the computational intensity of a machine learning model is as important as its final predictive accuracy. To facilitate students' understanding, we asked the students to visualize the experimental data, including accuracy vs. FLOP, FLOP vs. runtime, and accuracy vs. runtime for models characterized by different hyperparameters.
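The FLOP count students were asked to derive can be worked out by hand. The sketch below assumes a simplified LeNet variant on 28x28 MNIST inputs (5x5 unpadded convolutions, each followed by 2x2 pooling) and counts one multiply-accumulate as two FLOPs; the exact figure depends on the variant used.

```python
def conv2d_flops(in_c, out_c, k, out_h, out_w):
    """FLOPs of a k x k convolution, counting a multiply-accumulate as 2 FLOPs."""
    return 2 * out_c * out_h * out_w * (k * k * in_c)

def linear_flops(in_f, out_f):
    return 2 * in_f * out_f

# Layer shapes for the assumed simplified LeNet on 28x28 MNIST.
flops = (
    conv2d_flops(1, 6, 5, 24, 24)    # conv1: 28 -> 24, pooled to 12
    + conv2d_flops(6, 16, 5, 8, 8)   # conv2: 12 -> 8, pooled to 4
    + linear_flops(16 * 4 * 4, 120)  # fc1
    + linear_flops(120, 84)          # fc2
    + linear_flops(84, 10)           # output layer
)
print(f"{flops:,} FLOPs per inference")  # 563,280 for this variant
```

Tabulating the same per-layer counts for each hyperparameter setting is what feeds the accuracy-vs.-FLOP plots mentioned above.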
90
+
91
+ Hardware Modeling In our second implementation-based homework, we guide the students to understand a model built upon a CNN hardware accelerator (Gao et al., 2017). We have provided a Python environment for the students which includes boilerplate code to interact with the hardware models. In the assignment, we ask students to first understand the hardware architecture by reading the reference papers and guide them with reasoning questions about the content. We also ask students to change the boilerplate code to reflect different hardware architecture designs and their resulting performance. Similar to the previous assignment, students were asked to visualize the data to help them
92
+
93
+ further interpret empirically the significance of the results. Specifically, one of the items required was visualizing the trade-offs between throughput and the resulting design area of a possible hardware accelerator implementing the model, given certain design knobs.
94
+
95
+ CNN and Hardware Co-exploration The last implementation-based assignment offers a synergy between the first two assignments: we guide the students to alter neural architectures to observe the resulting impact on a fixed hardware design, and also to alter the hardware architecture given a predefined CNN. In addition, we ask students to explore changing both hardware and neural architectures by visualizing the resulting performance metrics. Specifically, we ask students to provide scatter plots of the solutions, comparing the model's predictive performance and execution time. Finally, students perform random search-based optimization to identify a good CNN-hardware implementation pair and reason about the effectiveness of the obtained solution.
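The random search step can be sketched as follows. The accuracy and latency models below are invented toy proxies (the assignment used the actual trained CNNs and the provided hardware model), but the search loop over the joint CNN/hardware space has the same shape:

```python
import random

random.seed(42)

# Toy proxy models, invented for illustration: accuracy improves with model
# size, latency grows with model size and shrinks with hardware parallelism.
def accuracy(width, depth):
    return 1.0 - 1.0 / (width * depth)

def latency_ms(width, depth, pes):
    return (width * depth) / pes

def objective(cfg):
    acc = accuracy(cfg["width"], cfg["depth"])
    lat = latency_ms(cfg["width"], cfg["depth"], cfg["pes"])
    return acc - 0.01 * lat  # trade accuracy off against execution time

best = None
for _ in range(200):  # random search over the joint CNN/hardware space
    cfg = {
        "width": random.choice([8, 16, 32, 64]),  # CNN channel width
        "depth": random.choice([2, 4, 6, 8]),     # number of conv blocks
        "pes": random.choice([16, 64, 256]),      # processing elements in the accelerator
    }
    if best is None or objective(cfg) > objective(best):
        best = cfg

print(best, round(objective(best), 3))
```

Plotting every sampled `(accuracy, latency)` pair, with the kept `best` highlighted, produces exactly the kind of scatter plot the assignment asks for.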
96
+
97
+ ### 3.2. Design of Reading-based Homework Assignments
98
+
99
+ The reading-based assignments are designed to improve students' capability in understanding fundamentals and absorbing knowledge from recent technical papers. Similar to the implementation-based assignments, the chosen papers align with the lecture material. To achieve this, for each subject, we identify relevant papers and split them into two categories. The first category includes topics to be covered in the class lectures, while the second category is used for the assignments. To aid students in absorbing the main technical content conveyed in the paper, our assignments consist of various questions that encourage students to follow along.
100
+
101
+ In addition, we have also included a few (e.g., one to two) questions to help students think critically about the papers they read. Usually these questions start with a scenario of a potential failure mode for the proposed method described in the paper and guide the students to generalize the failure mode in the described scenarios. As an example, we ask students to read PuDianNao (Liu et al., 2015), and one of the questions for the students is shown below.
102
+
103
+ Section 6.4 claims that "the efficiency and accuracy of scalable-effort classifiers is a strong function of $\delta$ , which can be easily adjusted at runtime to an appropriate value." Consider the scenario where the input data characteristics change over time, then the optimal $\delta$ should probably also change. Can you propose an algorithm to address this problem?
104
+
105
+ One thing to note is that the papers chosen for the reading assignments feature a balanced mix of hardware and machine learning content so as to not overwhelm students whose background is lacking in one aspect or another. To achieve this, the assignments point to specific sections in the paper that students should read carefully, which makes for a more tenable learning experience, even if the students cannot grasp every little detail described in the paper.
106
+
107
+ ## 4. Paper presentations
108
+
109
+ To aid students in becoming familiar with the state-of-the-art in this area, we have included paper presentation and discussion sessions. In each session, four to five students present a paper of their choice, and the rest of the class is required to participate by asking questions or joining the discussion. We provided students with a list of papers to choose from, and they selected the papers they wanted to present at the beginning of the semester. Per our past experience, students are reluctant to sign up for a presentation early in the semester, so to incentivize early participation we offer a few points of extra credit for such sign-ups. As shown in Table 1, paper presentation sessions are scattered uniformly throughout the course and support in topic coverage the material presented in lectures.
110
+
111
+ ## 5. Project design
112
+
113
+ The goal of the projects is to take students through a hardware-aware machine learning project experience, which exposes them to the full life cycle of a complete hardware architecture design for machine learning: motivation, problem definition, solution, and presentation. We provide four checkpoints for a project: proposal, first report, class presentation, and final report. Since students
114
+
115
+ have various backgrounds coming into the course, we have provided a wide variety of predefined project topics for students to select from, which has greatly helped students narrow down a project based on their background, experience, and interest. We provide an example below.
116
+
117
+ Topic: Implementation of the idea of "Adaptive Neural Network for Efficient Inference (Bolukbasi et al., 2017)" with current state-of-the-art deep neural networks such as NasNet (Zoph et al., 2018) and MobileNetV2 (Sandler et al., 2018).
118
+
119
+ Background: Dynamic network inference is one of the techniques used to reduce the execution latency. The goal of this project is to determine whether adaptive neural network design is a promising way for more efficient network inference given modern neural networks. Please implement the approach in this paper, and identify the challenges and limitations of such an approach considering its hardware implications.
120
+
121
+ ## 6. Student feedback
122
+
123
+ The course received positive evaluations from students during both the initial and subsequent offerings (averaging 4.4-4.6 on a scale of 1-5). Student comments reflect their positive experience and provide some insight into what makes for a balanced coverage of topics and a suitable learning process. In general, the breadth of topic coverage was welcomed: "The course topics were sufficiently diverse and they covered both hardware, software, and hw-sw co-design approaches really well." In particular, the homework assignments were found to be well designed to enable learning and material understanding: "The assignments were not ridiculously long; hence, I had enough time to think through about the questions, improve my implementations, understand the essence of the field, and do extra readings on my own at times." or "Great class, I really liked the homework assignments." Overall, the course was viewed positively for how students were engaged in the learning process: "The courses were designed well to allow for good technical discussion in the class. I don't think that I participated in "active learning" by that degree in any other class."
124
+
125
+ ## 7. Conclusion
126
+
127
+ We have designed a graduate-level course that focuses on the hardware aspect of modern machine learning. Due to the interdisciplinary nature of the course, we focus on deep learning, adopt paper presentation sessions to foster discussion, design homework that prepares students for the course projects and exposes them to the state-of-the-art, and provide predefined projects to help our students succeed. This approach worked well empirically, as the feedback from students was positive.
128
+
129
+ ## References
130
+
131
+ Why tinyml is a giant opportunity. https://venturebeat.com/2020/01/11/why-tinyml-is-a-giant-opportunity/. Accessed: 2021-06-24.
132
+
133
+ Gartner says worldwide pc shipments grew 10.7% in fourth quarter of 2020 and 4.8% for the year. https://www.gartner.com/en/newsroom/press-releases/2021-01-11-gartner-says-worldwide-pc-shipments-grew-10-point-7-percent-in-the-fourth-quarter-of-2020-and-4-point-8-percent-for-the-year. Accessed: 2021-06-24.
134
+
135
+ Bolukbasi, T., Wang, J., Dekel, O., and Saligrama, V. Adaptive neural networks for efficient inference. In International Conference on Machine Learning, pp. 527-536. PMLR, 2017.
136
+
137
+ Gao, M., Pu, J., Yang, X., Horowitz, M., and Kozyrakis, C. Tetris: Scalable and efficient neural network acceleration with 3D memory. In Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 751-764, 2017.
138
+
139
+ LeCun, Y. and Cortes, C. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/.
140
+
141
+ LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
142
+
143
+ Liu, D., Chen, T., Liu, S., Zhou, J., Zhou, S., Temam, O., Feng, X., Zhou, X., and Chen, Y. Pudiannao: A polyvalent machine learning accelerator. ACM SIGARCH Computer Architecture News, 43(1):369-381, 2015.
144
+
145
+ Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. Pytorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703, 2019.
146
+
147
+ Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L.- M., Rothchild, D., So, D., Texier, M., and Dean, J. Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350, 2021.
148
+
149
+ Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4510-4520, 2018.
150
+
151
+ Zhang, D., Mishra, S., Brynjolfsson, E., Etchemendy, J., Ganguli, D., Grosz, B., Lyons, T., Manyika, J., Niebles, J. C., Sellitto, M., et al. The ai index 2021 annual report. arXiv preprint arXiv:2103.06312, 2021.
152
+
153
+ Zoph, B., Vasudevan, V., Shlens, J., and Le, Q. V. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8697-8710, 2018.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/dNh4TiaOLy_/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,205 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § PUTTING THE "MACHINE" BACK IN MACHINE LEARNING FOR ENGINEERING STUDENTS
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Computer hardware architecture has played an important role in the recent advances made in deep learning and associated applications. However, effective teaching strategies for hardware architectures for machine learning require a different structure and technical background than classic machine learning. More specifically, not only does the material need to convey necessary machine learning concepts to students, but also cover the hardware and software infrastructure concepts required for supporting machine learning systems. In this paper, we describe our approach to designing the course materials along with student assessment and evaluation for the "Hardware Architectures for Machine Learning" course targeting Electrical and Computer Engineering graduate students.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ With the recent advances in deep learning, machine learning has gained tremendous attention from students and young academic trainees. According to the 2021 AI Index Report from Stanford HAI (Zhang et al., 2021), "The number of courses that teach students the skills necessary to build or deploy a practical AI model on the undergraduate and graduate levels has increased by 102.9% and 41.7%, respectively, in the last four academic years." While introductory courses for machine learning are ubiquitous in academic institutions, the hardware aspect of machine learning receives much less attention despite its crucial role in advancing deep learning.
12
+
13
+ Understanding the hardware aspect of machine learning is critical for making machine learning models work on various hardware platforms, which can in turn democratize machine learning to a wider audience. As an example, there are around 250 billion devices based on microcontrollers (mic) while there are merely 80 million personal computers in the world (pc). On the other hand, building suitable hardware for machine learning can be critical for the energy efficiency of the machine learning systems used for large scale training (Patterson et al., 2021). As a result, courses teaching the hardware aspect of machine learning are essential.
14
+
15
+ We would be remiss if we did not mention that perhaps the main reason why teaching hardware architectures for machine learning deployment is crucial lies in the deep learning revolution that has taken over the world in the last decade: indeed, none of it would have been possible without the advent of hardware platforms exhibiting large scale parallelism, which have enabled the exponential growth in development and deployment of machine learning systems.
16
+
17
+ In this paper, we present our experience in designing the graduate-level course materials along with student assessment and evaluation for the "Hardware Architectures for Machine Learning" class for students majoring in Electrical and Computer Engineering. Compared to an introductory machine learning course, we focus on teaching students to analyze the hardware-related metrics of machine learning algorithms, with a clear focus on deep learning. To achieve this goal, we have had to introduce not only the machine learning but also the hardware concepts, which has inevitably required us to be thoughtful about what material from a standard machine learning course should be included, and what can become optional. In the following sections, we describe the topic selection process, homework and project design strategy, student feedback from the course, and finally the conclusion.
18
+
19
+ § 2. CLASS STRUCTURE AND TOPICS COVERAGE
20
+
21
+ The "Hardware Architectures for Machine Learning" class was designed as a graduate-level class, intended for first-year graduate students or advanced senior undergraduate students. The idea of offering the class came during the summer of 2018, after the instructor had already run a pilot of a few lectures and homework assignments using machine learning as an application in a separate graduate-level class on energy efficient hardware design. The feedback from students was very positive, and with the help of several enthusiastic teaching assistants (all of whom were Ph.D. students in their second through fourth year of doctoral studies) the class came to fruition in Fall 2018 as a first iteration, and was offered
22
+
23
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
24
+
25
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
26
+
27
+ again in its final form in Spring 2019.
28
+
29
+ The "Hardware Architectures for Machine Learning" class requires as pre-requisites "Introduction to Machine Learning" (either undergraduate or graduate level) and one of "Hardware Arithmetic for Machine Learning," an undergraduate class focusing largely on computer arithmetic used in data intensive hardware architectures, or "Introduction to Computer Architecture," an upper level undergraduate class that most students interested in computer hardware take. The reason for allowing two different computer hardware pathways to enter the class is to encourage participation from students who may have an interest in circuits or logic design vs. computer architecture or system design. Some of the Electrical and Computer Engineering students interested in enrolling had not already taken the "Introduction to Machine Learning" class, so depending on their level of interest or background they were allowed to take it simultaneously while being enrolled in our class. This worked out well, and based on their input, it allowed for better understanding of topics covered in both classes.
36
+
37
+ This course provides an overview of current advances in hardware architectures that can enable fast and energy efficient machine learning applications from the edge to the cloud. Topics include hardware accelerators, hardware-software co-design, and general or application specific system design and resource management for machine learning applications. The course requires no textbook and relies only on publicly available technical papers. Grading included three components to assess student learning and evaluate progress: homework (30%), paper presentation and discussion (15%), and the project (55%). To encourage discussion during paper presentations, 5% of the 15% is allocated to discussion. For the project, 15% is allocated to each of the first two reports and the remaining 25% to the final report.
38
+
39
+ The primary purpose of the homework assignments was to help students master the material and prepare for the projects. We encouraged students to work together with their classmates to help them understand the basic concepts. However, students were required to do their own homework, unless teaming was explicitly allowed in certain assignments. Homework assignments were due in the evening of the due date. No late homework assignments were accepted.
40
+
41
+ The project was designed to: (i) help students understand and synthesize all of the course concepts; (ii) demonstrate their ability at correctly stating and implementing the project's goals; (iii) demonstrate their ability to explore and incorporate good engineering trade-offs in a system/subsystem implementation. All project components should have clearly identified the individual contributions of each team member. Any project proposal, report or presentation could have been submitted up to 5 days late, but subject to a 10% per day late penalty.
44
+
45
+ The topics covered in the course were chosen to cover all aspects of basic supervised and unsupervised machine learning and their hardware implementation implications. The lectures were designed to cover material discussed and presented in recently published technical papers in the area, while assignments, paper presentations, and projects were designed to reinforce concepts and enable hands-on learning. A typical schedule of lecture topics, homework assignments, paper presentations, and project reports is shown in Table 1. In this course, we mainly focus on supervised convolutional neural networks, with one lecture on classic supervised learning approaches such as linear and logistic regression and support vector machines, and one lecture on unsupervised learning that focuses on the K-means algorithm. For each machine learning topic, we discuss the corresponding hardware architectures in the literature.
46
+
47
+ § 3. HOMEWORK DESIGN
48
+
49
+ We have designed six homework assignments for the course, each related to the material of the ongoing lectures. We split the six assignments into three paper reading and three implementation assignments. The inclusion of paper reading has enabled coverage of broad topics in hardware architectures for machine learning and hardware-aware machine learning, both of which are active fields of research. Through the reading assignments, students were exposed to state-of-the-art methods and were able to absorb new knowledge from papers.
50
+
51
+ § 3.1. DESIGN OF IMPLEMENTATION-BASED HOMEWORK ASSIGNMENTS
52
+
53
+ The goal of implementation-based homework assignments is to strengthen students' capabilities for implementing modern machine learning models, as well as help students learn the tools to explore the hardware support for machine learning models. We gradually guided the students to learn to implement Convolutional Neural Networks (CNNs) in PyTorch, use hardware architecture models, and finally optimize both the hardware and the model to achieve the best overall performance. To facilitate students' understanding, one of the key learning strategies we found useful for our students was visualizing the empirical data obtained from each of the assignments. Visualization can aid students' understanding by having them reason and explain why certain plots look the way they do and what general conclusions can be drawn from those behaviors.
54
+
55
+ CNN Implementation The first homework assignment involved the implementation of the well-known LeNet network (LeCun et al., 1998) with the MNIST dataset (LeCun
56
+
57
+ Table 1. Course schedule. We start with supervised learning, dive into deep learning, and close with unsupervised learning. HW: Hardware; DNN: Deep Neural Networks; FPGA: Field-programmable gate array.
58
+
59
+ | LECTURE | HOMEWORK | PROJECT | PAPER PRES. |
+ | --- | --- | --- | --- |
+ | INTRO TO HW ARCHITECTURES FOR ML | X | X | X |
+ | HW ARCHITECTURES FOR SUPERVISED LEARNING | X | X | X |
+ | TOOLS FOR DEEP LEARNING (DL) | 1 OUT | PROJECT TOPICS OUT | X |
+ | DNN OVERVIEW AND IMPLICATIONS IN HW | X | X | X |
+ | PAPER PRESENTATIONS | 1 DUE/2 OUT | X | SESSION I |
+ | DNN LATENCY: WHERE DO THE CYCLES GO? DNN ENERGY EFFICIENCY: WHERE DO THE JOULES GO? | X | PROJECT SELECTION DUE | X |
+ | PAPER PRESENTATIONS | X | X | SESSION II |
+ | CUSTOM HARDWARE ARCHITECTURES FOR DNNS | 2 DUE/3 OUT | X | X |
+ | DNNS COMPRESSION FOR EFFICIENT HW IMPLEMENTATION | X | X | X |
+ | PROJECT PRESENTATIONS I | X | GROUP 1 - 1ST REPORT DUE | X |
+ | PROJECT PRESENTATIONS II | X | GROUP 2 - 1ST REPORT DUE | X |
+ | PAPER PRESENTATIONS | X | X | SESSION III |
+ | LOW & VARIABLE PRECISION ARCHITECTURES FOR DNNS | 3 DUE/4 OUT | X | X |
+ | FPGA-BASED ARCHITECTURES FOR DNNS | X | X | X |
+ | PAPER PRESENTATIONS | X | X | SESSION IV |
+ | HARDWARE ARCHITECTURE-DNN MODEL CO-DESIGN | X | X | X |
+ | STORAGE EFFICIENT ARCHITECTURES FOR DNNS | 4 DUE/5 OUT | X | X |
+ | PROJECT PRESENTATION I | X | GROUP 2 - 2ND REPORT DUE | X |
+ | PROJECT PRESENTATION II | X | GROUP 1 - 2ND REPORT DUE | X |
+ | DNNS ON MOBILE ARCHITECTURES | X | X | X |
+ | PAPER PRESENTATIONS | X | X | SESSION V |
+ | EDGE-SERVER SOLUTIONS FOR DNNS | 5 DUE/6 OUT | X | X |
+ | HW ARCHS FOR DISTRIBUTED, FEDERATED LEARNING | X | X | X |
+ | HW ARCHS FOR UNSUPERVISED LEARNING | X | X | X |
+ | PAPER PRESENTATIONS | X | X | SESSION VI |
+ | FINAL PROJECT POSTERS AND DEMOS | 6 DUE | FINAL REPORT DUE | X |
142
+
167
+ & Cortes, 2010) using PyTorch (Paszke et al., 2019). As a starting point, we provided boilerplate code for training a standard LeNet on MNIST using PyTorch. In the homework questions included to assess learning, we asked students to try various hyperparameters involved in training as a hands-on experience for understanding the sources of randomness in modern machine learning systems. Furthermore, we asked students to identify the number of floating point operations (FLOP) needed to carry out an inference, which was the first step in building up students' awareness that the computational intensity of a machine learning model is as important as its final predictive accuracy. To facilitate understanding, we asked the students to visualize the experimental data, including accuracy vs. FLOP, FLOP vs. runtime, and accuracy vs. runtime for models characterized by different hyperparameters.
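The FLOP-counting part of the assignment can be sketched with simple closed-form layer arithmetic (one multiply-accumulate counted as two FLOPs). The layer shapes below assume the classic LeNet-5 configuration with a 32x32 input, 5x5 kernels, and 2x2 pooling; the exact network used in the assignment may differ.

```python
# Back-of-the-envelope FLOP count for a LeNet-5-style network.
# Each multiply-accumulate is counted as 2 FLOPs.

def conv_flops(c_in, k, c_out, h_out, w_out):
    """FLOPs of a k x k convolution producing a c_out x h_out x w_out output."""
    return 2 * c_in * k * k * c_out * h_out * w_out

def fc_flops(n_in, n_out):
    """FLOPs of a fully connected layer."""
    return 2 * n_in * n_out

layers = [
    conv_flops(1, 5, 6, 28, 28),    # conv1: 1x32x32 -> 6x28x28
    conv_flops(6, 5, 16, 10, 10),   # conv2: 6x14x14 -> 16x10x10 (after pooling)
    fc_flops(16 * 5 * 5, 120),      # fc1
    fc_flops(120, 84),              # fc2
    fc_flops(84, 10),               # fc3
]
total = sum(layers)
print(f"total inference FLOPs: {total:,}")   # 833,040 for this configuration
```

A per-layer breakdown like this is what makes the accuracy-vs-FLOP plots in the assignment easy to produce: the second convolution dominates the count, so hyperparameters that touch it move the x-axis the most.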
168
+
169
+ Hardware Modeling In our second implementation-based homework, we guide the students to understand a model built upon a CNN hardware accelerator (Gao et al., 2017). We have provided a Python environment for the students which includes boilerplate code to interact with the hardware models. In the assignment, we ask students to first understand the hardware architecture by reading the reference papers, and guide them with reasoning questions about the content. We also ask students to change the boilerplate code to reflect different hardware architecture designs and their resulting performance. Similar to the previous assignment, students were asked to visualize the data to help them empirically interpret the significance of the results. Specifically, one of the items required was visualizing the trade-offs between throughput and the resulting design area of a possible hardware accelerator implementing the model, given certain design knobs.
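The kind of throughput-vs-area trade-off students visualize can be mimicked with a deliberately simplified analytical model. The sketch below is our own illustration with made-up constants; it is not the accelerator model from Gao et al. (2017), but it exhibits the same qualitative behavior: area grows linearly with the processing-element (PE) count while throughput saturates at a memory-bandwidth roofline.

```python
# A toy analytical accelerator model (all constants are illustration values).

def accelerator_point(n_pe, clock_ghz=1.0, mem_bw_gflops=512.0,
                      pe_area_mm2=0.02, base_area_mm2=2.0):
    """Return (throughput in GFLOP/s, area in mm^2) for an array of n_pe PEs."""
    compute_bound = 2.0 * n_pe * clock_ghz          # one MAC (2 FLOPs) per PE per cycle
    throughput = min(compute_bound, mem_bw_gflops)  # roofline-style memory cap
    area = base_area_mm2 + n_pe * pe_area_mm2       # area grows linearly with PEs
    return throughput, area

for n_pe in (16, 64, 256, 1024):
    t, a = accelerator_point(n_pe)
    print(f"{n_pe:5d} PEs: {t:7.1f} GFLOP/s, {a:6.2f} mm^2")
```

Plotting these points makes the design knob visible: beyond the bandwidth-bound PE count, extra area buys no throughput.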
172
+
173
+ CNN and Hardware Co-exploration The last implementation-based assignment offers a synergy between the first two assignments: we guide the students to alter neural architectures and observe the resulting impact on fixed hardware, and also to alter the hardware architecture given a predefined CNN. In addition, we ask students to explore changing both hardware and neural architectures by visualizing the resulting performance metrics. Specifically, we ask students to provide scatter plots comparing the solutions' predictive performance and execution time. Finally, students perform random search-based optimization to identify a good CNN-hardware implementation pair and reason about the effectiveness of the obtained solution.
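The random co-search workflow can be sketched as follows. The accuracy and latency functions here are stand-in formulas of our own, not measurements from the assignment; the point is the joint sampling of a model knob and a hardware knob under a latency constraint.

```python
import random

# Toy random co-search over (CNN width, PE count) pairs.
# accuracy_proxy and latency_ms are made-up stand-in models.

def accuracy_proxy(width):
    """Stand-in accuracy: wider models do better, with diminishing returns."""
    return 1.0 - 0.5 / width

def latency_ms(width, n_pe):
    """Stand-in latency: compute grows with width^2, speedup with PE count."""
    flops = 1e6 * width * width
    return flops / (2e6 * n_pe)

def co_search(trials=200, budget_ms=2.0, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        width = rng.choice([1, 2, 4, 8, 16])
        n_pe = rng.choice([16, 64, 256])
        if latency_ms(width, n_pe) <= budget_ms:       # hardware constraint
            cand = (accuracy_proxy(width), width, n_pe)
            if best is None or cand > best:
                best = cand
    return best

acc, width, n_pe = co_search()
print(f"best feasible pair: width={width}, n_pe={n_pe}, accuracy~{acc:.3f}")
```

Collecting every sampled (accuracy, latency) pair, rather than only the best one, yields exactly the scatter plot the assignment asks for.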
174
+
175
+ § 3.2. DESIGN OF READING-BASED HOMEWORK ASSIGNMENTS
176
+
177
+ The reading-based assignments are designed to improve students' capability in understanding fundamentals and absorbing knowledge from recent technical papers. Similar to the implementation-based assignments, the chosen papers align with the lecture material. To achieve this, for each subject, we identify relevant papers and split them into two categories. The first category includes topics to be covered in the class lectures, while the second category is used for the assignments. To aid students in absorbing the main technical content conveyed in the paper, our assignments consist of various questions that encourage students to follow along.
178
+
179
+ In addition, we have included a few (e.g., one to two) questions per assignment to help students think critically about the papers they read. These questions usually start from a scenario exposing a potential failure mode of the method proposed in the paper, and guide the students to generalize from that scenario. As an example, we ask students to read PuDianNao (Liu et al., 2015); one of the questions is shown below.
180
+
181
+ Section 6.4 claims that "the efficiency and accuracy of scalable-effort classifiers is a strong function of $\delta$, which can be easily adjusted at runtime to an appropriate value." Consider the scenario where the input data characteristics change over time; the optimal $\delta$ should then probably also change. Can you propose an algorithm to address this problem?
182
+
183
+ One thing to note is that the papers chosen for the reading assignments contained a balanced mix of hardware and machine learning content, so as to not overwhelm students whose background was lacking in one aspect or the other. To achieve this, the assignments point to specific sections in the paper that students should read carefully, which makes for a more tenable learning experience even if the students cannot grasp every little detail described in the paper.
184
+
185
+ § 4. PAPER PRESENTATIONS
186
+
187
+ To help students become familiar with the state of the art in this area, we have included paper presentation and discussion sessions. In each session, four to five students present a paper of their choice, and the rest of the class is required to participate by asking questions or joining the discussion. We provided students with a list of papers to choose from, and they selected the papers they wanted to present at the beginning of the semester. Per our past experience, students are reluctant to sign up for a presentation early in the semester, so to incentivize early participation we offer a few points of extra credit for such sign-ups. As shown in Table 1, paper presentation sessions are scattered uniformly throughout the course and complement, in topic coverage, the material presented in lectures.
188
+
189
+ § 5. PROJECT DESIGN
190
+
191
+ The goal of the projects is to take students through a hardware-aware machine learning project experience, which exposes them to the life cycle of a complete hardware architecture design for machine learning. This includes motivation, problem definition, solution, and presentation. We provide four checkpoints for a project: proposal, first report, class presentation, and final report. Since students come into the course with various backgrounds, we have provided a wide variety of predefined project topics for students to select from, which has greatly helped students narrow down a project based on their background, experience, and interest. We provide an example below.
194
+
195
+ Topic: Implementation of the idea of "Adaptive Neural Network for Efficient Inference (Bolukbasi et al., 2017)" with current state-of-the-art deep neural networks such as NasNet (Zoph et al., 2018) and MobileNetV2 (Sandler et al., 2018).
196
+
197
+ Background: Dynamic network inference is one of the techniques used to reduce the execution latency. The goal of this project is to determine whether adaptive neural network design is a promising way for more efficient network inference given modern neural networks. Please implement the approach in this paper, and identify the challenges and limitations of such an approach considering its hardware implications.
198
+
199
+ § 6. STUDENT FEEDBACK
200
+
201
+ The course received positive evaluations from students during both the initial and subsequent offerings (averaging 4.4-4.6 on a scale of 1-5). Student comments reflect their positive experience and provide some insight into what makes for a balanced coverage of topics and a suitable learning process. In general, the breadth of topic coverage was welcome: "The course topics were sufficiently diverse and they covered both hardware, software, and hw-sw co-design approaches really well." In particular, the homework assignments were found to be well designed to enable learning and material understanding: "The assignments were not ridiculously long; hence, I had enough time to think through about the questions, improve my implementations, understand the essence of the field, and do extra readings on my own at times." or "Great class, I really liked the homework assignments." Overall, the course was viewed positively for how students were engaged in the learning process: "The courses were designed well to allow for good technical discussion in the class. I don't think that I participated in "active learning" by that degree in any other class."
202
+
203
+ § 7. CONCLUSION
204
+
205
+ We have designed a graduate-level course that focuses on the hardware aspect of modern machine learning. Due to the interdisciplinary nature of the course, we focus on deep learning, adopt paper presentation sessions to foster discussion, design homework that prepares students for the course projects and exposes them to the state of the art, and provide predefined projects to help our students succeed. Empirically, this approach worked well, as the feedback from students was positive.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/huJogZLN2t/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,193 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Using Matchboxes to Teach the Basics of Machine Learning: an Analysis of (Possible) Misconceptions
2
+
3
+ ## Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ The idea of chess-playing matchboxes, conceived by Martin Gardner as early as 1962, is becoming more and more relevant in learning materials in the area of AI and Machine Learning. Thus, it can be found in a large number of workshops and papers as an innovative teaching method to convey the basic ideas of reinforcement learning. In this paper the concept and its variations will be presented and the advantages of this analog approach will be shown. At the same time, however, the limitations of the approach are analyzed and the question of alternatives is raised.
8
+
9
+ ## 1. Introduction
10
+
11
+ As Machine Learning (ML) has an increasing influence on many people's everyday life (Nayak & Dutta, 2017), there is a need for concepts to include ML in workshops and other learning scenarios. Building these scenarios so that beginners can gain insight into the underlying concepts while avoiding oversimplifications and misconceptions can be very challenging, especially when talking about recent technological developments. Unplugged materials like the Hexapawn game show a creative and motivating approach to the topic. In this paper, we will take a look at these materials and discuss what misconceptions can occur when using this approach.
12
+
13
+ ## 2. Misconceptions
14
+
15
+ The study of misconceptions in computer science education, and in education in general, is a well-studied and still current research topic, especially in computer science didactics (Sorva, 2012; Ohrndorf, 2016; Qian & Lehman, 2017). Misconceptions can be understood as entrenched systematic errors (Prediger & Wittmann, 2009). However, Qian & Lehman also discuss other hard-to-define concepts such as difficulties, mistakes, and bugs, stating that there is no single definition (Qian & Lehman, 2017). As a definition for misconceptions in CS programming education, (Sorva, 2012) states the following: "understandings that are deficient or inadequate for many practical programming contexts". In reference to (Ohrndorf, 2016), we define misconceptions as cognitive representations of knowledge that contradict or deviate from the scientifically correct concepts.
18
+
19
+ (Heuer et al., 2021) examined machine learning tutorials for misconceptions and misleading explanations, identifying four main misconceptions: (H1) ML as adapting in response to new data and experiences to improve efficacy over time; (H2) ML as automating and improving the learning process of computers based on their experiences without any human assistance; (H3) ML can discover hidden patterns that are invisible to humans; (H4) ML can be applied without special expertise.
20
+
21
+ ## 3. The Game
22
+
23
+ The original game idea of (Gardner, 1962) is a chess variant called "Hexapawn". On a 3x3 chess board, three pawns of each color face each other (Figure 1). The objective is either to move a pawn to the opponent's baseline or to capture all of the opponent's pawns. One can also win by achieving a position in which the opponent cannot move (stalemate). The pawns move one step forward as in chess and capture diagonally forward.
24
+
25
+ ![01963a8f-7d7c-737c-afd7-8977fff4f62c_0_1064_1681_359_365_0.jpg](images/01963a8f-7d7c-737c-afd7-8977fff4f62c_0_1064_1681_359_365_0.jpg)
26
+
27
+ Figure 1. Starting position of Hexapawn
28
+
29
+ ---
30
+
31
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
32
+
33
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
34
+
35
+ ---
36
+
37
+ With these simple game rules it is possible to construct what Gardner described as a "learning machine" or "Hexapawn Educable Robot" (HER)${}^{1}$. The human player controls the white pieces, the AI the black ones. For each position that black might face, there is a corresponding matchbox. It contains different colored beads, each of which is assigned to one particular move that is possible in this position (Figure 2). White begins to play and makes the first move. When it is the AI's turn, the move is determined by picking a bead (and, subsequently, a move) randomly from the matchbox that corresponds to the current state of the game. After that, the human player makes the next move, changing the state of the game from which the next move of HER is derived.
40
+
47
+ ![01963a8f-7d7c-737c-afd7-8977fff4f62c_1_164_730_639_499_0.jpg](images/01963a8f-7d7c-737c-afd7-8977fff4f62c_1_164_730_639_499_0.jpg)
48
+
49
+ Figure 2. Possible positions with corresponding moves. Note that the number of boxes can be reduced due to symmetry
50
+
75
+ The reason why Gardner's idea is often used when teaching machine learning is that HER can learn. Every time HER loses a game, the last drawn bead is removed from its box, so that the corresponding losing move can never be played again; this makes it increasingly unlikely that HER loses in the following games. After 10-15 games HER is nearly unbeatable (Gardner, 1962).
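The matchbox mechanism is compact enough to simulate in a few dozen lines. The sketch below is our own minimal implementation (not taken from any of the cited materials): HER plays black against a random white player, and Gardner's punishment rule removes the last drawn bead after every loss (including the bead that led into a position whose box has run empty).

```python
import random

# Minimal simulation of Gardner's Hexapawn Educable Robot (HER).
# Board: tuple of 9 cells, row-major; 'B' pawns start on top (indices 0-2)
# and move down, 'W' pawns start on the bottom (indices 6-8) and move up.

def moves(board, player):
    """All legal (src, dst) moves for `player`."""
    step = 3 if player == 'B' else -3
    opp = 'W' if player == 'B' else 'B'
    out = []
    for i, cell in enumerate(board):
        if cell != player:
            continue
        f = i + step
        if 0 <= f < 9 and board[f] == '.':
            out.append((i, f))                   # straight move onto empty square
        for dc in (-1, 1):                       # diagonal captures
            j = f + dc
            if 0 <= j < 9 and j // 3 == f // 3 and board[j] == opp:
                out.append((i, j))
    return out

def apply_move(board, mv):
    b = list(board)
    b[mv[1]], b[mv[0]] = b[mv[0]], '.'
    return tuple(b)

def winner(board, to_move):
    """'B'/'W' if the game is over, else None. `to_move` is the side to play."""
    if 'B' in board[6:9]:
        return 'B'                               # black reached white's baseline
    if 'W' in board[0:3]:
        return 'W'                               # white reached black's baseline
    if not moves(board, to_move):                # stalemate or all pawns captured
        return 'W' if to_move == 'B' else 'B'
    return None

boxes = {}                                       # matchboxes: state -> remaining beads

def play_game(rng):
    board, player, history = tuple('BBB...WWW'), 'W', []
    while True:
        w = winner(board, player)
        if w:
            return w, history
        if player == 'W':                        # human stand-in: random play
            mv = rng.choice(moves(board, 'W'))
        else:                                    # HER draws a bead from the box
            beads = boxes.setdefault(board, moves(board, 'B'))
            if not beads:
                return 'W', history              # empty matchbox: HER resigns
            mv = rng.choice(beads)
            history.append((board, mv))
        board = apply_move(board, mv)
        player = 'B' if player == 'W' else 'W'

def train(n_games, seed=0):
    rng, wins = random.Random(seed), []
    for _ in range(n_games):
        result, history = play_game(rng)
        wins.append(result == 'B')
        if result == 'W' and history:            # punish: remove the losing bead
            state, mv = history[-1]
            boxes[state].remove(mv)
    return wins

wins = train(300)
print(f"win rate, first 50 games: {sum(wins[:50]) / 50:.2f}")
print(f"win rate, last 50 games:  {sum(wins[-50:]) / 50:.2f}")
```

The reward variant described by Lindner & Seegerer would only change the `train` update: beads are added back for moves on a winning path instead of (or in addition to) being removed on a loss.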
76
+
77
+ ## 4. Reception and Adaptions
78
+
79
+ Gardner's idea can be construed as a CS Unplugged activity, even though it did not appear on the original website${}^{2}$. CS Unplugged activities focus on the problem-solving nature of computer science and are popular around the world. Unplugged material is characterized by the fact that computers are deliberately avoided; instead, analog work materials and a playful approach are used to introduce the basic ideas of CS (Nishida et al., 2009). Besides other versions (The Royal Institution, 2008) (Demšar & Demšar, 2015), the "Sweet Learning Computer" created by McOwan & Curzon as part of the cs4fn project is the best-known version of Hexapawn created as teaching material. The authors focus on the playful aspect of the idea, for example by replacing beads with sweets, which may then be eaten in the learning step. Within the material, the workings of the matchbox computer are described, but not the basic ideas of ML (McOwan & Curzon). Starting from this basic concept, several works can be found that convey individual aspects of ML or AI. (The Royal Institution, 2008), for example, used Gardner's example in its Christmas lecture, explaining how the machine learns. Motivated by the example, it then gives an overview of how a real chess computer works. A deeper insight into ML is provided in the material of Lindner & Seegerer and Opel et al., where Hexapawn is used directly to convey two principles from the field of AI (Lindner & Seegerer, 2020). The original concept is used for reinforcement learning: here the students should learn that computers learn by "reward" and "punishment", adapt their strategy accordingly, and try to maximize their reward. For this purpose, the rules are slightly adapted by adding beads for winning moves. Second, Hexapawn is also used to explain expert systems and then compare the two methods.
80
+
81
+ (Opel et al., 2019) have created a larger set of materials${}^{3}$ for an entire ML and AI module in which HER takes the central role in explaining and reflecting on how ML works. The material was developed for students from the age of 12 and has been adapted accordingly. First and foremost, a role system and a game flow chart were created to make the learning process more understandable. In addition, questions were designed to reflect afterwards on the insights gained from the game. Background knowledge on ML, artificial neural networks and "deep learning" was also provided to help teachers answer possible queries from students.
82
+
83
+ ## 5. Limitations and Possible Misconceptions
84
+
85
+ In order to work out possible misconceptions, it is necessary to analyze how HER is structured. A fully trained HER formally consists of a handful of IF-THEN rules (if the game is in state a, then play move b). HER is therefore a symbolic AI, or more precisely a rule-based expert system (Stuart, 1992, p. 3). While these rule-based systems are very popular, they generally do not have the ability to expand their knowledge base through ML (Ogidan et al., 2018).
86
+
87
+ Thus, the attempt to represent aspects of ML by Hexapawn faces the fundamental problem that Gardner's model does not learn by common means of ML. This inconsistency can lead to the following misconceptions:
88
+
89
+ ---
90
+
91
+ ${}^{3}$ https://www.wissenschaftsjahr.de/2019/fileadmin/user_upload/Wissenschaftsjahr_2019/Jugendaktion/WJ19_LA_Material_Buch_CPS_barrRZ.pdf (German)
92
+
93
+ ${}^{1}$ In the following, Hexapawn and HER are used synonymously. ${}^{2}$ https://classic.csunplugged.org/activities/
94
+
95
+ ---
96
+
97
+ M1) ML only produces results when the complete problem/data space is exhausted.
98
+
99
+ M2) The way ML works is that undesirable behavior is trained out through negative examples.
100
+
101
+ M3) The way ML works in rule-based expert systems is that wrong rules are sorted out through negative examples.
102
+
103
+ M4) A decision tree is trained by training its nodes individually.
104
+
105
+ The authors divide these misconceptions into two areas. The first involves misconceptions about the field of ML and its understanding. The second one contains misconceptions about individual models used in ML.
106
+
107
+ ### 5.1. Misconceptions of ML
108
+
109
+ The following misconceptions arise from the structure and learning process of Gardner's model. Machine learning is the discipline of deriving actions or new insights from a (sufficiently large) collection of data. ML is used to derive an approximate solution that can be used to solve the problem for other unknown cases (Alpaydin, 2014, p. 1 ff.). The ML process roughly follows three steps. The first step is the data input step, in which data relevant to the problem is collected and processed. This data set is generally incomplete, as in practice the complete data space cannot be covered. This point is followed by step two, the abstraction. Here, the data is abstracted from its original form and passed to a model. The model is formed by appropriate procedures (e.g. training of the model) in such a way that it corresponds to the input data. In the last step the generalization takes place. The model should now be able to make decisions about an unknown data set (test data set). This is the reason why in general a predefined rule set is not sufficient. Instead, a heuristic approach is chosen, according to which the solution is approximated (Chandramouli & Dutt, 2018). There are multiple ways to implement ML, the most common being supervised, unsupervised and reinforcement learning. Common models used in ML include artificial neural networks or decision trees (Alpaydin, 2014).
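The three steps (data input, abstraction into a model, generalization to unseen data) can be made concrete with a deliberately tiny example. The following 1-nearest-neighbour sketch uses made-up data points and is only meant to mirror the three steps, not to stand in for a realistic ML pipeline:

```python
# Step 1, data input: a small, necessarily incomplete sample of the data space.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]

# Step 2, abstraction: the "model" here is just the stored examples plus a
# distance function (1-nearest-neighbour keeps the abstraction minimal).
def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def predict(x):
    return min(train, key=lambda ex: dist(ex[0], x))[1]

# Step 3, generalization: the model decides about a point it has never seen.
print(predict((4.1, 3.9)))  # unseen test point, classified as "B"
```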
110
+
111
+ ##### 5.1.1. ML VIA BRUTE FORCE
112
+
113
+ Now let us look at how HER generates its rules. The rules to move a piece in a given position are "learned" by testing all possible states of the game and eliminating moves if the result is undesirable. However, this brute force approach contradicts the general approach of ML, in which information is derived from a finite data set to lead to generalized behavior. This brute force method is also why Hexapawn does not scale well to more complex scenarios (4x4 or 8x8 squares), as the number of matchboxes needed becomes impracticable. HER thereby illustrates well why we need ML for these kinds of problems, but represents it poorly itself. It can also contribute to the misconception (H2) of (Heuer et al., 2021). Thus, one of the central problems of ML, data preparation, is downplayed by portraying the process as the collection of all possible data.
114
+
115
+
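The scaling problem can be made concrete with a rough upper bound: each square of an n×n board is empty, white, or black, so there are at most 3^(n²) positions. The number of reachable positions is far smaller after legality and symmetry reductions, but it grows just as hopelessly:

```python
# Rough upper bound on board positions: each square is empty, white, or black.
# Reachable-state counts are much smaller, but the growth trend is the point.
def position_bound(n):
    return 3 ** (n * n)

for n in (3, 4, 8):
    print(n, position_bound(n))
# n=3 gives 19683, n=4 already gives 43046721, and n=8 gives a 31-digit
# number: a matchbox per black-to-move position becomes impracticable.
```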
116
+
117
+ #### 5.1.2. TRAINING WITH NEGATIVE EXAMPLES
118
+
119
+ The second misconception arises from the learning process and in particular the type of data HER receives. The learning process of removing beads represents both reinforcement learning and ML deficiently. In practice, reinforcement learning also relies on reinforcement of desired behavior. However, by removing beads in the original idea, only punishment is addressed (which is also less used in practice). While HER uses negative examples to remove unwanted behavior, in classical ML tasks behavior is trained using positive examples. Thus, while the learning procedure of HER is not categorically wrong, students may get the wrong impression that knowledge or behavior is built by removing undesirable behavior through negative examples rather than by generalizing behavior from example data in a ground-up process. A solution described by (Opel et al., 2019) and (Lindner & Seegerer, 2020) is already possible in Gardner's model: adding beads for winning moves.
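The two-sided update rule suggested by these adaptations can be sketched in a few lines; the box contents below are illustrative, not taken from an actual Hexapawn position:

```python
# Two-sided matchbox update: punish moves on a losing path by removing a
# bead, reward moves on a winning path by adding one. Contents illustrative.
box = {"b2-b3": 2, "a2-a3": 2, "c2-c3": 2}   # bead counts per move in one box

def punish(box, move):
    if box[move] > 0:
        box[move] -= 1      # original Gardner rule: remove the drawn bead

def reward(box, move):
    box[move] += 1          # extension: reinforce winning behaviour too

punish(box, "b2-b3")
reward(box, "a2-a3")
print(box)  # -> {'b2-b3': 1, 'a2-a3': 3, 'c2-c3': 2}
```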
120
+
121
+ ### 5.2. Misconceptions of Models in ML
122
+
123
+ The second type of possible misconceptions arises from the context in which HER is used. As analyzed in the reception and adaptations section, HER is mostly used as a simple example of ML processes. Following on from this, specific models of ML are introduced. Since learners are mostly ML novices and the structure of HER is usually not analyzed in detail, there is a chance that students intuitively interpret Hexapawn as one of the specified models. However, because Hexapawn does not adequately represent these models, misconceptions about individual models of ML may be formed.
124
+
125
+ ##### 5.2.1. ML IN EXPERT SYSTEMS
126
+
127
+ As already analyzed, HER is a rule-based expert system. These, however, are generally not based on methods that conform to the classical idea of ML (which HER is supposed to convey), because they lack the possibility to learn from input data and to adapt their fact base (Ogidan et al., 2018). There are only rare approaches in which either a hybrid method is used to train a ML model and then postprocess the result with an expert system (Villena-Román et al., 2011), or the rule base is derived inductively from facts (Weiss & Indurkhya, 1995). Not only are these approaches exceptions, but neither of them is represented by HER. Therefore, HER is suitable for the representation of a rule-based expert system, but not in combination with ML. In addition, the misconception (M3) may arise that ML on rule-based expert systems is the systematic removal of false facts through negative examples. (Lindner & Seegerer, 2020) use Hexapawn elsewhere in their material as an example of an expert system. There, the use is justified and a good example with suitable didactic reduction.
128
+
129
+ ##### 5.2.2. ML CHESS ENGINES AND DECISION TREES
130
+
131
+ The use of a "brute-force" method and the presentation of the game provide further opportunities for the formation of misconceptions. A classical chess engine generally consists of a heuristic function and a search tree optimization. While the search tree optimization is a classical optimization task, an ML approach can be used for the heuristic. The trained model receives the current position as input and outputs an evaluation of the position. Thus, the mode of operation is fundamentally different from HER and cannot be derived from it (M1, M2). This would at the same time reinforce the misconception (H4) formulated by Heuer et al., since the complexity of chess engine engineering is disregarded. It is true that the learning outcome of HER is critically dependent on the strength of the player. However, this does not sufficiently simulate the extent to which AI engineers are necessary for ML success, in that they are especially involved in modeling and data preparation rather than in the active learning process (H2). (The Royal Institution, 2008) tries to make the leap from HER to chess by addressing the game tree that results over several moves in a game of Hexapawn, referring to it as a decision tree. This is problematic in several ways. A decision tree is a common model in ML but is not to be confused with the game tree of a chess game. Furthermore, HER does not adequately represent either a game tree (the white moves are missing) or a decision tree, nor does HER use a decision tree to evaluate a position. In fact, each position is considered independently of its successors. Therefore, HER is not suited to represent a decision tree, which in turn enables the formation of a misconception (M4) regarding decision trees. (Lindner & Seegerer, 2020) take a different approach by replacing the pawns with crocodiles and monkeys. This weakens the similarity to chess and tries to avoid the listed misconceptions.
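This division of labour can be sketched generically; the game tree and leaf scores below are hand-made toy values, and in an ML-based engine only the `evaluate` function would be replaced by a trained model:

```python
# Classical engine skeleton: tree search (minimax) over a heuristic that
# scores one position at a time. Tree and leaf scores are toy values.
TREE = {"root": ["l", "r"], "l": ["l1", "l2"], "r": ["r1", "r2"]}
SCORES = {"l1": 3, "l2": 5, "r1": -2, "r2": 9}

def evaluate(pos):
    return SCORES[pos]        # stand-in for the (possibly learned) heuristic

def minimax(pos, maximizing):
    if pos not in TREE:       # leaf: fall back to the heuristic
        return evaluate(pos)
    scores = [minimax(c, not maximizing) for c in TREE[pos]]
    return max(scores) if maximizing else min(scores)

print(minimax("root", True))  # -> 3
```

Real engines add alpha-beta pruning and depth limits; the point here is only the separation between search and evaluation, which has no counterpart in HER.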
132
+
133
+ ##### 5.2.3. ML AND NEURAL NETWORKS
134
+
135
+ Lastly, it should be noted that artificial neural networks (ANN) are also often mentioned in the investigated material. At this point it is important to point out that HER is not a good model for these networks either, since neither neurons, layers, edges, weights nor activation functions are represented by the model. Nevertheless, especially inexperienced students (for whom the material is designed) may draw a connection from HER to ANN, since, on the one hand, ML is used almost synonymously with ANN in public media and, on the other hand, analogies from box to neuron and from bead to weight can be drawn unconsciously. HER should thus be explicitly differentiated from ANN.
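The contrast can be made explicit with a single artificial neuron, which already involves weights, a bias and an activation function, none of which has a counterpart in HER's boxes and beads (the weight and bias values below are arbitrary examples):

```python
import math

# One artificial neuron: weighted sum, bias, nonlinear activation.
# None of these ingredients exists in HER's matchbox model.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))          # sigmoid activation

print(round(neuron([1.0, 0.0], [2.0, -1.0], -1.0), 3))  # -> 0.731
```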
136
+
137
+ ## 6. Conclusion
138
+
139
+ HER is very motivating especially for young students due to its simple rules, illustrative learning process and playful character. Therefore, it is a suitable tool to convey the idea of ML (in that a computer can learn and adapt its behavior through new information) or to teach the concept of reinforcement learning. Nonetheless, based on the previous analysis, students may develop misconceptions about the general working of ML or about important models used in ML with Hexapawn. Further investigations are necessary to determine whether and to what extent the described misconceptions develop when playing Hexapawn and which adaptations have to be made to the material to circumvent them.
140
+
141
+ Additionally, with the aforementioned considerations in mind, the question must be raised whether individual aspects of ML can be better conveyed through other unplugged materials that avoid the listed misconceptions, as there are already materials which clearly convey the basic concepts of ML without sacrificing formal accuracy. With the material "Train a Neuron" by cs4fn ${}^{4}$ , for example, a similar game feeling can be created with a game board and coins, but with the advantage that the structure, functioning and learning procedure of an ANN are all taught. With the material of Blum & Girschick it is even possible to teach (un)supervised learning with pen and paper only ${}^{5}$ . If one wants to focus on reinforcement learning, Blum developed a learning activity for this as well ${}^{6}$ . If, on the other hand, one does not want to forego the approach via a game, "Brain in a Bag", also by cs4fn ${}^{7}$ , is another choice. With this material, the game Snap or, with slight adaptations, games such as Noughts and Crosses can be addressed. This way it is also possible to simulate a classic game AI, by having the network evaluate the current board state. This approach is also conceivable for Hexapawn. Finally, it should be pointed out that HER is also suitable for teaching a classical AI method: the rule-based expert system (as described by (Lindner & Seegerer, 2020)).
142
+
143
+ ---
144
+
145
+ ${}^{4}$ https://cs4fndownloads.wordpress.com/train-a-neuron/
146
+
147
+ ${}^{5}$ https://explore.iteratec.com/blog/machine-learning-tutorial-teil-1 (German)
148
+
149
+ ${}^{6}$ https://explore.iteratec.com/blog/machine-learning-tutorial-teil-2 (German)
150
+
151
+ ${}^{7}$ http://www.cs4fn.org/teachers/activities/braininabag/braininabag.pdf
152
+
153
+ ---
154
+
155
+ ## References
156
+
157
+ Alpaydin, E. Introduction to machine learning. Adaptive computation and machine learning. The MIT Press, Cambridge, Massachusetts, third edition, 2014. ISBN 978-0-262-02818-9.
158
+
159
+ Chandramouli, S. and Dutt, S. Machine Learning, chapter Introduction to Machine Learning. Pearson India, 2018. ISBN 978-9353066697. OCLC: 1138943535.
160
+
161
+ Demšar, I. and Demšar, J. CS unplugged: Experiences and extensions. In Brodnik, A. and Vahrenhold, J. (eds.), Informatics in Schools. Curricula, Competences, and Competitions, Lecture Notes in Computer Science, pp. 106-117. Springer International Publishing, 2015. ISBN 978-3-319-25396-1. doi: 10.1007/978-3-319-25396-1_10.
162
+
163
+ Gardner, M. MATHEMATICAL GAMES. Scientific American, 206(3):138-154, 1962. ISSN 00368733, 19467087. Publisher: Scientific American, a division of Nature America, Inc.
164
+
165
+ Heuer, H., Jarke, J., and Breiter, A. Machine learning in tutorials - universal applicability, underinformed application, and other misconceptions. Big Data & Society, 8(1), 2021. ISSN 2053-9517, 2053-9517. doi: 10.1177/20539517211017593.
166
+
167
+ Lindner, A. and Seegerer, S. Unplugging artificial intelligence, 2020. published under https://www.aiunplugged.org/english.pdf; last visited 14. June 2021.
168
+
169
+ McOwan, P. and Curzon, P. The sweet computer: machines that learn. published under http://www.cs4fn.org/teachers/activities/sweetcomputer/sweetcomputer.pdf; last visited 10. June 2021.
170
+
171
+ Nayak, A. and Dutta, K. Impacts of machine learning and artificial intelligence on mankind. In 2017 International Conference on Intelligent Computing and Control (I2C2), pp. 1-3, 2017. doi: 10.1109/I2C2.2017.8321908.
172
+
173
+ Nishida, T., Kanemune, S., Idosaka, Y., Namiki, M., Bell, T., and Kuno, Y. A CS unplugged design pattern. In Proceedings of the 40th ACM technical symposium on Computer science education - SIGCSE '09, pp. 231, Chattanooga, TN, USA, 2009. ACM Press. ISBN 978-1-60558-183-5. doi: 10.1145/1508865.1508951.
174
+
175
+ Ogidan, E. T., Dimililer, K., and Ever, Y. K. Machine Learning for Expert Systems in Data Analysis. In 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), pp. 1-5, Ankara, October 2018. IEEE. ISBN 978-1-5386-4184-2. doi: 10.1109/ISMSIT.2018.8567251.
176
+
177
+ Ohrndorf, L. Entwicklung und Validierung eines Instruments zur Messung des Wissens über Fehlvorstellungen in der Informatik. PhD thesis, Universität Paderborn, 2016.
178
+
179
+ Opel, S., Schlichtig, M., and Schulte, C. Developing teaching materials on artificial intelligence by using a simulation game (work in progress). In Proceedings of the 14th Workshop in Primary and Secondary Computing Education, pp. 1-2. ACM, 2019. ISBN 978-1-4503-7704-1. doi: 10.1145/3361721.3362109.
180
+
181
+ Prediger, S. and Wittmann, G. Aus fehlern lernen - (wie) ist das möglich? PM : Praxis der Mathematik in der Schule, 51(27):1-8, 2009. ISSN 0032-7042; 1617-6960.
182
+
183
+ Qian, Y. and Lehman, J. Students' misconceptions and other difficulties in introductory programming: A literature review. ACM Transactions on Computing Education, 18(1):1-24, 2017. ISSN 1946-6226. doi: 10.1145/3077618. URL https://dl.acm.org/doi/10.1145/3077618.
184
+
185
+ Sorva, J. Visual program simulation in introductory programming education. Number 2012,61 in Aalto University publication series Doctoral dissertations. Aalto Univ. School of Science, 2012. ISBN 978-952-60-4626-6 978-952-60-4625-9. OCLC: 934947240.
186
+
187
+ Stuart, B. L. An Alternative Computational Model For Artificial Intelligence. PhD thesis, Purdue University, West Lafayette, 1992.
188
+
189
+ The Royal Institution. Sweet computer, 2008. published under https://www.rigb.org/christmaslectures08/html/activities/sweet-computer.pdf; last visited 17. June 2021.
190
+
191
+ Villena-Román, J., Collada-Pérez, S., Serrano, S., and Gonzalez-Cristobal, J. Hybrid approach combining machine learning and a rule-based expert system for text categorization. In Proceedings of the 24th International Florida Artificial Intelligence Research Society Conference, 2011.
192
+
193
+ Weiss, S. M. and Indurkhya, N. Rule-based Machine Learning Methods for Functional Prediction. Journal of Artificial Intelligence Research, 3:383-403, December 1995. ISSN 1076-9757. doi: 10.1613/jair.199.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/huJogZLN2t/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,141 @@
1
+ § USING MATCHBOXES TO TEACH THE BASICS OF MACHINE LEARNING: AN ANALYSIS OF (POSSIBLE) MISCONCEPTIONS
2
+
3
+ § ANONYMOUS AUTHORS ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ The idea of chess-playing matchboxes, conceived by Martin Gardner as early as 1962, is becoming more and more relevant in learning materials in the area of AI and Machine Learning. Thus, it can be found in a large number of workshops and papers as an innovative teaching method to convey the basic ideas of reinforcement learning. In this paper the concept and its variations will be presented and the advantages of this analog approach will be shown. At the same time, however, the limitations of the approach are analyzed and the question of alternatives is raised.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ As Machine Learning (ML) has an increasing influence on many people's everyday life (Nayak & Dutta, 2017), there is a need for concepts to include ML in workshops and other learning scenarios. Building these scenarios so that beginners can gain insight into the underlying concepts, while avoiding oversimplifications and misconceptions, can be very challenging, especially when talking about recent technological developments. Unplugged materials like the Hexapawn game offer a creative and motivating approach to the topic. In this paper, we take a look at these materials and discuss what misconceptions can occur when using this approach.
12
+
13
+ § 2. MISCONCEPTIONS
14
+
15
+ The study of misconceptions in computer science education, and in education in general, is a well-studied and still current research topic, especially in computer science didactics (Sorva, 2012) (Ohrndorf, 2016) (Qian & Lehman, 2017). Misconceptions can be understood as entrenched systematic errors (Prediger & Wittmann, 2009). However, Qian & Lehman also discuss other hard-to-define concepts such as difficulties, mistakes, and bugs, stating that there is no single definition (Qian & Lehman, 2017). As a definition for misconceptions in CS programming education, (Sorva, 2012) states the following: "understandings that are deficient or inadequate for many practical programming contexts". In reference to (Ohrndorf, 2016) we define misconceptions as cognitive representations of knowledge that contradict or deviate from the scientifically correct concepts.
16
+
17
+
18
+
19
+ (Heuer et al., 2021) examined machine learning tutorials for misconceptions and misleading explanations, identifying four main misconceptions: (H1) ML as adapting in response to new data and experiences to improve efficacy over time; (H2) ML as automating and improving the learning process of computers based on their experiences without any human assistance; (H3) ML can discover hidden patterns that are invisible to humans; (H4) ML can be applied without special expertise.
20
+
21
+ § 3. THE GAME
22
+
23
+ The original game idea of (Gardner, 1962) is a chess variant called "Hexapawn". On a 3x3 chess board, three white pawns face three black pawns (Figure 1). The objective is either to move a pawn to the opponent's baseline or to capture all of the opponent's pawns. One can also win by reaching a position in which the opponent cannot move (stalemate). The pawns move one step forward as in chess and capture diagonally forward.
24
+
25
+ < g r a p h i c s >
26
+
27
+ Figure 1. Starting position of Hexapawn
28
+
29
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
30
+
31
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
32
+
33
+ With these simple game rules it is possible to construct what Gardner described as a "learning machine" or "Hexapawn Educable Robot" (HER) ${}^{1}$ . The human player controls the white pieces, the AI the black ones. For each position that black might face, there is a corresponding matchbox. It contains different colored beads, each of which is assigned to one particular move that is possible in this position (Figure 2). White begins to play and makes the first move. When it is the AI's turn, the move is determined by picking a bead (and, subsequently, a move) randomly from the matchbox that corresponds to the current state of the game. After that the human player makes the next move, changing the state of the game, from which the next move of HER is derived.
34
+
35
+
36
+
37
+
43
+ < g r a p h i c s >
44
+
45
+ Figure 2. Possible positions with corresponding moves. Note that the number of boxes can be reduced due to symmetry
46
+
47
70
+
71
+ The reason why Gardner's idea is often used when teaching machine learning is that HER can learn. Every time HER loses a game, the last drawn bead is removed from its box, so that the corresponding losing move can never happen again; this makes it increasingly unlikely that HER loses in the following games. After 10-15 games HER is nearly unbeatable (Gardner, 1962).
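Gardner's procedure fits in a few lines: draw a random bead for the current state, remember the last draw, and remove that bead after a loss. A minimal sketch; the state names and moves are placeholders, not a real Hexapawn encoding:

```python
import random

# Matchbox learning à la Gardner: each state owns a box of beads (moves);
# a move is drawn at random, and the last drawn bead is removed on a loss.
boxes = {"state_a": ["b2-b3", "a2-a3"], "state_b": ["a2xb3"]}
last_draw = None

def her_move(state):
    global last_draw
    move = random.choice(boxes[state])
    last_draw = (state, move)
    return move

def her_lost():
    state, move = last_draw
    boxes[state].remove(move)   # the losing move can never be drawn again

her_move("state_b")
her_lost()
print(boxes["state_b"])  # -> [] (an empty box means resigning in this position)
```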
72
+
73
+ § 4. RECEPTION AND ADAPTIONS
74
+
75
+ Gardner's idea can be construed as a CS Unplugged activity, even though it did not appear on the original website ${}^{2}$ . CS Unplugged activities focus on the problem-solving nature of computer science and are popular around the world. Unplugged material is characterized by the fact that computers are deliberately avoided and instead analog work materials and a playful approach are used to introduce the basic ideas of CS (Nishida et al., 2009). Besides other versions (The Royal Institution, 2008) (Demšar & Demšar, 2015), the "Sweet Learning Computer" created by McOwan & Curzon as part of the cs4fn project is the best-known version of Hexapawn created as teaching material. The authors focus on the playful aspect of the idea, for example by replacing beads with sweets, which may then be eaten in the learning step. Within the material, the workings of the matchbox computer are described, but not the basic ideas of ML (McOwan & Curzon). Starting from this basic concept, several works can be found that convey individual aspects of ML or AI. (The Royal Institution, 2008), for example, used Gardner's example in its Christmas lecture, explaining how the machine learns. Motivated by the example, it then gives an overview of how a real chess computer works. A deeper insight into ML is provided in the material of Lindner & Seegerer and Opel et al., where Hexapawn is used directly to convey two principles from the field of AI (Lindner & Seegerer, 2020). The original concept is used for reinforcement learning. Here the students should learn that computers learn by "reward" and "punishment", adapt their strategy accordingly and try to maximize their profit. For this purpose, the rules are slightly adapted by adding beads to winning moves. Second, Hexapawn is also used to explain expert systems and then compare the two methods.
76
+
77
+ (Opel et al., 2019) have created a larger set of materials ${}^{3}$ for an entire ML and AI module in which HER takes the central role in explaining and reflecting on how ML works. The material was developed for students from the age of 12 and has been adapted accordingly. First and foremost, a role system and a game flow chart were created to make the learning process more understandable. In addition, questions were designed to reflect afterwards on the insights gained from the game. Additionally, background knowledge on ML, artificial neural networks and "deep learning" was provided to help teachers answer possible queries from students.
78
+
79
+ § 5. LIMITATIONS AND POSSIBLE MISCONCEPTIONS
80
+
81
+ In order to work out possible misconceptions, it is required to analyze how HER is structured. A fully trained HER formally consists of a set of IF-THEN rules (if the game is in state a, then use move b). HER is therefore a symbolic AI, or more precisely a rule-based expert system (Stuart, 1992, p. 3). While these rule-based systems are very popular, they generally do not have the ability to expand their knowledge base through ML (Ogidan et al., 2018).
82
+
83
+ Thus, the attempt to represent aspects of ML by Hexapawn faces the fundamental problem that Gardner's model does not learn by common means of ML. This inconsistency can lead to the following misconceptions:
84
+
85
+ ${}^{3}$ https://www.wissenschaftsjahr.de/2019/fileadmin/user_upload/Wissenschaftsjahr_2019/Jugendaktion/WJ19_LA_Material_Buch_CPS_barrRZ.pdf (German)
86
+
87
+ ${}^{1}$ In the following, Hexapawn and HER are used synonymously. ${}^{2}$ https://classic.csunplugged.org/activities/
88
+
89
+ M1) ML only produces results when the complete problem/data space is exhausted.
90
+
91
+ M2) The way ML works is that undesirable behavior is trained out through negative examples.
92
+
93
+ M3) The way ML works in rule-based expert systems is that wrong rules are sorted out through negative examples.
94
+
95
+ M4) A decision tree is trained by training its nodes individually.
96
+
97
+ The authors divide these misconceptions into two areas. The first involves misconceptions about the field of ML and its understanding. The second one contains misconceptions about individual models used in ML.
98
+
99
+ § 5.1. MISCONCEPTIONS OF ML
100
+
101
+ The following misconceptions arise from the structure and learning process of Gardner's model. Machine learning is the discipline of deriving actions or new insights from a (sufficiently large) collection of data. ML is used to derive an approximate solution that can be used to solve the problem for other unknown cases (Alpaydin, 2014, p. 1 ff.). The ML process roughly follows three steps. The first step is the data input step, in which data relevant to the problem is collected and processed. This data set is generally incomplete, as in practice the complete data space cannot be covered. This point is followed by step two, the abstraction. Here, the data is abstracted from its original form and passed to a model. The model is formed by appropriate procedures (e.g. training of the model) in such a way that it corresponds to the input data. In the last step the generalization takes place. The model should now be able to make decisions about an unknown data set (test data set). This is the reason why in general a predefined rule set is not sufficient. Instead, a heuristic approach is chosen, according to which the solution is approximated (Chandramouli & Dutt, 2018). There are multiple ways to implement ML, the most common being supervised, unsupervised and reinforcement learning. Common models used in ML include artificial neural networks or decision trees (Alpaydin, 2014).
102
+
103
+ § 5.1.1. ML VIA BRUTE FORCE
104
+
105
+ Now let us look at how HER generates its rules. The rules to move a piece in a given position are "learned" by testing all possible states of the game and eliminating moves if the result is undesirable. However, this brute force approach contradicts the general approach of ML, in which information is derived from a finite data set to lead to generalized behavior. This brute force method is also why Hexapawn does not scale well to more complex scenarios (4x4 or 8x8 squares), as the number of matchboxes needed becomes impracticable. HER thereby illustrates well why we need ML for these kinds of problems, but represents it poorly itself. It can also contribute to the misconception (H2) of (Heuer et al., 2021). Thus, one of the central problems of ML, data preparation, is downplayed by portraying the process as the collection of all possible data.
106
+
107
+
108
+
109
+ § 5.1.2. TRAINING WITH NEGATIVE EXAMPLES
110
+
111
+ The second misconception arises from the learning process and in particular the type of data HER receives. The learning process of removing beads represents both reinforcement learning and ML deficiently. In practice, reinforcement learning also relies on reinforcement of desired behavior. However, by removing beads in the original idea, only punishment is addressed (which is also less used in practice). While HER uses negative examples to remove unwanted behavior, in classical ML tasks behavior is trained using positive examples. Thus, while the learning procedure of HER is not categorically wrong, students may get the wrong impression that knowledge or behavior is built by removing undesirable behavior through negative examples rather than by generalizing behavior from example data in a ground-up process. A solution described by (Opel et al., 2019) and (Lindner & Seegerer, 2020) is already possible in Gardner's model: adding beads for winning moves.
112
+
113
+ § 5.2. MISCONCEPTIONS OF MODELS IN ML
114
+
115
+ The second type of possible misconceptions arises from the context in which HER is used. As analyzed in the reception and adaptations section, HER is mostly used as a simple example of ML processes. Following on from this, specific models of ML are introduced. Since learners are mostly ML novices and the structure of HER is usually not analyzed in detail, there is a chance that students intuitively interpret Hexapawn as one of the specified models. However, because Hexapawn does not adequately represent these models, misconceptions about individual models of ML may be formed.
116
+
117
+ § 5.2.1. ML IN EXPERT SYSTEMS
118
+
119
+ As already analyzed, HER is a rule-based expert system. These, however, are generally not based on methods that conform to the classical idea of ML (which HER is supposed to convey), because they lack the possibility to learn from input data and to adapt their fact base (Ogidan et al., 2018). There are only rare approaches in which either a hybrid method is used to train a ML model and then postprocess the result with an expert system (Villena-Román et al., 2011), or the rule base is derived inductively from facts (Weiss & Indurkhya, 1995). Not only are these approaches exceptions, but neither of them is represented by HER. Therefore, HER is suitable for the representation of a rule-based expert system, but not in combination with ML. In addition, the misconception (M3) may arise that ML on rule-based expert systems is the systematic removal of false facts through negative examples. (Lindner & Seegerer, 2020) use Hexapawn elsewhere in their material as an example of an expert system. There, the use is justified and a good example with suitable didactic reduction.
120
+
121
+ § 5.2.2. ML CHESS ENGINES AND DECISION TREES
122
+
123
+ The use of a "brute-force" method and the presentation of the game provide further opportunities for the formation of misconceptions. A classical chess engine generally consists of a heuristic function and a search tree optimization. While the search tree optimization is a classical optimization task, an ML approach can be used for the heuristic. The trained model receives the current position as input and outputs an evaluation of the position. Thus, the mode of operation is fundamentally different from HER and cannot be derived from it (M1, M2). This would at the same time reinforce the misconception (H4) formulated by Heuer et al., since the complexity of chess engine engineering is disregarded. It is true that the learning outcome of HER is critically dependent on the strength of the player. However, this does not sufficiently simulate the extent to which AI engineers are necessary for ML success, in that they are especially involved in modeling and data preparation rather than in the active learning process (H2). (The Royal Institution, 2008) tries to make the leap from HER to chess by addressing the game tree that results over several moves in a game of Hexapawn, referring to it as a decision tree. This is problematic in several ways. A decision tree is a common model in ML but is not to be confused with the game tree of a chess game. Furthermore, HER does not adequately represent either a game tree (the white moves are missing) or a decision tree, nor does HER use a decision tree to evaluate a position. In fact, each position is considered independently of its successors. Therefore, HER is not suited to represent a decision tree, which in turn enables the formation of a misconception (M4) regarding decision trees. (Lindner & Seegerer, 2020) take a different approach by replacing the pawns with crocodiles and monkeys. This weakens the similarity to chess and tries to avoid the listed misconceptions.
124
+
125
+ § 5.2.3. ML AND NEURAL NETWORKS
126
+
127
+ Lastly, it should be noted that artificial neural networks (ANN) are also often mentioned in the investigated material. At this point it is important to point out that HER is not a good model for these networks either, since neither neurons, layers, edges, weights nor activation functions are represented by the model. Nevertheless, especially inexperienced students (for whom the material is designed) may draw a connection from HER to ANN, since, on the one hand, ML is used almost synonymously with ANN in public media and, on the other hand, analogies from box to neuron and from bead to weight can be drawn unconsciously. HER should thus be explicitly differentiated from ANN.
128
+
129
+ § 6. CONCLUSION
130
+
131
+ HER is very motivating, especially for young students, due to its simple rules, illustrative learning process, and playful character. Therefore, it is a suitable tool to convey the idea of ML (namely, that a computer can learn and adapt its behavior through new information) or to teach the concept of reinforcement learning. Nonetheless, based on the previous analysis, students playing Hexapawn may develop misconceptions about how ML works in general or about important models used in ML. Further investigations are necessary to determine whether and to what extent the described misconceptions develop when playing Hexapawn and which adaptations have to be made to the material to circumvent them.
132
+
133
+ Additionally, with the aforementioned considerations in mind, the question must be raised whether individual aspects of ML can be better conveyed through other unplugged materials that avoid the listed misconceptions, as there are already materials which clearly convey the basic concepts of ML without sacrificing formal accuracy. With the material "Train a Neuron" by cs4fn${}^{4}$, for example, a similar game feeling can be created with a game board and coins, but with the advantage that the structure, the functioning, and the learning procedure of an ANN are all taught. With the material of Blum & Girschick it is even possible to teach (un)supervised learning with pen and paper only${}^{5}$. If one wants to focus on reinforcement learning, Blum developed a learning activity for this as well${}^{6}$. If, on the other hand, one does not want to forego the approach via a game, "Brain in a Bag", also by cs4fn${}^{7}$, is another choice. With this material, the game Snap or, with slight adaptations, games such as Noughts and Crosses can be addressed. This way it is also possible to simulate a classic game AI by having the network evaluate the current board state. This approach is also conceivable for Hexapawn. Finally, it should be pointed out that HER is also suitable for teaching a classical AI method: the rule-based expert system (as described by (Lindner & Seegerer, 2020)).
134
+
135
+ ${}^{4}$ https://cs4fndownloads.wordpress.com/train-a-neuron/
136
+
137
+ ${}^{5}$ https://explore.iteratec.com/blog/machine-learning-tutorial-teil-1 (German)
138
+
139
+ ${}^{6}$ https://explore.iteratec.com/blog/machine-learning-tutorial-teil-2 (German)
140
+
141
+ ${}^{7}$ http://www.cs4fn.org/teachers/activities/braininabag/braininabag.pdf
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/ijuM-MVwVEk/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,167 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Experiences from Teaching Practical Machine Learning Courses to Master Students with Mixed Backgrounds
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Machine learning education has become more accessible and relevant to students from various backgrounds. Practical courses complement theoretical lectures by focusing on applied machine learning. In this work, we report on our experiences from teaching two machine learning practical courses to master students from different study programs: an introductory and an advanced course. We present a summary of the teaching and evaluation methods used in both courses. We summarize our experiences and the feedback collected from the students through a survey. We conclude with our recommendations on teaching and designing practical machine learning courses.
8
+
9
+ ## 1. Introduction
10
+
11
+ Machine Learning (ML) has recently grown in both relevance and popularity due to its evolving potential in various fields of research. ML technologies are gradually having a significant impact on everyday life in modern societies (Stone et al., 2016). Because of this, education providers are expanding their ML-related course portfolios (Engel & Coleman, 2021). In this context, there is a pressing need for tested and proven educational methods to teach competences related to ML techniques and tools (Long & Magerko, 2020).
12
+
13
+ In this paper, we provide a description of two master-level practical ML courses. We also present the methods we used to teach both courses and an evaluation of those methods. In this evaluation, we summarize our experience as teachers and the feedback of the students collected through an on-line survey. We want to share our experience and provide recommendations on the best methods to conduct practical ML courses.
14
+
15
+ ## 2. Background
16
+
17
+ Both ML courses presented in this paper are application-oriented courses (in German, "Praktikum") each worth 10 ECTS (European Credit Transfer System). The courses are offered at a German university to master students from different disciplines around computer science, information systems, data engineering, and robotics. The first course was designed for advanced learners with a focus on Deep Learning (DL) and took place between October 2020 and March 2021. The second course was an introductory course in applied ML and has been running since April 2021. In both courses, 24 students participated. The students were selected via a matching system that considers both the preferences of the students and the prioritization of the teachers.
18
+
19
+ The courses were taught completely on-line via Zoom and were split into two main phases: a teaching phase and a project phase. The aim of this organizational split was to allow the students to learn relevant skills in the first part and to apply those skills in a practical project afterwards, where they formed groups of three to four students. The advanced course had a shorter teaching phase (3 weeks) and a longer project phase (11 weeks), while the introductory course had a longer teaching phase (7 weeks) and a shorter project phase (6 weeks). Overall grading and project scope were adapted accordingly.
20
+
21
+ In both courses, Python was the programming language of choice. Furthermore, we relied heavily on Jupyter notebooks for coding tasks, both during the sessions and as homework assignments. In the introductory course, we focused on data science process models such as the Cross Industry Standard Process for Data Mining (CRISP-DM) (Wirth & Hipp), business and data understanding, and the Python stack for ML. In the advanced course, the focus was rather on DL projects and the tools required to develop, test, and deploy DL models. The topics included an introduction to common tools such as PyTorch (Paszke et al.), Keras (Chollet et al., 2015), H2O Driverless AI, containerization, and applications of deep learning. Both courses had a module about the ethical dimension of ML and Artificial Intelligence.
22
+
23
+ ---
24
+
25
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
26
+
27
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
28
+
29
+ ---
30
+
31
+ ## 3. Teaching Methods
32
+
33
+ We briefly discuss the methods we used in the two practical courses. The presented methods are organized into three categories: content delivery, evaluation methods, and feedback channels.
34
+
35
+ ### 3.1. Content Delivery
36
+
37
+ The sessions were mainly planned for delivering content to the students. Different methods of planning the sessions were employed. For example, block sessions were used to combine theoretical knowledge with hands-on sessions, where the students would listen to a presentation, work on a simple task, and receive feedback afterwards. We utilized such an approach in introducing basics of using scikit-learn (Pedregosa et al., 2011) and in teaching data loading pipelines in PyTorch (Paszke et al.), in the introductory and the advanced courses respectively. Additionally, crash courses were utilized to quickly bring all students to a common level of knowledge. In the introductory course, we held a one-day Python crash course based on ideas and content from (Chan, 2015; Needham, 2020; Severance, 2009). In the advanced course, a two-day crash course on PyTorch was held, since the majority of the audience were already familiar with Keras (Chollet et al., 2015) and/or TensorFlow (Abadi et al., 2016).
38
+
39
+ From a content perspective, different methods were used to communicate knowledge to the students, motivate interaction, and provide room for discussion. In several sessions, we started with a short presentation with slides. The presentations served as an introduction as well as a warm-up for the topic being discussed. They were used more frequently in the introductory course, iterating over the process of applied machine learning, the characteristics and challenges of each phase in the process, and commonly-used tools in the Python stack. The second element we adopted heavily was the use of Jupyter notebooks. The notebooks were made available to the students before class and discussed mostly after the introductory presentations. During the sessions, we coded the Jupyter notebook live rather than walking through the notebook prepared prior to the session. Due to the online format, we adopted the strategy of alternately switching between presentation slides and Jupyter notebooks (coding) in blocks of 20-30 minutes each. The goal was to overcome the interaction difficulties of the remote setup and connect abstract knowledge with practical use.
40
+
41
+ We integrated in-class group work to motivate for discussion and enrich the learning experience. The strategy was to divide the students into groups of three to four students and ask them to work on a specific task related to the presented content. The scope of the task was quite versatile, ranging from coding tasks to discussing ideas and brainstorming
42
+
43
+ machine learning solutions. Coding tasks included solving small problems, reading and understanding code, or reading documentation and applying a solution to a different problem or dataset. After the time dedicated to the task was over, each group briefly presented what they achieved or learned to all other groups and the instructors. When relevant, a short feedback round followed each presentation.
44
+
45
+ ### 3.2. Evaluation Methods
46
+
47
+ In order to measure the learning progress and give the students the chance to apply the concepts learned during the sessions, students had to work on various tasks as graded homework and project work. In the introductory course, students had to complete a mini-project covering most of the basic concepts in Python. For the advanced course, students had to implement a complete pipeline for a simplified scenario of image inpainting (Zeng et al., 2020) using PyTorch, starting with adapting a dataset for the task, then training and evaluating a simple deep learning model. We used the German traffic signs dataset from (Stallkamp et al., 2012).
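The dataset-adaptation step for such a simplified inpainting task might look roughly as follows. This is a hypothetical sketch using plain Python lists in place of image tensors, and `make_inpainting_sample` is an invented helper name; the actual homework used PyTorch.

```python
import random

def make_inpainting_sample(image, hole_h, hole_w, rng=random):
    """Turn an ordinary image into a (masked input, mask, original) triple.

    A hole_h x hole_w rectangle is cut out at a random location; the mask
    marks the pixels the model must reconstruct during training.
    """
    h, w = len(image), len(image[0])
    top = rng.randrange(h - hole_h + 1)
    left = rng.randrange(w - hole_w + 1)
    mask = [[0] * w for _ in range(h)]
    masked = [row[:] for row in image]  # copy so the original stays intact
    for r in range(top, top + hole_h):
        for c in range(left, left + hole_w):
            mask[r][c] = 1    # 1 = pixel to be reconstructed
            masked[r][c] = 0  # hole filled with a constant value
    return masked, mask, image
```

The same idea transfers directly to a classification dataset such as the traffic-sign images: the labels are discarded and each image becomes its own reconstruction target.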
48
+
49
+ Another methodology we used in the introductory course was to provide the students with homework assignments in the form of Jupyter notebooks. The notebooks contained both guided and unguided exercises. The guided exercises served the purpose of introducing the usage of libraries such as Pandas (pandas development team, 2020; Wes McKinney, 2010), Matplotlib (Hunter, 2007), and Scikit-learn (Pedregosa et al., 2011). Students had to add their code to specific indicated parts of the notebook. Later in the course, more unguided exercises were presented, where the students were only provided with a general task formulation, with neither code snippets nor structure.
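As an illustration of the guided format (a hypothetical cell, not taken from the actual assignments), the scaffold fixes the overall structure and marks the single spot a student completes:

```python
from collections import Counter

def class_distribution(labels):
    """Return each label's relative frequency, e.g. to check class balance."""
    counts = Counter(labels)
    total = len(labels)
    # --- student task: fill in the dictionary comprehension below ---
    return {label: n / total for label, n in counts.items()}
```

An unguided variant of the same exercise would state only the goal ("inspect whether the dataset is balanced") and leave function design, library choice, and structure entirely to the student.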
50
+
51
+ Homework assignments also included a few essay questions to assess the students' understanding of the concepts and their ability to formulate their ideas. Given a specific business context from our research, students of the introductory course were asked to identify use-cases for machine learning, motivate them, and prioritize them according to their business impact. For the project work, we opted for a high-level project scope, where groups of three to four students were asked to develop a concrete proposal with their ideas and plans. The requirements were to adhere to a set of milestones and deliverables, while providing room for the students to extend the project scope, integrate auxiliary modules in their implementation, and explore new ideas. The project for the introductory course involved developing a complete solution using machine learning for an actual business use-case based on an internal dataset we curated from a running system. The project of the advanced course was to develop a face recognition pipeline using existing state-of-the-art deep learning models. Students from the advanced course used GPU resources provided by our industry partner to train and fine-tune their deep learning models.
52
+
53
+ ### 3.3. Feedback Channels
54
+
55
+ We believe feedback is an integral component of the learning process; therefore, we adopted multiple feedback channels, where each channel is dedicated to a specific scope or element of the course. Table 1 lists the channels we used and the corresponding scopes of questions.
56
+
57
+ Table 1. Feedback channels and corresponding scope of questions.
58
+
59
+ <table><tr><td>Question Scope</td><td>Channel</td></tr><tr><td>General</td><td>Chat platform and forum</td></tr><tr><td>Homework assignments</td><td>Q&A live sessions before submission; textual comments after submission</td></tr><tr><td>Project-related</td><td>Weekly office-hours; feedback after presentations</td></tr></table>
60
+
61
+ ## 4. Experiences
62
+
63
+ We report on our experiences from both courses, focusing on three major aspects: methods of content delivery, scope of tasks, and project work.
64
+
65
+ ### 4.1. Content Delivery
66
+
67
+ We had a positive experience with integrating a mixture of methods in the same session when delivering content to the students. Concretely, in the case of teaching practical machine learning, we gradually arrived at the following sequence of teaching activities in our sessions: a short presentation with slides, a live-coding session, group work, and finally presenting and discussing with all groups. Live-coding in an empty Jupyter notebook worked out better than going through a prepared notebook and executing the cells. Despite being more time-consuming, we found that it improved the engagement and follow-up of the students by regulating the pace of presenting and developing ideas. We also found that using a running use-case across several modules makes it easier for students to follow up and connect the different topics; here we were inspired by the end-to-end machine learning project chapter from (Géron, 2019). Due to the practical nature of both courses, we designed and delivered the content following a suitable process model: CRISP-DM for the introductory course and a more DL-specific process model adopted from (Raghu & Schmidt, 2020). This turned out to be useful in understanding the holistic overview of the iterative process and logically connecting the various steps.
68
+
69
+ ### 4.2. Scope of Tasks
70
+
71
+ When scoping tasks for the students, we found out that realistic scenarios involving ambiguity provide a better learning opportunity for students. They simulate real-life ML problems and enable students to stretch their thoughts beyond standard toy examples. They also touch upon important
72
+
73
+ skills such as identifying possible use-cases for ML given a complex business scenario, formulating each identified use-case correctly, and validating assumptions based on the available data. However, they come at the cost of being more challenging and time-consuming for both the teacher and the student. For the more practical phases of the ML process, such as learning how to use a package, guided exercises proved very successful as a first step that can later be complemented with unguided exercises. Although unguided exercises are relatively challenging, they represent a more realistic scenario, allowing the students to develop their own work and tackle the problem systematically.
74
+
75
+ ### 4.3. Project Work
76
+
77
+ From our experience, a flexible project scope increased the motivation of the students. They formulated major parts of the project by themselves and demonstrated full ownership of the whole work. Some groups explored new ideas, complemented the suggested pipeline with more tasks, and made demos for their implementations. When forming the groups, we found that mixing them heterogeneously with respect to background engages all students and evenly distributes the workload. During the project phase, we realized the importance of milestones, where the students can present their work and get constructive feedback. As explained, this was conducted in the form of intermediate presentations and regular office-hours, where meetings were held with each group separately.
78
+
79
+ ![01963a92-07aa-77d6-9203-e51c7038bf46_2_908_1234_684_412_0.jpg](images/01963a92-07aa-77d6-9203-e51c7038bf46_2_908_1234_684_412_0.jpg)
80
+
81
+ Figure 1. Wordcloud of the textual responses of the students.
82
+
83
+ ## 5. Student Feedback
84
+
85
+ Course participants were asked to evaluate several aspects of the course through an on-line survey. The survey consisted mainly of multiple-choice questions along with two essay questions where the students could deliver further feedback. The response rates for the introductory and the advanced course were 50% and 38%, respectively. Although the sample size is relatively small, it indicates a general trend in the experience of the students. The feedback of the students from the essay questions is summarized in a wordcloud in Figure 1.
86
+
87
+ ### 5.1. Overall Learning Success
88
+
89
+ In the survey, students were asked to self-assess their skills in machine learning before and after the course. Students could respond on a 5-point Likert scale (1 = not at all; 5 = very much). The averages rose from 2.4 to 3.7 for the introductory course and from 2.9 to 4.2 for the advanced course, a difference of 1.3 points in each case. Additionally, every student assessed their skills with a higher score after the course than before.
90
+
91
+ ### 5.2. Best-Evaluated Teaching Methods
92
+
93
+ On the same Likert scale, students evaluated the different teaching methods we used during the course sessions. The top five methods in both courses are shown in Table 2.
94
+
95
+ Clearly, the project work and coding assignments were the top-rated methods. Other methods such as "Group-work during the sessions", "Homework essay questions", and "Literature recommendations" received lower average scores: 3.9, 3.3, and 2.9, respectively.
96
+
97
+ Table 2. Top-5 methods evaluated by the students and their average score on a 5-point scale (1 = not helpful; 5 = very helpful).
98
+
99
+ <table><tr><td>Method</td><td>Introductory</td><td>Advanced</td></tr><tr><td>Working on the project</td><td>4.75</td><td>5.0</td></tr><tr><td>Learning from exemplary code</td><td>4.83</td><td>4.75</td></tr><tr><td>Coding homework</td><td>4.66</td><td>4.88</td></tr><tr><td>Office hours & individual discussions</td><td>4.36</td><td>4.83</td></tr><tr><td>Slide presentations via Zoom</td><td>4.25</td><td>4.5</td></tr></table>
100
+
101
+ ### 5.3. Group-Work and Individual-Work
102
+
103
+ Another interesting outcome of the survey was that the students evaluated individual learning consistently higher than group-work, except for the course project. To put these results in context, all in-class group-work activities were based on random assignment of group members via the on-line conferencing software. In contrast, students had the chance to work together within the same group for an extended period of time on the course project. Since both courses were conducted remotely, the lack of social interaction among the groups may have had an impact on these results, especially when groups are temporarily formed during on-line sessions.
104
+
105
+ ### 5.4. Crash Courses
106
+
107
+ Since we used crash courses to teach practical skills at the start of each course, students were also asked to evaluate them. On the 5-point scale, students from the introductory course evaluated the Python crash course with an average of 3.8, taking into consideration that 40% of the participants were previously familiar with Python. For the advanced course, participants evaluated the PyTorch crash course with an average of 4.8, where only 10% of the participants had used it at least once before the course.
108
+
109
+ ## 6. Conclusion
110
+
111
+ In this paper, we presented the teaching methods used in two practical ML courses. We also summarized our experiences as teachers with both courses and the feedback from the students collected via a survey. We derive recommendations for teachers on the methods to use for planning the sessions, delivering content, designing assignments, and choosing project work. A summary of our recommendations is presented in Table 3.
112
+
113
+ Table 3. Summary of recommendations for practical machine learning courses.
114
+
115
+ <table><tr><td>Teaching</td><td>Evaluation & Feedback</td></tr><tr><td>Crash courses & block sessions (to level up skills)</td><td>Coding homework (mix guided & unguided)</td></tr><tr><td>Slide presentations (concise, as introduction)</td><td>Mini-projects (include complete ML pipelines)</td></tr><tr><td>Jupyter notebooks (live-coding)</td><td>Project scope (flexible, real-world scenarios)</td></tr><tr><td>Exemplary code (well-written and documented)</td><td>Project groups (mix with respect to backgrounds)</td></tr><tr><td>In-session group work (focus on coding)</td><td>Regular feedback (also during project phases)</td></tr></table>
116
+
117
+ Both in our experience and according to the students' feedback, practical coding tasks based on a realistic use-case are a successful teaching method for machine learning. Additionally, teaching techniques that involve live-coding sessions have contributed to a better learning experience for the students. For coding assignments, the combination of guided and unguided exercises trains the students to progress from simple tasks to more advanced and complex ones. Finally, projects provide a learning opportunity for students, given that they are complemented with regular feedback and concrete milestones.
118
+
119
+ ## References
120
+
121
+ Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., and Devin, M. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
122
+
123
+ Chan, J. Learn Python in One Day and Learn it Well: Python for Beginners with Hands-on Project: the Only Book You Need to Start Coding in Python Immediately. CreateSpace Independent Publishing, 2015.
124
+
125
+ Chollet, F. et al. Keras. https://keras.io, 2015.
126
+
127
+ Engel, C. and Coleman, N. AI is not just a technology. In Proceedings of the First Teaching Machine Learning and Artificial Intelligence Workshop, volume 141 of Proceedings of Machine Learning Research, pp. 23-28. PMLR, 2021. URL http://proceedings.mlr.press/v141/engel21a.html.
128
+
129
+ Géron, A. Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: Concepts, tools, and techniques to build intelligent systems. O'Reilly Media, 2019.
130
+
131
+ Goodfellow, I., Bengio, Y., Courville, A., and Bengio, Y. Deep learning, volume 1. MIT press Cambridge, 2016.
132
+
133
+ Grus, J. Data science from scratch: first principles with python. O'Reilly Media, 2019.
134
+
135
+ Hunter, J. D. Matplotlib: A 2d graphics environment. Computing in Science & Engineering, 9(3):90-95, 2007. doi: 10.1109/MCSE.2007.55.
136
+
137
+ Knaflic, C. N. Storytelling with data: A data visualization guide for business professionals. John Wiley & Sons, 2015.
138
+
139
+ Long, D. and Magerko, B. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, pp. 1-16, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450367080. doi: 10.1145/3313831.3376727. URL https://doi.org/10.1145/3313831.3376727.
140
+
141
+ Needham, T. Python: For Beginners A Crash Course Guide To Learn Python in 1 Week. Draft2Digital, 2020. ISBN 9781393160939. URL https://books.google.de/books?id=ZppkxwEACAAJ.
142
+
143
+ The pandas development team. pandas-dev/pandas: Pandas, February 2020. URL https://doi.org/10.5281/zenodo.3509134.
144
+
145
+ Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., and Antiga, L. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8026-8037, 2019.
146
+
147
+ Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cour-napeau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
148
+
149
+ Provost, F. and Fawcett, T. Data Science for Business: What you need to know about data mining and data-analytic thinking. " O'Reilly Media, Inc.", 2013. ISBN 1449374298.
150
+
151
+ Raghu, M. and Schmidt, E. A survey of deep learning for scientific discovery. arXiv preprint arXiv:2003.11755, 2020.
152
+
153
+ Severance, C. R. Python for everybody. Charles Severance, 2009.
154
+
155
+ Stallkamp, J., Schlipsing, M., Salmen, J., and Igel, C. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, (0): -, 2012. ISSN 0893-6080. doi: 10.1016/j.neunet.2012.02.016. URL http://www.sciencedirect.com/science/article/pii/S0893608012000457.
156
+
157
+ Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S., et al. Artificial intelligence and life in 2030: the one hundred year study on artificial intelligence. 2016.
158
+
159
+ VanderPlas, J. Python data science handbook: Essential tools for working with data. " O'Reilly Media, Inc.", 2016.
160
+
161
+ Wes McKinney. Data Structures for Statistical Computing in Python. In Stéfan van der Walt and Jarrod Millman (eds.), Proceedings of the 9th Python in Science Conference, pp. 56 - 61, 2010. doi: 10.25080/Majora-92bf1922-00a.
162
+
163
+ Wirth, R. and Hipp, J. CRISP-DM: Towards a standard process model for data mining. In Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining, volume 1. Springer-Verlag London, UK.
164
+
165
+ Zeng, Y., Lin, Z., Yang, J., Zhang, J., Shechtman, E., and Lu, H. High-resolution image inpainting with iterative confidence feedback and guided upsampling. In European Conference on Computer Vision, pp. 1-17. Springer, 2020.
166
+
167
+ Zhang, A., Lipton, Z. C., Li, M., and Smola, A. J. Dive into Deep Learning. 2020. https://d2l.ai.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/ijuM-MVwVEk/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,164 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § EXPERIENCES FROM TEACHING PRACTICAL MACHINE LEARNING COURSES TO MASTER STUDENTS WITH MIXED BACKGROUNDS
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Machine learning education has become more accessible and relevant to students from various backgrounds. Practical courses complement theoretical lectures by focusing on applied machine learning. In this work, we report on our experiences from teaching two machine learning practical courses to master students from different study programs: an introductory and an advanced course. We present a summary of the teaching and evaluation methods used in both courses. We summarize our experiences and the feedback collected from the students through a survey. We conclude with our recommendations on teaching and designing practical machine learning courses.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ Machine Learning (ML) has recently grown in both relevance and popularity due to its evolving potential in various fields of research. ML technologies are gradually having a significant impact on everyday life in modern societies (Stone et al., 2016). Because of this, education providers are expanding their ML-related course portfolios (Engel & Coleman, 2021). In this context, there is a pressing need for tested and proven educational methods to teach competences related to ML techniques and tools (Long & Magerko, 2020).
12
+
13
+ In this paper, we provide a description of two master-level practical ML courses. We also present the methods we used to teach both courses and an evaluation of those methods. In this evaluation, we summarize our experience as teachers and the feedback of the students collected through an on-line survey. We want to share our experience and provide recommendations on the best methods to conduct practical ML courses.
14
+
15
+ § 2. BACKGROUND
16
+
17
+ Both ML courses presented in this paper are application-oriented courses (in German, "Praktikum") each worth 10 ECTS (European Credit Transfer System). The courses are offered at a German university to master students from different disciplines around computer science, information systems, data engineering, and robotics. The first course was designed for advanced learners with a focus on Deep Learning (DL) and took place between October 2020 and March 2021. The second course was an introductory course in applied ML and has been running since April 2021. In both courses, 24 students participated. The students were selected via a matching system that considers both the preferences of the students and the prioritization of the teachers.
18
+
19
+ The courses were taught completely on-line via Zoom and were split into two main phases: a teaching phase and a project phase. The aim of this organizational split was to allow the students to learn relevant skills in the first part and to apply those skills in a practical project afterwards, where they formed groups of three to four students. The advanced course had a shorter teaching phase (3 weeks) and a longer project phase (11 weeks), while the introductory course had a longer teaching phase (7 weeks) and a shorter project phase (6 weeks). Overall grading and project scope were adapted accordingly.
20
+
21
+ In both courses, Python was the programming language of choice. Furthermore, we relied heavily on Jupyter notebooks for coding tasks, both during the sessions and as homework assignments. In the introductory course, we focused on data science process models such as the Cross Industry Standard Process for Data Mining (CRISP-DM) (Wirth & Hipp), business and data understanding, and the Python stack for ML. In the advanced course, the focus was rather on DL projects and the tools required to develop, test, and deploy DL models. The topics included an introduction to common tools such as PyTorch (Paszke et al.), Keras (Chollet et al., 2015), H2O Driverless AI, containerization, and applications of deep learning. Both courses had a module about the ethical dimension of ML and Artificial Intelligence.
22
+
23
${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.

Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.

## 3. Teaching Methods

We briefly discuss the methods we used in the two practical courses. The presented methods are organized into three categories: content delivery, evaluation methods, and feedback channels.

### 3.1. Content Delivery

The sessions were mainly planned for delivering content to the students, and we employed different methods to structure them. For example, block sessions were used to combine theoretical knowledge with hands-on practice: the students would listen to a presentation, work on a simple task, and receive feedback afterwards. We used such an approach to introduce the basics of scikit-learn (Pedregosa et al., 2011) and to teach data loading pipelines in PyTorch (Paszke et al.), in the introductory and advanced courses, respectively. Additionally, crash courses were used to quickly bring all students to a common level of knowledge. In the introductory course, we held a one-day Python crash course based on ideas and content from (Chan, 2015; Needham, 2020; Severance, 2009). In the advanced course, a two-day crash course on PyTorch was held, since the majority of the audience was already familiar with Keras (Chollet et al., 2015) and/or TensorFlow (Abadi et al., 2016).

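As a dependency-free illustration of the `Dataset`/`DataLoader` pattern taught in those block sessions, the toy re-implementation of the protocol below is ours, not the course material; in class one would subclass `torch.utils.data.Dataset` and use `torch.utils.data.DataLoader` instead:

```python
# Minimal sketch of the PyTorch Dataset/DataLoader protocol in plain Python.
# A map-style dataset only needs __len__ and __getitem__; a loader then
# groups indices into batches.

class SquaresDataset:
    """Toy dataset: item i is the pair (i, i**2)."""

    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        return i, i ** 2


def simple_loader(dataset, batch_size):
    """Yield lists of samples, mimicking a non-shuffling DataLoader."""
    for start in range(0, len(dataset), batch_size):
        stop = min(start + batch_size, len(dataset))
        yield [dataset[i] for i in range(start, stop)]


ds = SquaresDataset(5)
batches = list(simple_loader(ds, batch_size=2))
# batches == [[(0, 0), (1, 1)], [(2, 4), (3, 9)], [(4, 16)]]
```

The same two-method protocol is what students implement when wrapping their own image folders or CSV files for training.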
From a content perspective, different methods were used to communicate knowledge to the students, encourage interaction, and provide room for discussion. Several sessions started with a short slide presentation, which served as an introduction as well as a warm-up for the topic being discussed. Presentations were used more frequently in the introductory course, iterating over the process of applied machine learning, the characteristics and challenges of each phase in the process, and commonly used tools in the Python stack. The second element we adopted heavily was the use of Jupyter notebooks. The notebooks were made available to the students before class and were discussed mostly after the introductory presentations. During the sessions, we coded the Jupyter notebooks live rather than walking through notebooks prepared prior to the session. Due to the online format, we adopted the strategy of alternating between presentation slides and Jupyter notebooks (coding) in blocks of 20-30 minutes each. The goal was to overcome the interaction difficulties of the remote setup and to connect abstract knowledge with practical use.

We integrated in-class group work to encourage discussion and enrich the learning experience. The strategy was to divide the students into groups of three to four and ask them to work on a specific task related to the presented content. The scope of the task was quite versatile, ranging from coding tasks to discussing ideas and brainstorming machine learning solutions. Coding tasks included solving small problems, reading and understanding code, or reading documentation and applying a solution to a different problem or dataset. After the time dedicated to the task was over, each group briefly presented what they had achieved or learned to the other groups and the instructors. When relevant, a short feedback round followed each presentation.

### 3.2. Evaluation Methods

In order to measure the learning progress and give the students the chance to apply the concepts learned during the sessions, students had to work on various tasks as graded homework and project work. In the introductory course, a mini-project covering most of the basic concepts in Python had to be completed. For the advanced course, students had to implement a complete pipeline for a simplified scenario of image inpainting (Zeng et al., 2020) using PyTorch, starting with adapting a dataset for the task, then training and evaluating a simple deep learning model. We used the German traffic signs dataset from (Stallkamp et al., 2012).

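Adapting a classification dataset for inpainting essentially means masking out part of each image and keeping the untouched original as the training target. A minimal sketch of that idea on a toy grayscale image follows; the helper name, square mask, and fill value are our illustrative assumptions, not the exact course task:

```python
# Turn an image into an (input, target) pair for inpainting training:
# the input has a region blanked out, the target is the original image.

def make_inpainting_pair(image, top, left, size, fill=0):
    """image: 2D list of pixel values. Returns (masked_copy, original)."""
    masked = [row[:] for row in image]  # copy each row so the original survives
    for r in range(top, top + size):
        for c in range(left, left + size):
            masked[r][c] = fill
    return masked, image


original = [[9] * 4 for _ in range(4)]  # a flat 4x4 "image" of value 9
masked, target = make_inpainting_pair(original, top=1, left=1, size=2)
# The 2x2 block starting at (1, 1) is zeroed in `masked`,
# while `target` still holds the original pixel values.
```

A model is then trained to reconstruct `target` from `masked`, with the reconstruction loss typically restricted to (or weighted towards) the masked region.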
Another method we used in the introductory course was to provide the students with homework assignments in the form of Jupyter notebooks. The notebooks contained both guided and unguided exercises. The guided exercises served the purpose of introducing libraries such as Pandas (pandas development team, 2020; Wes McKinney, 2010), Matplotlib (Hunter, 2007), and scikit-learn (Pedregosa et al., 2011); students had to add their code to specifically indicated parts of the notebook. Later in the course, more unguided exercises were presented, where the students were only given a general task formulation with neither code snippets nor structure.

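A guided exercise cell typically pairs working scaffold code with a small, clearly marked gap for the student to fill. A dependency-free illustration of that shape is sketched below; the task and the helper name are hypothetical, not taken from the actual notebooks:

```python
# Guided exercise (illustrative): the scaffold and docstring are given,
# the student completes only the marked body.

def train_test_split_simple(items, test_ratio):
    """Split a list into (train, test), using the last test_ratio share as test."""
    # --- student code starts here ---
    n_test = int(len(items) * test_ratio)
    return items[: len(items) - n_test], items[len(items) - n_test :]
    # --- student code ends here ---


data = list(range(10))
train, test = train_test_split_simple(data, test_ratio=0.2)
# train holds the first 8 items, test the last 2.
```

An unguided version of the same exercise would state only the goal ("split the data for evaluation and justify your ratio") and leave both structure and implementation to the student.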
Homework assignments also included a few essay questions to assess the students' understanding of the concepts and their ability to formulate their ideas. Given a specific business context from our research, students of the introductory course were asked to identify use-cases for machine learning, motivate them, and prioritize them according to their business impact. For the project work, we opted for a high-level project scope, where groups of three to four students were asked to develop a concrete proposal with their ideas and plans. The requirements were to adhere to a set of milestones and deliverables, while providing room for the students to extend the project scope, integrate auxiliary modules into their implementation, and explore new ideas. The project for the introductory course involved developing a complete machine learning solution for an actual business use-case based on an internal dataset we curated from a running system. The project of the advanced course was to develop a face recognition pipeline using existing state-of-the-art deep learning models. Students from the advanced course used GPU resources provided by our industry partner to train and fine-tune their deep learning models.

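Face recognition pipelines built on pretrained models usually reduce to comparing embedding vectors. A minimal sketch of that final matching step is shown below; the vectors and the threshold are toy values we made up, and a real pipeline would obtain embeddings from a deep model:

```python
import math

# Match two faces by the cosine similarity of their embedding vectors.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def same_person(emb1, emb2, threshold=0.8):
    """Decide whether two embeddings belong to the same identity."""
    return cosine_similarity(emb1, emb2) >= threshold


alice_a = [0.9, 0.1, 0.4]
alice_b = [0.8, 0.2, 0.5]   # another photo of the same person
bob = [-0.7, 0.6, 0.1]

# alice_a vs alice_b is highly similar; alice_a vs bob is not.
```

The threshold is a design choice that trades false accepts against false rejects and is usually tuned on a held-out validation set.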
### 3.3. Feedback Channels

We believe feedback is an integral component of the learning process; therefore, we adopted multiple feedback channels, each dedicated to a specific scope or element of the course. Table 1 lists the channels we used and the corresponding scopes of questions.

Table 1. Feedback channels and corresponding scope of questions.

| Question Scope | Channel |
| --- | --- |
| General | Chat platform and forum |
| Homework assignments | Q&A live sessions before submission; textual comments after submission |
| Project-related | Weekly office-hours; feedback after presentations |

## 4. Experiences

We report on our experiences from both courses, focusing on three major aspects: methods of content delivery, scope of tasks, and project work.

### 4.1. Content Delivery

We had a positive experience with integrating a mixture of methods in the same session when delivering content to the students. Concretely, in the case of teaching practical machine learning, we gradually arrived at the following sequence of teaching activities in our sessions: a short presentation with slides, a live-coding session, group work, and finally presenting and discussing with all groups. Live-coding in an empty Jupyter notebook worked out better than going through a prepared notebook and executing its cells. Despite being more time-consuming, we found that it improved the engagement and the follow-up of the students by regulating the pace of presenting and developing ideas. We also found that using a running use-case across several modules makes it easier for students to follow and connect the different topics; here we were inspired by the end-to-end machine learning project chapter from (Géron, 2019). Due to the practical nature of both courses, we designed and delivered the content following a suitable process model: CRISP-DM for the introductory course and a more DL-specific process model adopted from (Raghu & Schmidt, 2020). This turned out to be useful for conveying a holistic overview of the iterative process and logically connecting the various steps.

### 4.2. Scope of Tasks

When scoping tasks for the students, we found that realistic scenarios involving ambiguity provide a better learning opportunity. They simulate real-life ML problems and enable students to stretch their thinking beyond standard toy examples. They also touch upon important skills such as identifying possible use-cases for ML given a complex business scenario, formulating each identified use-case correctly, and validating assumptions based on the available data. However, they come at the cost of being more challenging and time-consuming for both the teacher and the student. For the more practical phases of the ML process, such as learning how to use a package, guided exercises proved very successful as a first step that can later be complemented with unguided exercises. Although unguided exercises are relatively challenging, they represent a more realistic scenario, allowing the students to develop their own work and tackle the problem systematically.

### 4.3. Project Work

From our experience, a flexible project scope increased the motivation of the students. They formulated major parts of the project by themselves and demonstrated full ownership of the whole work. Some groups explored new ideas, complemented the suggested pipeline with additional tasks, and made demos for their implementations. When forming the groups, we found that mixing them heterogeneously with respect to background engages all students and distributes the workload evenly. During the project phase, we realized the importance of milestones, where the students can present their work and get constructive feedback. As explained, this was conducted in the form of intermediate presentations and regular office-hours, where meetings were held with each group separately.

Figure 1. Wordcloud of the textual responses of the students.

## 5. Student Feedback

Course participants were asked to evaluate several aspects of the course through an online survey. The survey consisted mainly of multiple-choice questions along with two essay questions where the students could deliver further feedback. The response rates for the introductory and the advanced course were 50% and 38%, respectively. Although the sample size is relatively small, it indicates a general trend in the experience of the students. The feedback of the students from the essay questions is summarized in a wordcloud in Figure 1.

### 5.1. Overall Learning Success

In the survey, students were asked to self-assess their skills in machine learning before and after the course. Students could respond on a 5-point Likert scale (1 = not at all; 5 = very much). The averages for the introductory course and the advanced course moved from (2.4, 2.9) to (3.7, 4.2), a difference of 1.3 points in each case. Additionally, every student assessed their skills with a higher score after the course than before.

### 5.2. Best-Evaluated Teaching Methods

On the same Likert scale, students evaluated the different teaching methods we used during the course sessions. The top five methods in both courses are shown in Table 2.

Clearly, the project work and coding assignments were the top-rated methods. Other methods such as "group work during the sessions", "homework essay questions", and "literature recommendations" received lower average scores: 3.9, 3.3, and 2.9, respectively.

Table 2. Top-5 methods evaluated by the students and their average score on a 5-point scale (1 = not helpful; 5 = very helpful).

| Method | Introductory | Advanced |
| --- | --- | --- |
| Working on the project | 4.75 | 5.0 |
| Learning from exemplary code | 4.83 | 4.75 |
| Coding homework | 4.66 | 4.88 |
| Office hours & individual discussions | 4.36 | 4.83 |
| Slide presentations via Zoom | 4.25 | 4.5 |

### 5.3. Group Work and Individual Work

Another interesting outcome of the survey was that the students consistently evaluated individual learning higher than group work, except for the course project. To put these results in context, all in-class group-work activities were based on random assignment of group members via the online conferencing software, whereas on the course project, students had the chance to work together within the same group for an extended period of time. Since both courses were conducted remotely, the lack of social interaction among the groups may have had an impact on these results, especially when groups are temporarily formed during online sessions.

### 5.4. Crash Courses

Since we used crash courses to teach practical skills at the start of each course, students were also asked to evaluate them. On the 5-point scale, students from the introductory course evaluated the Python crash course with an average of 3.8, taking into consideration that 40% of the participants were previously familiar with Python. For the advanced course, participants evaluated the PyTorch crash course with an average of 4.8, where only 10% of the participants had used PyTorch at least once before the course.

## 6. Conclusion

In this paper, we presented teaching methods used in two practical ML courses. We also summarized our experiences as teachers with both courses and the feedback from the students collected via a survey. We derived recommendations for teachers on the methods to use for planning the sessions, delivering content, designing assignments, and choosing project work. A summary of our recommendations is presented in Table 3.

Table 3. Summary of recommendations for practical machine learning courses.

| Teaching | Evaluation & Feedback |
| --- | --- |
| Crash courses & block sessions (to level up skills) | Coding homework (mix guided & unguided) |
| Slide presentations (concise, as introduction) | Mini-projects (include complete ML pipelines) |
| Jupyter notebooks (live-coding) | Project scope (flexible, real-world scenarios) |
| Exemplary code (well-written and documented) | Project groups (mix with respect to backgrounds) |
| In-session group work (focus on coding) | Regular feedback (also during project phases) |

Both in our experience and according to the students' feedback, practical coding tasks based on a realistic use-case are a successful teaching method for machine learning. Additionally, teaching techniques that involve live-coding sessions have contributed to a better learning experience for the students. For coding assignments, the combination of guided and unguided exercises trains the students to progress from simple tasks to more advanced and complex ones. Finally, projects provide a learning opportunity for students, provided that they are complemented with regular feedback and concrete milestones.

papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/k9jaVBHot_/Initial_manuscript_md/Initial_manuscript.md ADDED
# Teaching the Essentials of Machine Learning in the Context of Quantitative Information Literacy

Anonymous Authors ${}^{1}$

## Abstract

X College is a small liberal arts postsecondary institution in the United States. An information literacy course, Calling Bull, serves as an introductory data science class as well as a prerequisite-free quantitative literacy class. In this context, we spend a week discussing machine learning, with an emphasis on facial recognition algorithms. The focus is on the general algorithmic approach, critical inquiry into the process, and careful interpretation of results presented in research or other decision-making. This module relies on the use of open educational materials, discussion, and careful attention to issues of marginalization and algorithmic justice.

## 1. Introduction

Calling Bull is an open educational course developed by quantitative biologist Carl Bergstrom and information scientist Jevin West. Its goal is to prepare students for a world of big data by introducing them to tools and techniques for sorting through information in the data economy and understanding what drives information in the digital and scientific ecosystems.

Bergstrom and West published a book by the same name in 2020, but their website http://callingbullshit.org/ (and the K-12 friendly companion website callingbull.org) has a suggested syllabus, case studies, and links to lectures on YouTube (Bergstrom & West, 2021).

In 2019, I adopted and adapted Calling Bull for a specific context at X College in the Digital and Computational Studies program, as part of a broader set of data science courses.

X College is a small liberal arts school in the United States, and the Digital and Computational Studies program is a new interdisciplinary department designed to bridge multiple disciplines with computation and digital studies.

The goals of the Calling Bull course from an institutional perspective were 1) to collaborate on a theme of interest to social science majors, such as economics and politics, 2) to provide a gentle introduction to programming with R, 3) to reinforce key skills such as interpreting graphs and understanding uncertainty in science as well as to introduce data science, and 4) to meet the quantitative literacy standards. All students, regardless of major, take one such quantitative literacy course. I found Calling Bull a particularly compelling theme for a quantitative literacy course because it captures key quantitative reasoning and critical thinking skills that I want all graduates of our college to have.

The course learning objectives for students are${}^{1}$:

## This course is designed as a community learning journey

Together, we will:

- Metacognitively engage in contemporary issues in equity and social justice related to their digital world, community, and identity. (Think about this class every time you hear the news, make daily choices, or even put your shoes on.)

- Play with computational ideas creatively, using a growth mindset which values revision and experimentation, and demonstrate community leadership skills as a collaborator that shares strengths, builds weaknesses, and contributes to a broader shared understanding. (Participate in teamwork in respectful ways that allow people to relax and play with ideas.)

- Recognize and translate between algebraic, numeric, visual, and verbal representations of data. (Remain vigilant for bull contaminating your information diet and recognize said bull whenever and wherever you encounter it.)

- Design models of and computationally investigate ideas in practical and professional spaces, and communicate the process and meaning to others. (Figure out for yourself precisely why a particular bit of bull is bull; provide a statistician or fellow scientist with a technical explanation of why a claim is bull - using R and employing proper data visualization techniques where necessary; and provide your "casually racist uncle" with an accessible and persuasive explanation of why a claim is bull.)

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.

Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.

${}^{1}$ Most of the parentheticals are borrowed with permission directly from the syllabus on the Calling Bull website (Bergstrom & West, 2021).

---

To support these course learning objectives, I blended the Calling Bull curriculum from Bergstrom and West with programming instruction and projects in R, and supplemented it with daily data visualization activities, metacognitive reflections, and other activities. A full set of curriculum can be found on QUBESHub.org. An aggregated list of this and other related community-contributed curriculum can be found at the Calling Bull Instructors group at callingbull.qubeshub.org.

An emphasis of all courses taught by Digital and Computational Studies program faculty is the explicit attention to human rights and social justice, interrogating racism, sexism, bigotry, and other forms of exclusion in the design of digital spaces and algorithms. In this context, students have multiple opportunities to ask questions about how our assumptions and biases affect our data collection, models, and visualizations, as well as the interpretation and communication of our data analysis and model results.

Below, I present a module constructed for our version of the Calling Bull class, which uses the Bergstrom and West lecture videos and suggested readings as a foundation. These are supplemented with other freely available resources - videos by Joy Buolamwini and a research paper by Garg et al. utilizing word embeddings (Garg et al., 2018) - to develop a broader view of computing ethics and human rights beyond concerns driven by phrenology pseudoscience, and to broaden the voices of those represented in the conversations about the ethics, process, uses, and abuses of machine learning models. This module occurs about halfway through the semester, just before the mid-term independent project.

## 2. Machine Learning Module

The course is designed as a series of one-week modules with two 80-minute classes each. However, I am presenting the most recent version, which was conducted online in a compressed 7-week format. All one-week modules were reallocated to a single 1-hour-45-minute synchronous session with additional asynchronous time. In the two-classes-per-week format, I would assign reading for the first day to introduce the topic, and then an R lab experience on the second day using that theme, typically based on a case study, which would be due at the end of the week along with a written reflection. The big data module is an exception to this general rule - the case studies are part of the thematic introduction, and the "lab" is in essence a racial equity intervention.

### 2.1. Pre-Module reading

Students are asked to watch or read the following items before class. With the exception of the research paper, all are from the Calling Bull website:

- Videos: Lecture 5 Big Data on YouTube

- Case Study: Criminal Machine Learning

- Case Study: Machine Learning about Sexual Orientation?

- Research Paper: Word embeddings quantify 100 years of gender and ethnic stereotypes (Garg et al., 2018).

Each YouTube "lecture" is broken into a handful of shorter videos, which in total typically run less than one hour. In the Lecture 5 Big Data videos, Bergstrom and West introduce the basic idea of machine learning and show how training models on biased data and assumptions can lead to faulty results. They then verbally debunk the validity of two papers which use machine learning algorithms on photographs to determine criminality and sexuality. These arguments are also presented as case studies on the Calling Bull website, and students are asked to read and comment on these and the Garg et al. research paper through the use of Perusall${}^{2}$ (Perusall, 2021). The use of Perusall makes it possible for students to interact socially around the text before class. I am able to read these responses before class and make adaptive changes to the in-class lesson plan described below based on any misconceptions or student questions.

The research paper uses word embeddings, which combine natural language processing and machine learning methods, to explore and expose stereotype bias as expressed through language over time (Garg et al., 2018). Unlike the questionable papers highlighted in the case studies, Garg et al. showcase cutting-edge interdisciplinary techniques, utilize critical quantitative inquiry, and expose the bias of machine learning models, all while exploring an interesting research question. This paper provides a disciplinary bridge between computer science and quantitative critical social science, in which the use of machine learning as a research tool acts against racism and sexism instead of propagating them. The use of word embeddings to measure stereotype bias leverages the fact that machine learning is biased by the data (here, historical documents) with which it is presented. This expands the scope of the discussion from image classification for criminality and sexuality to using machine learning to find language associations that reveal historical perspectives on gender, race, and ethnicity. Finally, the visualizations presented in the results section are accessible to the audience, which at this point has completed a regression project with Pearson correlation coefficient interpretation.

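The core measurement idea in Garg et al. can be illustrated with toy vectors: check how much closer a neutral word's embedding sits to one group's average vector than to another's. The 2-D vectors and word choices below are fabricated purely for illustration; the paper works with embeddings trained on historical corpora:

```python
import math

# Toy illustration of embedding-bias measurement: is a "neutral" word's
# vector closer to the average vector of group A or of group B?

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def average(vectors):
    return [sum(component) / len(vectors) for component in zip(*vectors)]


def bias_score(word_vec, group_a, group_b):
    """Positive: word sits closer to group A; negative: closer to group B."""
    return euclidean(word_vec, average(group_b)) - euclidean(word_vec, average(group_a))


# Fabricated 2-D "embeddings" for illustration only.
she_words = [[1.0, 0.0], [0.9, 0.1]]
he_words = [[0.0, 1.0], [0.1, 0.9]]
nurse = [0.8, 0.2]

# bias_score(nurse, she_words, he_words) > 0 in this toy space: the word
# leans towards the she-vectors, mirroring the kind of association the
# paper measures across decades of text.
```

Tracking how such scores shift across embeddings trained on different decades is what lets the paper quantify changing stereotypes over time.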
---

${}^{2}$ Perusall is a collaborative annotation software. The Perusall grading algorithm itself is presented on the first day of the semester as an example of a black-box proprietary algorithm driven by machine learning, and so is brought up as a reprise in this class as well. Given the critique we make in class, I relax many of the conditions of the Perusall automated grading algorithm, and all scores less than 100% are manually re-graded by a teaching assistant.

---

### 2.2. In-class discussion of readings

On the whole, the class uses critical pedagogy, which seeks to reframe the students as knowledge holders and constructors (Freire & , Firm). After a data visualization activity and sharing reflections from students on their learning from the previous week, we open with a discussion of the videos and readings for the day. I begin by crowdsourcing definitions for big data and data science; after narrowing in on a co-constructed definition, I then ask, "What does it mean that data science is changing our methodologies?" We then discuss various approaches to doing science, such as hypothesis-driven, data-driven, and/or mathematical model-driven, using the Rule-of-Five framework (Diaz Eaton et al., 2019). This helps us derive a common understanding of language and approaches to research questions. The goal is to help students see an inclusive framework for modeling in which data science is one approach, decentering a particular idealization of any one approach in an effort to decolonize our discussion of methods.

We then break into groups${}^{3}$ with the following prompt:

1. List 5 major insights from the videos.

2. How did these insights about big data manifest in the readings?

3. Pick at least one of the 3 readings and post on the Jamboard (Google, 2021): which reading your group picked and two ways the big data concerns or potential bullshit issues manifest in that reading.

In a face-to-face context, groups might write this on the whiteboard or give group-by-group report-outs. This is followed by a co-construction of a list of issues that emerge and how they manifest in the various readings. I then tie up any specific points that have not yet been mentioned in any of the case studies.

Most students understand that the phrenology studies presented in the Calling Bull case studies are "bullshit," because there is no reasonable biological basis for criminality or sexuality. They can also point to the sourcing of the images in the criminality case study as a source of bias. Students also typically arrive at the idea that sexuality is not a binary, which presents an issue for classification model output; the same can be asked about whether criminality should be subject to binary classification as well. Students can also clearly understand the impact and potential legal issues surrounding the use of machine learning models and image classification to detect criminality. However, they are often unprepared when I ask, "Why would someone want to classify someone's sexuality based on their face?"

To help prompt this discussion - because, often, uncomfortable silence follows - I next ask about places in the country or world in which having a sexual identity other than heterosexual, or acting on one's sexual identity, is considered illegal. As of May 2021, there are 69 countries in which homosexuality has been criminalized (BBC News Reality Check Team, 2021). Fifteen states in the United States do not offer full protections against discrimination based on sexuality (Wikipedia, 2021). This allows students to consider the implications of such technology, which re-opens the door to the criminality discussion, as we ask whether there are avenues of scientific inquiry that should not be pursued for the sake of scientific curiosity, particularly when the evidence for such pursuits is non-existent.

In the case of the research paper, students may also readily make the connection between what they know about the history of gender and racial discrimination and bias in the United States and the findings in the paper. Depending on the time allotment and student questions in the Perusall assignment, we might review one or two of the findings and associated visualizations together. Discussion of this paper sets us up quite well for the next module, which also focuses on gender and racial bias in training sets and in models trained on those sets.

### 2.3. Through the lens of Joy Buolamwini

To launch the second phase of the class, I ask the following two questions:

- Where do you see examples of big data being used to benefit your life?

---

${}^{3}$ I take care to construct the groups for this module because of the sensitive nature of the content and discussions. When we are face-to-face, I let them choose groups; online, I tend to group students based on what I can discern from weekly reflections about their personal challenges, struggles, and understanding of social justice in computer science.

---

- Where do you see examples of data or algorithms that are biased?

This conversation adds to an overall uses-and-abuses discussion, but with an emphasis on personal impact. Depending on a student's axes of privilege or marginalization, they may have more positive or negative personal experiences with some of the automated decision-making models that run their lives, from Netflix algorithms and Google ads to credit approval and policing. This portion of the class is intended to provide insight into the continued invisibility of and disregard for Blackness in technology, as manifested by underrepresentation among faces in image training sets. My goal is for white students to develop both empathy and understanding related to this form of oppression and to identify ways to work towards greater justice in computing.

+ As we enter this difficult conversation space, I conduct a short grounding exercise and give permission for folx to move and breathe as needed, particularly folx of color. I introduce Joy Buolamwini as a graduate student and researcher at the Massachusetts Institute of Technology who founded the Algorithmic Justice League, and preface the series of three videos as an evolution of her scholarship with respect to algorithmic justice. In successive order, and with short individual reflection breaks, I show the following three video clips:
140
+
141
+ 1. TED Talk: The Coded Gaze (Buolamwini, 2021b)

+ 2. Gender Shades (Buolamwini, 2021c)

+ 3. Ain't I a Woman? (Buolamwini, 2021a)
146
+
147
+ I also emphasize that I appreciate the rich infusion of visual poetry alongside the computer science justice issue in the last piece, and mention that this is why Joy Buolamwini is called the "Poet of Code." After another short individual reflective break, I again ask students to discuss in groups the "take-home messages" of these videos.
148
+
149
+ Students never fail to mention the quote from the first video: "Who codes matters, how we code matters and why we code matters" (Buolamwini, 2021b). In many semesters, someone has asked about fixing the particular algorithm or the particular training data set. In addition to acknowledging the fix for the specific problem at hand, I steer them toward recognizing a systemic problem beyond the particular example. I have also had women, particularly Black women, so inspired by Joy Buolamwini that they decided to pursue a flavor of computer science for their major.
150
+
151
+ ## 3. Reflections and Conclusions
152
+
153
+ I want to reiterate that despite the discussion here of a "module," the entire course is structured in such a way that this discussion about marginalization in computing and data science is not a one-class event, but part of a broader quantitative critical inquiry arc in the class. The implementation of inclusive pedagogies such as the use of open-source software and educational resources, co-construction of knowledge, and attention to small-group power dynamics is meant to cultivate a supportive community learning journey that can foster this discourse.
156
+
157
+ Most of the attention to the benefits of Open Educational Resources focuses on their redistributive properties for social justice: shifting financial burdens away from students so that they can participate fully in the course experience. Lambert (2018) points out two additional axes of social justice for the classroom: recognitive justice, the intentional inclusion of diverse voices and viewpoints, and representational justice, which allows marginalized people to speak for themselves. While Bergstrom and West are entertaining and easy for students to follow, their worldviews are those of white (cis)men. The series of Joy Buolamwini videos helps accomplish a course experience in which recognitive and representational justice are also present, where a Black woman is both computer scientist and social justice activist, telling her story in her own words.
158
+
159
+ In preparation for facilitating such discussions in class, I point readers to scholars and scholarship in science and technology studies in addition to the resources above. Books such as Race after Technology (Benjamin, 2019), Algorithms of Oppression (Noble, 2018), and Weapons of Math Destruction (O'Neil, 2016) discuss these topics in greater detail. I also recommend discussing class facilitation around topics relating to race, gender, and sexuality with colleagues who teach courses in the social sciences and/or with one's office of equity and inclusion or intercultural education, where staff are professional facilitators of such conversations on campus.
160
+
161
+ Students will not leave this course with the ability to perform machine learning techniques. However, I argue that creating the foundation for critical discussions of such techniques will lead to deeper insight and better science when they reach courses that do teach these techniques. In addition, all corners of our data-driven economy are increasingly dependent on the results of such techniques. Therefore we have an obligation to teach computational approaches such as machine learning to audiences beyond computer science and data science students, including in our general education quantitative literacy courses. Likewise, these ethical discussions should not be relegated to a stand-alone ethics course, but should be infused throughout the computer science curriculum at all levels.
162
+
163
+ ## References
164
+
165
+ BBC News Reality Check Team. Homosexuality: The countries where it is illegal to be gay, 2021. https://www.bbc.com/news/world-43822234 [Accessed: 6-23-21].
168
+
169
+ Benjamin, R. Race after Technology: Abolitionist tools for the new Jim code. Polity, Medford, MA, 2019. ISBN 9781509526406.
170
+
171
+ Bergstrom, C. and West, J. Calling Bullshit, 2021. https://callingbullshit.org/ [Accessed: 6-22-21].
172
+
173
+ Buolamwini, J. Ain't I a Woman?, 2021a. https://youtu.be/QxuyfWoVV98 [Accessed: 6-22-21].
174
+
175
+ Buolamwini, J. TED talk: Coded gaze, 2021b. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms [Accessed: 6-22-21].
176
+
177
+ Buolamwini, J. Gender Shades, 2021c. https://youtu.be/TWWsW1w-BVo [Accessed: 6-22-21].
178
+
179
+ Diaz Eaton, C., Highlander, H. C., Dahlquist, K. D., Ledder, G., LaMar, M., and Schugart, R. C. A "rule-of-five" framework for models and modeling to unify mathematicians and biologists and improve student learning. PRIMUS, 29(8):799-829, 2019.
180
+
181
+ Freire, P. Pedagogy of the Oppressed. Continuum, New York, 30th anniversary edition, 2000. ISBN 9781501305313.
182
+
183
+ Garg, N., Schiebinger, L., Jurafsky, D., and Zou, J. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635-E3644, 2018. ISSN 0027-8424. doi: 10.1073/pnas.1720347115. URL https://www.pnas.org/content/115/16/E3635.
184
+
185
+ Google. Jamboard, 2021. https://jamboard.google.com/ [Accessed: 6-22-21].
186
+
187
+ Lambert, S. R. Changing our (dis)course: A distinctive social justice aligned definition of open education. Journal of Learning for Development, 5(3):225-244, 2018.
188
+
189
+ Noble, S. U. Algorithms of oppression: How search engines reinforce racism. NYU Press, 2018.
190
+
191
+ O'Neil, C. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books, 2016.
192
+
193
+ Perusall. Perusall, 2021. https://perusall.com/ [Accessed: 6-22-21].
194
+
195
+ Wikipedia. LGBT rights in the United States: Summary of state protections, 2021. https://en.wikipedia.org/wiki/LGBT_rights_in_the_United_States#Summary_of_state_protections [Accessed: 6-23-21].
196
+
197
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/k9jaVBHot_/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,149 @@
1
+ § TEACHING THE ESSENTIALS OF MACHINE LEARNING IN THE CONTEXT OF QUANTITATIVE INFORMATION LITERACY
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ X College is a small liberal arts postsecondary institution in the United States. An information literacy course, Calling Bull, serves as an introductory data science class as well as a prerequisite-free quantitative literacy class. In this context, we spend a week discussing machine learning, with an emphasis on facial recognition algorithms. The emphasis is on the general algorithmic approach, critical inquiry into the process, and careful interpretation of results presented in research or other decision-making. This module relies on the use of Open Educational materials, discussion, and careful attention to issues of marginalization and algorithmic justice.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ Calling Bull is an open educational course developed by quantitative biologist Carl Bergstrom and information scientist Jevin West. Its goal is to prepare students for a world of big data by introducing students to tools and techniques for sorting through information in the data economy and understanding what drives information in the digital and scientific ecosystems.
12
+
13
+ Bergstrom and West published a book by the same name in 2020, but their website http://callingbullshit.org/ (and the K-12-friendly companion website callingbull.org) has a suggested syllabus, case studies, and links to lectures on YouTube (Bergstrom & West, 2021).
18
+
19
+ In 2019, I adopted and adapted Calling Bull for a specific context at X College in the Digital and Computational Studies program as part of a broader data science set of courses.
20
+
21
+ $\mathrm{X}$ College is a small liberal arts school in the United States, and the Digital and Computational Studies program is a new interdisciplinary department designed to bridge multiple disciplines with computation and the digital.
22
+
23
+ The goals of the Calling Bull course from an institutional perspective were to 1) collaborate with a theme of interest to social science majors, such as economics and politics, 2) provide a gentle introduction to programming with $\mathrm{R}$, 3) reinforce key skills such as interpreting graphs and understanding uncertainty in science as well as introduce data science, and 4) meet the quantitative literacy standards. All students, regardless of major, take one such quantitative literacy course. I found Calling Bull a particularly compelling theme for a quantitative literacy course because it captures key quantitative reasoning and critical thinking skills that I want all graduates of our college to have.
24
+
25
+ The course learning objectives for students are ${}^{1}$ :
26
+
27
+ § THIS COURSE IS DESIGNED AS A COMMUNITY LEARNING JOURNEY
28
+
29
+ Together, we will:
30
+
31
+ * Metacognitively engage in contemporary issues in equity and social justice related to their digital world, community, and identity. (Think about this class every time you hear the news, make daily choices, or even put your shoes on.)
32
+
33
+ * Play with computational ideas creatively, using a growth mindset which values revision and experimentation and demonstrate community leadership skills as a collaborator that shares strengths, builds weaknesses, and contributes to a broader shared understanding. (Participate in teamwork in respectful ways that allow people to relax and play with ideas).
34
+
35
+ * Recognize and translate between algebraic, numeric, visual, and verbal representations of data. (Remain vigilant for bull contaminating your information diet and recognize said bull whenever and wherever you encounter it.)

+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.

+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.

+ ${}^{1}$ Most of the parentheticals are borrowed with permission directly from the syllabus on the Calling Bull website (Bergstrom & West, 2021)
50
+
51
+ * Design models of and computationally investigate ideas in practical and professional spaces and communicate the process and meaning to others. (Figure out for yourself precisely why a particular bit of bull is bull, provide a statistician or fellow scientist with a technical explanation of why a claim is bull - using $\mathrm{R}$ and employing proper data visualization techniques where necessary, and provide your "casually racist uncle" with an accessible and persuasive explanation of why a claim is bull.)

+ To support these course learning objectives, I blended the Calling Bull curriculum from Bergstrom and West with programming instruction and projects in $\mathrm{R}$, supplemented with daily data visualization activities, metacognitive reflections, and other activities. The full curriculum can be found on QUBESHub.org. An aggregated list of this and other related community-contributed curricula can be found at the Calling Bull Instructors group at callingbull.qubeshub.org.
68
+
69
+ An emphasis of all courses taught by Digital and Computational Studies program faculty is the explicit attention to human rights and social justice, interrogating racism, sexism, bigotry, and other forms of exclusion in the design of digital spaces and algorithms. In this context, students have multiple opportunities to ask questions about how our assumptions and biases affect our data collection, models, and visualizations, as well as affect the interpretation and communication of our data analysis and model results.
70
+
71
+ Below, I present a module constructed for our version of the Calling Bull class, which uses the Jevin and West lecture videos and suggested readings as a foundation. These are supplemented with with other freely available resources - videos by Joy Buolamwini and a research paper by Garg et al. utilizing word embeddings (Garg et al., 2018) to develop a broader view of computing ethics and human rights beyond concerns driven by phrenology pseudoscience and to broaden the voices of those represented in the conversations about the ethics, process, uses and abuses of machine learning models. This module occurs about halfway through the semester, just before their mid-term independent project.
72
+
73
+ § 2. MACHINE LEARNING MODULE
74
+
75
+ The course is designed as a series of one-week modules with two 80-minute classes per week. However, I am presenting the most recent version, which was conducted online in a compressed 7-week format. All one-week modules were reallocated to a single 1-hour-45-minute synchronous session with additional asynchronous time. In the two-classes-per-week format, I would assign reading for the first day to introduce the topic, and then an R lab experience on the second day using that theme, typically based on a case study, which would be due at the end of the week along with a written reflection. The big data module is an exception to this general rule: the case studies are part of the thematic introduction, and the "lab" is in essence a racial equity intervention.
78
+
79
+ § 2.1. PRE-MODULE READING
80
+
81
+ Students are asked to watch or read the following items before class. With the exception of the research paper, all are from the Calling Bull website:
82
+
83
+ * Videos: Lecture 5 Big Data on YouTube
84
+
85
+ * Case Study: Criminal Machine Learning
86
+
87
+ * Case Study: Machine Learning about Sexual Orientation?
88
+
89
+ * Research Paper: Word embeddings quantify 100 years of gender and ethnic stereotypes (Garg et al., 2018).
90
+
91
+ Each YouTube "lecture" is broken into a handful of shorter videos, typically totaling less than one hour. In the Big Data Lecture 5 videos, Bergstrom and West introduce the basic idea of machine learning and how training models on biased data and assumptions can lead to faulty results. They then verbally debunk the validity of two papers which use machine learning algorithms on photographs to determine criminality and sexuality. These arguments are also presented as case studies on the Calling Bull website, and students are asked to read and comment on these and the Garg et al. research paper through the use of Perusall ${}^{2}$ (Perusall, 2021). The use of Perusall makes it possible for students to interact socially around the text before class. I am able to read these responses before class and make adaptive changes to the in-class lesson plan described below based on any misconceptions or student questions.
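Bergstrom and West's central claim here, that a model trained on a biased sample produces faulty results for the underrepresented group, can also be demonstrated live in a few lines. The sketch below is a hypothetical classroom illustration (not part of the course materials): it trains a classifier on synthetic data in which one group is scarce, then audits accuracy per group rather than overall.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, group):
    """Synthetic data whose label rule differs by group, standing in for
    group-dependent structure a model must learn from examples."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int) if group == "A" else (X[:, 1] > 0).astype(int)
    return X, y

# Biased training sample: group B is heavily underrepresented.
XA, yA = make_group(950, "A")
XB, yB = make_group(50, "B")
model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

# Balanced audit: score each group separately, not just overall accuracy.
XA_test, yA_test = make_group(500, "A")
XB_test, yB_test = make_group(500, "B")
acc_A = model.score(XA_test, yA_test)
acc_B = model.score(XB_test, yB_test)
print(f"group A accuracy: {acc_A:.2f}")  # high
print(f"group B accuracy: {acc_B:.2f}")  # far lower
```

The per-group audit mirrors the methodology students later see in the Gender Shades work: an aggregate accuracy number hides exactly the disparity the balanced evaluation reveals.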
92
+
93
+ The research paper uses word embeddings, which combine natural language processing and machine learning methods, to explore and expose stereotype bias as expressed through language over time (Garg et al., 2018). Unlike the questionable papers highlighted in the case studies, Garg et al. showcase cutting-edge interdisciplinary techniques, utilize critical quantitative inquiry, and expose the bias of machine learning models, all while exploring an interesting research question. This paper provides a disciplinary bridge between computer science and quantitative critical social science in which the use of machine learning as a research tool acts against racism and sexism instead of propagating it. The use of word embeddings to measure stereotype bias leverages the fact that machine learning is biased by the data (here, historical documents) with which it is presented. This expands the scope of the discussion from image classification for criminality and sexuality to using machine learning to find language associations that reveal historical perspectives on gender, race, and ethnicity. Finally, the visualizations presented in the results section are accessible to the audience, which at this point has completed a regression project with Pearson correlation coefficient interpretation.
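For instructors who want a concrete artifact of how embeddings surface such associations, the following sketch computes a relative-norm-style bias score in the spirit of Garg et al. on a handful of made-up toy vectors. The vectors and word list are invented purely for illustration; a real analysis would use trained embeddings such as those released with the paper.

```python
import numpy as np

# Toy 3-dimensional "embeddings" (illustrative values, not trained vectors).
vecs = {
    "she":       np.array([ 1.0,  0.1, 0.0]),
    "he":        np.array([-1.0,  0.1, 0.0]),
    "nurse":     np.array([ 0.8,  0.5, 0.1]),
    "engineer":  np.array([-0.7,  0.6, 0.2]),
    "librarian": np.array([ 0.6,  0.4, 0.3]),
}

def relative_norm_distance(word, g1="she", g2="he"):
    """Distance to g1 minus distance to g2, on unit-normalized vectors.

    Negative values mean `word` sits closer to g1 than to g2; this mirrors
    the relative-norm metric of Garg et al. (2018), computed here on
    made-up vectors for classroom illustration only."""
    w = vecs[word] / np.linalg.norm(vecs[word])
    a = vecs[g1] / np.linalg.norm(vecs[g1])
    b = vecs[g2] / np.linalg.norm(vecs[g2])
    return np.linalg.norm(w - a) - np.linalg.norm(w - b)

for occupation in ("nurse", "engineer", "librarian"):
    print(occupation, round(relative_norm_distance(occupation), 3))
```

Averaging this score over a word list (e.g. occupations) and tracking it across decade-specific embeddings is what lets the paper quantify how stereotyped associations shift over time.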
94
+
95
+ ${}^{2}$ Perusall is a collaborative annotation software. The Perusall grading algorithm itself is presented on the first day of the semester as an example of a black-box proprietary algorithm driven by machine learning, and so is brought up as a reprise in this class as well. Given the critique we make in class, I relax many of the conditions of the Perusall automated grading algorithm, and all scores less than ${100}\%$ are manually re-graded by a teaching assistant.
96
+
97
+ § 2.2. IN-CLASS DISCUSSION OF READINGS
98
+
99
+ On the whole, the class uses critical pedagogy, which seeks to reframe the students as knowledge holders and constructors (Freire, 2000). After a data visualization activity and sharing reflections from students on their learning from the previous week, we open with a discussion of the videos and readings for the day. I begin by crowdsourcing definitions for big data and data science; after narrowing in on a co-constructed definition, I then ask "What does it mean that data science is changing our methodologies?" We then discuss various approaches to doing science, such as hypothesis-driven, data-driven, and/or mathematical model-driven, using the Rule-of-Five framework (Diaz Eaton et al., 2019). This helps us derive a common understanding of language and approaches to research questions. The goal is to help students see an inclusive framework for modeling in which data science is one approach, decentering a particular idealization of any one approach in an effort to decolonize our discussion of methods.
100
+
101
+ We then break into groups ${}^{3}$ with the following prompt:
102
+
103
+ 1. List 5 major insights from videos
104
+
105
+ 2. How did these insights about big data manifest in the readings?
106
+
107
+ 3. Pick at least one of the 3 readings and post on the Jamboard (Google, 2021): which reading your group picked and two ways the big data concerns or potential bullshit issues manifest in that reading. In a face-to-face context, groups might write this on the whiteboard or give group-by-group report-outs. This is followed by a co-construction of a list of issues that emerge and how they manifest in the various readings. I then tie up any specific points that have not yet been mentioned in any of the case studies.
108
+
109
+ Most students understand that the phrenology studies presented in the Calling Bull case studies are "bullshit," because there is no reasonable biological basis for criminality or sexuality. They can also point to the sourcing of the images in the criminality case study as a source of bias. Students also typically arrive at the idea that sexuality is not a binary, which presents an issue for classification model output. The same can be asked about whether criminality should be subject to binary classification as well. Students also clearly understand the impact and potential legal issues surrounding the use of machine learning models and image classification to detect criminality. However, they are often unprepared when I ask "Why would someone want to classify someone's sexuality based on their face?"
110
+
111
+ To help prompt this discussion - because, often, uncomfortable silence follows - I next ask about places in the country or world in which having a sexual identity other than heterosexual, or acting on one's sexual identity, is considered illegal. As of May 2021, there are 69 countries in which homosexuality has been criminalized (BBC News Reality Check Team, 2021). Fifteen states in the United States do not offer full protections against discrimination based on sexuality (Wikipedia, 2021). This allows students to consider the implications of such technology, which re-opens the door to the criminality discussion, as we ask if there are avenues of scientific inquiry that should not be pursued for the sake of scientific curiosity, particularly when the evidence for such pursuits is non-existent.
112
+
113
+ In the case of the research paper, students may also readily make the connection between what they know about the history of gender and racial discrimination and bias in the United States and the findings in the paper. Depending on the time allotment and students' questions in the Perusall assignment, we might review one or two of the findings and associated visualizations together. Discussion of this paper sets us up well for the next module, which also focuses on gender and racial bias in training sets and in models trained on those sets.
114
+
115
+ § 2.3. THROUGH THE LENS OF JOY BUOLAMWINI
116
+
117
+ To launch the second phase of the class, I ask the following two questions:
118
+
119
+ * Where do you see examples of big data being used to benefit your life?
120
+
121
+ ${}^{3}$ I take care in constructing the groups for this module because of the sensitive nature of the content and discussions. When we are face-to-face, I let students choose groups; online, I tend to group students based on what I can discern from weekly reflections about their personal challenges, struggles, and understanding of social justice in computer science.
122
+
123
+ * Where do you see examples of data or algorithms that are biased?
124
+
125
+ This conversation adds to the overall uses-and-abuses discussion, but with an emphasis on personal impact. Depending on a student's axes of privilege or marginalization, they may have more positive or negative personal experiences with some of the automated decision-making models that run their lives, from Netflix recommendations and Google ads to credit approval and policing. This portion of the class is intended to provide insight into the continued invisibility of and disregard for Blackness in technology, as manifested by underrepresentation among faces in image training sets. My goal is for white students to develop both empathy and understanding related to this form of oppression and to identify ways to work towards greater justice in computing.
126
+
127
+ As we enter this difficult conversation space, I conduct a short grounding exercise and give permission for folx to move and breathe as needed, particularly folx of color. I introduce Joy Buolamwini as a graduate student and researcher at the Massachusetts Institute of Technology who founded the Algorithmic Justice League, and preface the series of three videos as an evolution of her scholarship with respect to algorithmic justice. In successive order, and with short individual reflection breaks, I show the following three video clips:
128
+
129
+ 1. TED Talk: The Coded Gaze (Buolamwini, 2021b)

+ 2. Gender Shades (Buolamwini, 2021c)

+ 3. Ain't I a Woman? (Buolamwini, 2021a)
134
+
135
+ I also emphasize that I appreciate the rich infusion of visual poetry alongside the computer science justice issue in the last piece, and mention that this is why Joy Buolamwini is called the "Poet of Code." After another short individual reflective break, I again ask students to discuss in groups the "take-home messages" of these videos.
136
+
137
+ Students never fail to mention the quote from the first video: "Who codes matters, how we code matters and why we code matters" (Buolamwini, 2021b). In many semesters, someone has asked about fixing the particular algorithm or the particular training data set. In addition to acknowledging the fix for the specific problem at hand, I steer them toward recognizing a systemic problem beyond the particular example. I have also had women, particularly Black women, so inspired by Joy Buolamwini that they decided to pursue a flavor of computer science for their major.
138
+
139
+ § 3. REFLECTIONS AND CONCLUSIONS
140
+
141
+ I want to reiterate that despite the discussion here of a "module," the entire course is structured in such a way that this discussion about marginalization in computing and data science is not a one-class event, but part of a broader quantitative critical inquiry arc in the class. The implementation of inclusive pedagogies such as the use of open-source software and educational resources, co-construction of knowledge, and attention to small-group power dynamics is meant to cultivate a supportive community learning journey that can foster this discourse.
144
+
145
+ Most of the attention to the benefits of Open Educational Resources focuses on their redistributive properties for social justice: shifting financial burdens away from students so that they can participate fully in the course experience. Lambert (2018) points out two additional axes of social justice for the classroom: recognitive justice, the intentional inclusion of diverse voices and viewpoints, and representational justice, which allows marginalized people to speak for themselves. While Bergstrom and West are entertaining and easy for students to follow, their worldviews are those of white (cis)men. The series of Joy Buolamwini videos helps accomplish a course experience in which recognitive and representational justice are also present, where a Black woman is both computer scientist and social justice activist, telling her story in her own words.
146
+
147
+ In preparation for facilitating such discussions in class, I point readers to scholars and scholarship in science and technology studies in addition to the resources above. Books such as Race after Technology (Benjamin, 2019), Algorithms of Oppression (Noble, 2018), and Weapons of Math Destruction (O'Neil, 2016) discuss these topics in greater detail. I also recommend discussing class facilitation around topics relating to race, gender, and sexuality with colleagues who teach courses in the social sciences and/or with one's office of equity and inclusion or intercultural education, where staff are professional facilitators of such conversations on campus.
148
+
149
+ Students will not leave this course with the ability to perform machine learning techniques. However, I argue that creating the foundation for critical discussions of such techniques will lead to deeper insight and better science when they reach courses that do teach these techniques. In addition, all corners of our data-driven economy are increasingly dependent on the results of such techniques. Therefore we have an obligation to teach computational approaches such as machine learning to audiences beyond computer science and data science students, including in our general education quantitative literacy courses. Likewise, these ethical discussions should not be relegated to a stand-alone ethics course, but should be infused throughout the computer science curriculum at all levels.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/knwKgaspObQ/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,121 @@
1
+ # A lesson for teaching fundamental Machine Learning concepts and skills to molecular biologists
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Machine learning represents an invaluable set of tools for the analysis of data in molecular biology as well as biomedicine. Here we present a training approach for teaching fundamental machine learning skills to researchers at an early career stage (PhD and postdoc level), with the aim of empowering them to apply these methods in their own research projects. The content was developed to be delivered in a short and intense learning period as part of a remote systems biology workshop, but can be adapted to other scenarios with a less restricted time frame.
8
+
9
+ ## 1. Introduction
10
+
11
+ ### 1.1. The need for machine learning skills in molecular biology research
12
+
13
+ With the rapidly growing amount of data in molecular biology, the application of machine learning methods becomes increasingly useful and necessary to translate data - e.g. resulting from high-throughput methods like 2nd- and 3rd-generation sequencing (Schmidt & Hildebrandt, 2021) or proteomics (Wen et al., 2020) - into biological insights.
14
+
15
+ Based on this development, we assume that a basic understanding of the concepts of machine learning is beneficial for researchers in molecular biology. This not only helps them to critically question available methods but also to implement their own machine-learning-based solutions to answer relevant research questions. Due to the availability of powerful yet comparatively easy-to-handle programming packages like scikit-learn (Buitinck et al., 2013), PyTorch (Paszke et al., 2019), TensorFlow (Abadi et al., 2015), and Keras (Chollet, 2015), machine learning methods have become accessible to non-experts.
16
+
17
+ ### 1.2. Learning outcomes and requirements
18
+
19
+ We have designed a dense training lesson with the aim of teaching fundamental knowledge of machine learning approaches as well as their application using the Python package scikit-learn, further supporting packages, and Jupyter Notebooks. All the software used for this is available under open source licenses. After attending the training, learners should be able to include machine-learning-based methods in their own research.
20
+
21
+ The lesson starts with an introduction to the distinction between supervised, unsupervised, and reinforcement learning. It then focuses on supervised learning methods and the topics of data cleaning, feature selection, feature encoding, scaling, model fitting, model evaluation, model comparison, cross-validation, and grid search. Concepts like over- and under-fitting, the curse of dimensionality, strengths and weaknesses of different approaches, as well as deep learning are discussed as part of the introduction. Equipped with such a basic understanding, learners should be able to extend their knowledge depending on their specific needs.
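Much of the supervised workflow listed above (scaling, model fitting, evaluation, cross-validation, grid search) compresses into a few lines of scikit-learn. The sketch below, on a toy dataset bundled with the library, reflects the coding style taught in the notebooks; the dataset and parameter grid are illustrative choices, not the course's actual exercise.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

# A pipeline couples scaling to the estimator, so cross-validation never
# leaks statistics of the held-out fold into the scaler.
pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])

# Grid search with 5-fold cross-validation over a small parameter grid.
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10],
                           "svm__gamma": ["scale", 0.01]}, cv=5)
grid.fit(X_train, y_train)

print("best parameters:", grid.best_params_)
print("held-out accuracy:", round(grid.score(X_test, y_test), 3))
```

Bundling the preprocessing and the model into one `Pipeline` object is the design choice emphasized in the lesson, since it keeps the whole fit/tune/evaluate cycle honest with a single `fit` call.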
22
+
23
+ As a requirement for this course, we expected participants to possess programming skills, ideally in Python, as well as a basic understanding of matrices and vectors, besides a strong background in molecular biology. Although a deeper understanding of machine learning requires solid mathematical foundations (Parr & Howard, 2018), the lesson was designed without requiring them.
24
+
25
+ ### 1.3. Methodology
26
+
27
+ The training programme was created as part of a systems biology workshop which was taught remotely in five day-long interactive sessions; the available time was therefore very limited. The lesson was broken down into several components (see Figure 1). Following a flipped classroom approach (Akçayır & Akçayır, 2018), the learners had access to prerecorded videos before the actual course. In this phase, theoretical foundations were delivered in a 45-minute video. The actual course started with a question-and-answer session of 30 minutes in which open questions regarding the theoretical introduction could be discussed. After that, the selected practical coding skills were taught. In this part, the teaching methods of "The Carpentries" (CAR; Pugachev, 2019), a community-driven organization that teaches basics of programming and data science skills to researchers and people in information-centric roles, were intensively applied. These methods include "Live Coding" elements, in which an instructor writes code and executes it in front of the participants. The participants type along and replicate the code themselves instead of being taught with static code examples. "The Carpentries" teaching methods also include collaborative teaching and continuous feedback. Therefore, in addition to the instructor, helpers are present to take questions from the learners and to support the instructor in conveying the content. The participants can ask questions at any time, and problems that occur are discussed openly with the whole group of learners. This approach converts errors into opportunities to extend comprehension and helps build mental models of the content efficiently.
28
+
29
+ ---
30
+
31
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
32
+
33
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
34
+
35
+ ---
36
+
37
+ ![01963ab2-0ec5-7efc-820d-e5936367794b_1_166_193_674_698_0.jpg](images/01963ab2-0ec5-7efc-820d-e5936367794b_1_166_193_674_698_0.jpg)
38
+
39
+ Figure 1. Components of the lesson
40
+
41
+ Due to the short duration of the actual interactive session, additional content covering further coding examples is provided as videos. Learners can, depending on their interest, deepen their skills at their own pace.
42
+
43
+ ## 2. Material
44
+
45
+ ### 2.1. Coding environment
46
+
47
+ For teaching the implementation of machine learning solutions in Python with a Live Coding approach, we have chosen Jupyter Notebooks (Kluyver et al., 2016) as a coding environment that follows the literate programming paradigm (Knuth, 1984). As is common in the teaching methodology of "The Carpentries", the learners start with an empty notebook which is successively extended under the guidance of the instructor. For each piece of code added, explanations are given and references to the theoretical foundation are made.
48
+
49
+ Table 1. Description of used data sets.
50
+
51
+ <table><tr><td>Data set name</td><td>Source</td></tr><tr><td>Breast Cancer</td><td>sklearn</td></tr><tr><td>Diabetes</td><td>sklearn</td></tr><tr><td>(Non-)RNA Binding Protein</td><td>Zhang & Liu, 2016</td></tr><tr><td>Cancer Cell Expression</td><td>Wang et al., 2014</td></tr></table>
52
+
53
+ ### 2.2. Packages
54
+
55
+ In order to empower the learners to implement their own machine-learning-based solutions efficiently, we have chosen commonly used open source Python libraries:
56
+
57
+ The scikit-learn package (Buitinck et al., 2013) was selected as the framework to teach machine learning methods due to its simplicity, strong community support and unified application programming interface (API) for classification and regression methods. Furthermore, it provides example data sets as well as numerous pre-processing and evaluation methods. Pandas (Wes McKinney, 2010) is the Swiss army knife of data science for reading, manipulating, writing and visualising (tabular) data. It provides the powerful "DataFrame" data structure and thereby helps to bring data into a format that can be processed by scikit-learn. NumPy (Harris et al., 2020) is a foundational library for storing multi-dimensional arrays and matrices in Python and offers methods to apply mathematical operations on them. The Biopython package (Cock et al., 2009) is a collection of several tools for computational biology. Among other features, it provides solutions to parse numerous formats, including FASTA. As we covered the classification of proteins in the lesson, we included the PyBioMed package (Dong et al., 2018), which offers several methods to encode amino acid sequences into numerical values.
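The interplay between these packages can be sketched in a few lines (with a hypothetical toy table, not one of the course data sets): pandas holds the labelled table, and the NumPy arrays extracted from it are exactly what scikit-learn estimators consume.

```python
# How the covered packages interact: pandas holds tabular data,
# NumPy provides the underlying arrays that scikit-learn consumes.
import numpy as np
import pandas as pd

# Hypothetical toy table standing in for a biological data set.
df = pd.DataFrame({
    "length": [120, 340, 95],
    "gc_content": [0.41, 0.55, 0.38],
    "label": [0, 1, 0],
})

X = df[["length", "gc_content"]].to_numpy()  # feature matrix for an estimator
y = df["label"].to_numpy()                   # target vector
print(X.shape, y.shape)  # (3, 2) (3,)
```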
58
+
59
+ ### 2.3. Data sets
60
+
61
+ For selecting example data sets we applied several criteria: the data set must be open, of moderate size to ensure quick processing during the course, well documented and biologically relevant. This limited our options significantly; nevertheless, we found adequate data sets which met our criteria (see Table 1):
62
+
63
+ The Breast Cancer data set is a standard machine learning data set available at the UCI Machine Learning Repository (Dua & Graff, 2017), and scikit-learn offers a function to access the data (as a "Bunch" object). Aside from meeting our main criteria, it was also selected for its simplicity. The data set contains 569 data points with 30 numerical attributes. It is labelled with two classes - malignant and benign - and the class labels are already encoded in the target vector. In this lesson it serves as the first example for classification.
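Loading this data set takes a single call; the shapes below match the numbers stated above (a minimal illustration, not part of the original lesson materials):

```python
# Loading the Breast Cancer data set: scikit-learn returns it as a
# "Bunch" object with data, target and metadata attributes.
from sklearn.datasets import load_breast_cancer

bunch = load_breast_cancer()
print(bunch.data.shape)    # (569, 30): 569 data points, 30 attributes
print(bunch.target_names)  # ['malignant' 'benign']
```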
64
+
65
+ The Diabetes Data Set is another widely used data set provided by the UCI Machine Learning Repository and is also included in the scikit-learn toy data set collection. It has 442 data points with 20 (mostly numerical) attributes and serves to teach regression in the lesson.
66
+
67
+ To demonstrate how classification can be performed on protein sequences, we combined an RNA-binding protein data set (2,780 sequences from 638 species) and a non-RNA-binding protein data set (7,093 sequences from 1,587 species), which were originally obtained by Zhang & Liu from UniProt and the Protein Data Bank, respectively.
68
+
69
+ Furthermore, we included a cancer expression data set which originated from The Cancer Genome Atlas (TCGA) and was compiled by Wang et al. for testing a similarity network fusion (SNF) aggregation method. The data set was modified and pre-processed by us to teach multi-class classification for 4 cancer types (breast, colon, glioblastoma multiforme (GBM) and lung). The resulting set contained 518 data points (samples) with 11,925 attributes (gene expression levels).
70
+
71
+ ## 3. Conclusion
72
+
73
+ Machine learning has become an essential tool for the analysis of data in molecular biology. We have created a compact, multi-modal course for teaching fundamental theoretical knowledge and practical skills with the aim to enable researchers to incorporate machine-learning-based solutions into their research.
74
+
75
+ All materials including the presentation used to deliver the theoretical part and the Jupyter Notebooks are Open Education Resources (OER) and available under a Creative Commons Attribution License. We have successfully taught the lesson and further improved it based on the feedback received.
76
+
77
+ We are considering converting the current content into a lesson as part of the incubator for "The Carpentries" to increase its visibility and lay the foundation for its sustainable extension.
78
+
79
+ ## References
80
+
81
+ The Carpentries. https://carpentries.org/. Accessed: 2021-06-25.
82
+
83
+ The Cancer Genome Atlas program. https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga. Accessed: 2021-06-27.
84
+
85
+ Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
86
+
87
+ Akçayır, G. and Akçayır, M. The flipped classroom: A review of its advantages and challenges. Computers & Education, 126:334-345, November 2018. doi: 10.1016/j.compedu.2018.07.021. URL https://doi.org/10.1016/j.compedu.2018.07.021.
88
+
89
+ Buitinck, L., Louppe, G., Blondel, M., Pedregosa, F., Mueller, A., Grisel, O., Niculae, V., Prettenhofer, P., Gramfort, A., Grobler, J., Layton, R., VanderPlas, J., Joly, A., Holt, B., and Varoquaux, G. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pp. 108-122, 2013.
90
+
91
+ Chollet, F. keras. https://github.com/fchollet/keras, 2015.
92
+
93
+ Cock, P. J. A., Antao, T., Chang, J. T., Chapman, B. A., Cox, C. J., Dalke, A., Friedberg, I., Hamelryck, T., Kauff, F., Wilczynski, B., and de Hoon, M. J. L. Biopython: freely available Python tools for computational molecular biology and bioinformatics. Bioinformatics, 25(11):1422-1423, mar 2009. doi: 10.1093/bioinformatics/btp163. URL https://doi.org/10.1093/bioinformatics/btp163.
94
+
95
+ Dong, J., Yao, Z.-J., Zhang, L., Luo, F., Lin, Q., Lu, A.-P., Chen, A. F., and Cao, D.-S. PyBioMed: a python library for various molecular representations of chemicals, proteins and DNAs and their interactions. Journal of Cheminformatics, 10(1), mar 2018. doi: 10.1186/s13321-018-0270-2. URL https://doi.org/10.1186/s13321-018-0270-2.
96
+
97
+ Dua, D. and Graff, C. UCI machine learning repository. 2017. URL http://archive.ics.uci.edu/ml.
98
+
99
+ Harris, C. R., Millman, K. J., van der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N. J., Kern, R., Picus, M., Hoyer, S., van Kerkwijk, M. H., Brett, M., Haldane, A., del Río, J. F., Wiebe, M., Peterson, P., Gérard-Marchant, P., Sheppard, K., Reddy, T., Weckesser, W., Abbasi, H., Gohlke, C., and Oliphant, T. E. Array programming with NumPy. Nature, 585(7825):357-362, September 2020. doi: 10.1038/s41586-020-2649-2. URL https://doi.org/10.1038/s41586-020-2649-2.
100
+
101
+ Kluyver, T., Ragan-Kelley, B., Pérez, F., Granger, B., Bussonnier, M., Frederic, J., Kelley, K., Hamrick, J., Grout, J., Corlay, S., Ivanov, P., Avila, D., Abdalla, S., and Willing, C. Jupyter Notebooks - a publishing format for reproducible computational workflows. In Loizides, F. and Schmidt, B. (eds.), Positioning and Power in Academic Publishing: Players, Agents and Agendas, pp. 87-90. IOS Press, 2016.
102
+
103
+ Knuth, D. E. Literate Programming. The Computer Journal, 27(2):97-111, January 1984. ISSN 0010-4620. doi: 10.1093/comjnl/27.2.97. URL https://doi.org/10.1093/comjnl/27.2.97.
104
+
105
+ Parr, T. and Howard, J. The matrix calculus you need for deep learning. arXiv:1802.01528 [cs, stat], Jul 2018. URL http://arxiv.org/abs/1802.01528.
106
+
107
+ Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019.
108
+
109
+ Pugachev, S. What are "the carpentries" and what are they doing in the library? Libraries and the Academy, 19(2):209-213, apr 2019. doi: 10.1353/pla.2019.0011. URL https://doi.org/10.1353/pla.2019.0011.
110
+
111
+ Schmidt, B. and Hildebrandt, A. Deep learning in next-generation sequencing. Drug Discovery Today, 26(1):173-180, January 2021. doi: 10.1016/j.drudis.2020.10.002. URL https://doi.org/10.1016/j.drudis.2020.10.002.
112
+
113
+ Wang, B., Mezlini, A. M., Demir, F., Fiume, M., Tu, Z., Brudno, M., Haibe-Kains, B., and Goldenberg, A. Similarity network fusion for aggregating data types on a genomic scale. Nature Methods, 11(3):333-337, jan 2014. doi: 10.1038/nmeth.2810. URL https://doi.org/10.1038/nmeth.2810.
116
+
117
+ Wen, B., Zeng, W.-F., Liao, Y., Shi, Z., Savage, S. R., Jiang, W., and Zhang, B. Deep learning in proteomics. PROTEOMICS, 20(21-22):1900335, October 2020. doi: 10.1002/pmic.201900335. URL https://doi.org/10.1002/pmic.201900335.
118
+
119
+ Wes McKinney. Data Structures for Statistical Computing in Python. In Stéfan van der Walt and Jarrod Millman (eds.), Proceedings of the 9th Python in Science Conference, pp. 56-61, 2010. doi: 10.25080/Majora-92bf1922-00a.
120
+
121
+ Zhang, X. and Liu, S. RBPPred: predicting RNA-binding proteins from sequence using SVM. Bioinformatics, pp. btw730, dec 2016. doi: 10.1093/bioinformatics/btw730. URL https://doi.org/10.1093/bioinformatics/btw730.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/knwKgaspObQ/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,89 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § A LESSON FOR TEACHING FUNDAMENTAL MACHINE LEARNING CONCEPTS AND SKILLS TO MOLECULAR BIOLOGISTS
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Machine Learning represents an invaluable set of tools for the analysis of data in molecular biology as well as bio-medicine. Here we present a training approach to teach fundamental machine learning skills to researchers in their early career stage (PhD and postdoc level) with the aim to empower them to apply these methods in their own research projects. The content was developed for delivery in a short and intense learning period as part of a remote systems biology workshop but can be adapted to other scenarios with a less restricted time frame.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ § 1.1. THE NEED FOR MACHINE LEARNING SKILLS IN MOLECULAR BIOLOGY RESEARCH
12
+
13
+ With the rapidly growing amount of data in molecular biology the application of machine learning methods becomes increasingly useful and necessary to translate data - e.g. resulting from high-throughput methods like 2nd and 3rd generation sequencing (Schmidt & Hildebrandt, 2021) or proteomics (Wen et al., 2020) - into biological insights.
14
+
15
+ Based on this development, we assume that a basic understanding of the concepts of machine learning is beneficial for researchers in molecular biology. This not only helps them to critically question available methods but also to implement their own machine-learning-based solutions to answer relevant research questions. Due to the availability of powerful yet comparatively easy-to-use programming packages like scikit-learn (Buitinck et al., 2013), PyTorch (Paszke et al., 2019), TensorFlow (Abadi et al., 2015) and Keras (Chollet, 2015), machine learning methods have become accessible to non-experts.
16
+
17
+ § 1.2. LEARNING OUTCOMES AND REQUIREMENTS
18
+
19
+ We have designed a dense training lesson that aims to teach fundamental knowledge of machine learning approaches as well as their application using the Python package scikit-learn, further supporting packages and Jupyter Notebooks. All the software used for this is available under open source licenses. After attending the training, learners should be able to incorporate machine-learning-based methods in their own research.
20
+
21
+ The lesson starts with an introduction to the distinction between supervised, unsupervised and reinforcement learning. It then focuses on supervised learning methods and the topics of data cleaning, feature selection, feature encoding, scaling, model fitting, model evaluation, model comparison, cross-validation and grid search. Concepts like over- and under-fitting, the curse of dimensionality, strengths and weaknesses of different approaches, as well as deep learning were discussed as part of the introduction. Equipped with such a basic understanding, learners should be able to extend their knowledge depending on their specific needs.
22
+
23
+ As prerequisites for this course, we expected participants to possess programming skills, ideally in Python, a basic understanding of matrices and vectors, and a strong background in molecular biology. Although a deeper understanding of machine learning requires solid mathematical foundations (Parr & Howard, 2018), the lesson was designed without requiring them.
24
+
25
+ § 1.3. METHODOLOGY
26
+
27
+ The training programme was created as part of a systems biology workshop which was taught remotely in five day-long interactive sessions; the available time was therefore very limited. The lesson was broken down into several components (see Figure 1). Following a flipped classroom approach (Akçayır & Akçayır, 2018), the learners had access to prerecorded videos before the actual course. In this phase, theoretical foundations were delivered in a 45-minute video. The actual course started with a question-and-answer session of 30 minutes in which open questions regarding the theoretical introduction could be discussed. After that, the selected practical coding skills were taught. In this part, the teaching methods of "The Carpentries" (CAR; Pugachev, 2019), a community-driven organization that teaches basics of programming and data science skills to researchers and people in information-centric roles, were intensively applied. These methods include "Live Coding" elements, in which an instructor writes code and executes it in front of the participants. The participants type along and replicate the code themselves instead of being taught with static code examples. "The Carpentries" teaching methods also include collaborative teaching and continuous feedback. Therefore, in addition to the instructor, helpers are present to take questions from the learners and to support the instructor in conveying the content. The participants can ask questions at any time, and problems that occur are discussed openly with the whole group of learners. This approach converts errors into opportunities to extend comprehension and helps build mental models of the content efficiently.
28
+
29
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
30
+
31
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
32
+
33
34
+
35
+ Figure 1. Components of the lesson
36
+
37
+ Due to the short duration of the actual interactive session, additional content covering further coding examples is provided as videos. Learners can, depending on their interest, deepen their skills at their own pace.
38
+
39
+ § 2. MATERIAL
40
+
41
+ § 2.1. CODING ENVIRONMENT
42
+
43
+ For teaching the implementation of machine learning solutions in Python with a Live Coding approach, we have chosen Jupyter Notebooks (Kluyver et al., 2016) as a coding environment that follows the literate programming paradigm (Knuth, 1984). As is common in the teaching methodology of "The Carpentries", the learners start with an empty notebook which is successively extended under the guidance of the instructor. For each piece of code added, explanations are given and references to the theoretical foundation are made.
44
+
45
+ Table 1. Description of used data sets.
46
+
47
+ Data set name | Source
+ Breast Cancer | sklearn
+ Diabetes | sklearn
+ (Non-)RNA Binding Protein | Zhang & Liu, 2016
+ Cancer Cell Expression | Wang et al., 2014
64
+
65
+ § 2.2. PACKAGES
66
+
67
+ In order to empower the learners to implement their own machine-learning-based solutions efficiently, we have chosen commonly used open source Python libraries:
68
+
69
+ The scikit-learn package (Buitinck et al., 2013) was selected as the framework to teach machine learning methods due to its simplicity, strong community support and unified application programming interface (API) for classification and regression methods. Furthermore, it provides example data sets as well as numerous pre-processing and evaluation methods. Pandas (Wes McKinney, 2010) is the Swiss army knife of data science for reading, manipulating, writing and visualising (tabular) data. It provides the powerful "DataFrame" data structure and thereby helps to bring data into a format that can be processed by scikit-learn. NumPy (Harris et al., 2020) is a foundational library for storing multi-dimensional arrays and matrices in Python and offers methods to apply mathematical operations on them. The Biopython package (Cock et al., 2009) is a collection of several tools for computational biology. Among other features, it provides solutions to parse numerous formats, including FASTA. As we covered the classification of proteins in the lesson, we included the PyBioMed package (Dong et al., 2018), which offers several methods to encode amino acid sequences into numerical values.
70
+
71
+ § 2.3. DATA SETS
72
+
73
+ For selecting example data sets we applied several criteria: the data set must be open, of moderate size to ensure quick processing during the course, well documented and biologically relevant. This limited our options significantly; nevertheless, we found adequate data sets which met our criteria (see Table 1):
74
+
75
+ The Breast Cancer data set is a standard machine learning data set available at the UCI Machine Learning Repository (Dua & Graff, 2017), and scikit-learn offers a function to access the data (as a "Bunch" object). Aside from meeting our main criteria, it was also selected for its simplicity. The data set contains 569 data points with 30 numerical attributes. It is labelled with two classes - malignant and benign - and the class labels are already encoded in the target vector. In this lesson it serves as the first example for classification.
76
+
77
+ The Diabetes Data Set is another widely used data set provided by the UCI Machine Learning Repository and is also included in the scikit-learn toy data set collection. It has 442 data points with 20 (mostly numerical) attributes and serves to teach regression in the lesson.
78
+
79
+ To demonstrate how classification can be performed on protein sequences, we combined an RNA-binding protein data set (2,780 sequences from 638 species) and a non-RNA-binding protein data set (7,093 sequences from 1,587 species), which were originally obtained by Zhang & Liu from UniProt and the Protein Data Bank, respectively.
80
+
81
+ Furthermore, we included a cancer expression data set which originated from The Cancer Genome Atlas (TCGA) and was compiled by Wang et al. for testing a similarity network fusion (SNF) aggregation method. The data set was modified and pre-processed by us to teach multi-class classification for 4 cancer types (breast, colon, glioblastoma multiforme (GBM) and lung). The resulting set contained 518 data points (samples) with 11,925 attributes (gene expression levels).
82
+
83
+ § 3. CONCLUSION
84
+
85
+ Machine learning has become an essential tool for the analysis of data in molecular biology. We have created a compact, multi-modal course for teaching fundamental theoretical knowledge and practical skills with the aim to enable researchers to incorporate machine-learning-based solutions into their research.
86
+
87
+ All materials including the presentation used to deliver the theoretical part and the Jupyter Notebooks are Open Education Resources (OER) and available under a Creative Commons Attribution License. We have successfully taught the lesson and further improved it based on the feedback received.
88
+
89
+ We are considering converting the current content into a lesson as part of the incubator for "The Carpentries" to increase its visibility and lay the foundation for its sustainable extension.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/m28wDC7B3kx/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,213 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Teaching Responsible Machine Learning to Engineers
2
+
3
+ ## Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ With the increasing application of machine learning models in practice, there is a growing need to incorporate ethical considerations in engineering curricula. In this paper, we reflect upon the development of a course on responsible machine learning for undergraduate engineering students. We found that technical material was relatively easy to grasp when it was directly linked to prior knowledge on machine learning. However, it was non-trivial for engineering students to make a deeper connection between real-world outcomes and ethical considerations such as fairness. Moving forward, we call upon educators to focus on the development of realistic case studies that invite students to interrogate the role of an engineer.
8
+
9
+ ## 1. Introduction
10
+
11
+ As machine learning models are increasingly applied in practice, there is a growing interest in the responsible development and use of these models. Although humanities scholars have studied the ethical implications of artificial intelligence for decades, the widespread application of machine learning techniques has opened up new avenues for studying the interaction between intelligent systems and society. At the same time, major machine learning venues have attracted manuscripts that address the technical challenges of formulating and achieving fairness and explainability in machine learning.
12
+
13
+ Within and across research communities there is an increased understanding that applying machine learning responsibly is a sociotechnical challenge that should be addressed from multidisciplinary perspectives (e.g., Raji et al., 2021). This sentiment is illustrated by the emergence of new cross-disciplinary conferences (most notably FAccT ${}^{1}$ and AIES ${}^{2}$ ) as well as specialized workshops (e.g., Bias and Fairness in AI (Calders et al., 2021)).
14
+
15
+ It seems imperative that practitioners understand in what ways machine learning models may pose ethical risks and how these risks can be mitigated. Indeed, there is a growing interest to incorporate responsible design in computer science education (Zegura et al., 2020; Raji et al., 2021; Fiesler et al., 2021). However, education on topics of fairness, accountability, confidentiality, and transparency (FACT) geared toward engineers is still in its infancy.
16
+
17
+ Responsible Machine Learning Education In some programs, ethical considerations are covered as a stand-alone course, emphasizing normative ethical theories. In other programs, ethics may be incorporated as a seminar following a more technical module. While these classes are valuable, they are at risk of divorcing ethical considerations from technical practice (Malazita & Resetar, 2019; Fiesler et al., 2021). As a result, students may have a hard time applying ethical considerations in their daily professional practice as an engineer (Fiesler et al., 2021).
18
+
19
+ Instead, we believe there is a need to teach responsible machine learning in a way that (1) encourages students to engage with ethical considerations of machine learning systems, (2) is applicable to the daily practice of engineers. With these goals in mind, we have designed a new course, Responsible Machine Learning (RML), at A University ${}^{3}$ , targeted primarily at final-stage undergraduate students majoring in either data science or computer science.
20
+
21
+ In this paper, we detail the instructional design of the course and reflect upon our experiences. Although RML covered various topics, we will limit our discussion mostly to teaching algorithmic fairness. In the remainder of this paper, we assume the reader is familiar with basic concepts of algorithmic fairness. A recent snapshot of the frontiers of fairness in machine learning research can be found in Chouldechova & Roth (2020).
22
+
23
+ Lessons Learned After the first course iterations, we found that technical material was relatively easy to grasp for the target audience of our course when it was directly linked to prior knowledge on machine learning. In particular, we found toy examples, demos, and tutorials to be useful tools to foster student understanding.
24
+
25
+ However, we have also noticed that it is non-trivial for engineering students to make a deeper connection between real-world outcomes and algorithmic fairness. One of the main challenges in teaching RML was to simplify a complex topic to facilitate understanding, without reducing it to a narrow, technical perspective. To this end, realistic and concrete case studies as well as invited lectures have proven to be helpful.
26
+
27
+ ---
28
+
29
+ ${}^{1}$ https://facctconference.org/
30
+
31
+ ${}^{2}$ https://www.aies-conference.com/
32
+
33
+ ${}^{3}$ University name and location are redacted to maintain anonymity.
34
+
35
+ ---
36
+
37
+ Moving Forward Despite the rising level of public and academic discourse, high-quality educational resources suitable for undergraduate engineering students are scarce. Moving forward, we call upon educators to develop more realistic and concrete case studies, allowing engineering students to connect ethical considerations and technical decision-making in a more meaningful way.
38
+
39
+ Outline The remainder of this paper is structured as follows. In Section 2, we describe our course design and reflect upon our experiences. In Section 3, we sketch paths for future work.
40
+
41
+ ## 2. Course Design
42
+
43
+ Following the principles of constructive alignment (Kandlbinder et al., 2014), our course design consists of three components: learning objectives, learning activities, and assessment. Due to COVID-19 restrictions, the course was taught fully online.
44
+
45
+ ### 2.1. Learning Objectives
46
+
47
+ RML is structured around four main themes: Fairness, Accountability, Confidentiality, and Transparency (FACT). Of these themes, fairness and transparency are covered most extensively. The learning objectives of the course were as follows.
48
+
49
+ ## At the end of the course, students will be able to:
50
+
51
+ 1. Evaluate and communicate trade-offs between (so-cio)technical desiderata of machine learning applications, taking into account diverse stakeholders' perspectives.
52
+
53
+ 2. Explain technical and organizational strategies for advancing FACT throughout the machine learning development process.
54
+
55
+ 3. Select and implement appropriate strategies for enhancing algorithmic fairness and interpretable/explainable machine learning.
56
+
57
+ We would like to highlight a few key aspects of these objectives. First of all, note that learning objective 1 emphasizes communicating trade-offs. In practice, even well-intentioned engineers can contribute to harmful technology through implicit design choices throughout the development process. By making trade-offs more explicit, it becomes possible to discuss them with other stakeholders and thereby foster accountability. Second, learning objective 1 emphasizes engaging diverse stakeholders, the importance of which has been stressed previously by e.g., Raji et al. (2021). Third, learning objective 2 highlights how different strategies can be applied throughout the machine learning development process - not just as an afterthought. And finally, learning objective 3 requires students to implement technical evaluation and mitigation strategies, marrying ethical considerations with the daily practice of an engineer.
60
+
61
+ ### 2.2. Teaching Materials
62
+
63
+ We have found that high-quality teaching materials geared towards undergraduate engineering students are scarce. Although there exist several graduate-level courses that cover FACT topics in a research seminar format, we consider this format less suitable for undergraduate engineering students. First of all, the target audience of our course may not be able to fully grasp highly technical papers. Moreover, critical position papers typically assume a level of familiarity with the research field that cannot be expected from the target audience of our course.
64
+
65
+ For RML, we have tried to fill this gap through the development of lectures, lecture notes, and tutorials ${}^{4}$ . Additionally, assigned reading included several chapters of Barocas et al. (2019) (an incomplete work in progress at the time).
66
+
67
+ #### 2.2.1. SYLLABUS
68
+
69
+ We start the course with an introduction to a responsible machine learning process, structured around the CRISP-DM process model (Wirth & Hipp, 2000). In accordance with learning objective 1, our introduction emphasizes the importance of the problem understanding stage. Is this the right problem to solve? Who are the stakeholders of the envisioned system? In particular, we exemplify different types of harm, structured by the moral values they violate (e.g., safety, fairness, transparency, autonomy).
70
+
71
+ The second module of the course revolves around fairness of machine learning algorithms and the challenges associated with this (learning objectives 2 and 3). We cover several fairness metrics and mitigation algorithms proposed by the machine learning community and discuss their limitations.
72
+
73
+ To facilitate a deeper understanding of the relationship between fairness and technical design choices, we have also developed several Jupyter notebook (Kluyver et al., 2016) tutorials revolving around a case study of ProPublica's analysis of COMPAS (Angwin et al., 2016), leveraging several modules of the Python library Fairlearn (Bird et al., 2020). Although these notebooks contain code, their primary purpose is to help students consider the applicability and limitations of fairness metrics and mitigation algorithms in a particular context.
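Fairlearn provides utilities for this kind of disaggregated evaluation; as a stand-alone illustration (not the course's actual notebook code), the sketch below computes two common per-group quantities by hand on hypothetical toy predictions, together with the demographic parity difference.

```python
# Hedged sketch with hypothetical data: per-group selection rate and
# false positive rate, as one might compute in a fairness tutorial.
from collections import defaultdict

def group_rates(y_true, y_pred, groups):
    """Per-group selection rate and false positive rate."""
    stats = defaultdict(lambda: {"n": 0, "pos_pred": 0, "neg": 0, "fp": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1           # group size
        s["pos_pred"] += p    # positive predictions
        if t == 0:
            s["neg"] += 1     # true negatives in the data
            s["fp"] += p      # false positives
    return {
        g: {
            "selection_rate": s["pos_pred"] / s["n"],
            "fpr": s["fp"] / s["neg"] if s["neg"] else float("nan"),
        }
        for g, s in stats.items()
    }

y_true = [1, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = group_rates(y_true, y_pred, groups)
# Demographic parity difference: the gap between group selection rates.
dpd = abs(rates["a"]["selection_rate"] - rates["b"]["selection_rate"])
```

On this toy data the two groups have equal false positive rates but unequal selection rates, which already lets students see that different metrics can disagree.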
74
+
75
+ ---
76
+
77
+ ${}^{4}$ Several of our teaching materials can be found on url-redacted-to-maintain-anonymity and in (Name, 2021).
78
+
79
+ ---
80
+
81
+ Additionally, invited lectures by researchers and practitioners helped to engage students with contemporary research discussions and showcase challenges data scientists might face in practice.
82
+
83
+ #### 2.2.2. FAIRNESS AS AN OPTIMIZATION PROBLEM
84
+
85
+ We found that connecting fairness metrics and algorithms with prior knowledge on machine learning helped students to understand technical details. In particular, the usage of toy examples, demos, and code tutorials seemed to increase student understanding.
86
+
87
+ Many techniques aimed at achieving fairness-by-design in machine learning can be framed in an optimization context (Zafar et al., 2019). Through this lens, the goal is to maintain good predictive performance while satisfying a number of group-level or individual fairness constraints. This can be achieved via several different techniques including fairness-aware representation learning (Zemel et al., 2013; Hu et al., 2020), model induction, model selection, regularization, or post-processing of specific (Kamiran et al., 2010) or any (Hardt et al., 2016) trained models or model outputs. If a student has recently learned about concepts such as cost-sensitive learning, it becomes easier to master these topics. Similarly, prior understanding of trade-offs between predictive performance metrics (e.g., precision and recall or ROC-curve analysis) helps to better understand other trade-offs, such as a fairness-accuracy trade-off or conflicting notions of fairness.
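The post-processing family can be illustrated with a deliberately simplified sketch: choosing a separate decision threshold per group so that each group's selection rate approaches a common target. This is a toy, demographic-parity-flavored simplification for teaching purposes, not the equalized-odds method of Hardt et al. (2016); all scores and group labels below are hypothetical.

```python
# Toy sketch of threshold post-processing: for each group, pick the score
# threshold whose resulting selection rate is closest to a common target.
def group_thresholds(scores, groups, target_rate):
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    thresholds = {}
    for g, vals in by_group.items():
        best_t, best_gap = None, float("inf")
        # Candidate thresholds: the observed scores (select all >= t).
        for t in sorted(set(vals)):
            rate = sum(v >= t for v in vals) / len(vals)
            gap = abs(rate - target_rate)
            if gap < best_gap:
                best_t, best_gap = t, gap
        thresholds[g] = best_t
    return thresholds

scores = [0.9, 0.8, 0.4, 0.3, 0.6, 0.5, 0.2, 0.1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
thresholds = group_thresholds(scores, groups, target_rate=0.5)
```

The sketch makes the optimization framing tangible: the model is left untouched and only the decision rule is adjusted per group, which is exactly the kind of design choice students are asked to defend.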
88
+
89
+ An exception to this is counterfactual fairness (Kusner et al., 2017). As the majority of engineering students have not previously studied the causal inference framework, it proved difficult to teach this notion of fairness in a compact way. However, we found that exemplifying Simpson's paradox can help to understand the relationship between interrelated features and notions of fairness.
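A hypothetical numeric instance of Simpson's paradox makes this concrete: within every department one group is admitted at the higher rate, yet the aggregate rates reverse because the groups apply to departments with very different acceptance rates. The counts below are invented for illustration.

```python
# Invented admissions counts illustrating Simpson's paradox:
# (admitted, applicants) per group and department.
data = {
    "dept_X": {"A": (80, 100), "B": (9, 10)},
    "dept_Y": {"A": (2, 10), "B": (30, 100)},
}

def rate(admitted, applicants):
    return admitted / applicants

# Per-department admission rates: B exceeds A in both departments.
per_dept = {d: {g: rate(*c) for g, c in gs.items()} for d, gs in data.items()}

# Aggregated over departments, the ordering flips: A exceeds B.
totals = {}
for gs in data.values():
    for g, (adm, app) in gs.items():
        a, n = totals.get(g, (0, 0))
        totals[g] = (a + adm, n + app)
overall = {g: rate(a, n) for g, (a, n) in totals.items()}
```

Group B is favored in each department (0.9 vs. 0.8, and 0.3 vs. 0.2), yet group A's overall rate (82/110) far exceeds B's (39/110) because most of A's applications go to the high-acceptance department.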
90
+
91
+ #### 2.2.3. FAIRNESS AS A SOCIOTECHNICAL CHALLENGE
92
+
93
+ As we will expand upon in Section 2.3, it was non-trivial for students to connect technical design choices and real-world outcomes. As such, one of the main challenges in developing teaching materials was to simplify a complex, sociotechnical challenge like fairness into something that can be understood by our course's target audience, without reducing it to a narrow, technical perspective.
94
+
95
+ For example, historical biases may be encoded in data, which can result in downstream allocation harms. While this is important to understand, reducing unfairness to "bias in, bias out" forgoes many more fundamental questions, such as whether a predictive model should exist at all. Similarly, after covering fairness metrics, an often-heard question is "which fairness metric should I use?". The answer to this question highly depends on the context of an application. However, this is hardly a satisfying answer. To some extent, reducing the complexity of these challenges through general frameworks seems unavoidable, but it risks leaving students with only a surface-level engagement with the context.
96
+
97
+ ### 2.3. Assessment
98
+
99
+ The assessment of RML consisted of three components: an individual assignment (20%), three quizzes (15%), and a final group project (65%). As most of our findings relate to the individual assignment and group project, we will limit our discussion to these.
100
+
101
+ #### 2.3.1. INDIVIDUAL ASSIGNMENT
102
+
103
+ In the individual assignment, students practiced identifying risks and balancing trade-offs of machine learning systems (learning objective 1). The assignment was inspired by Zegura et al. (2020), who developed two role playing activities in which students need to decide whether a specific artificial intelligence application should be deployed or not. In RML, the individual assignment was in the form of an individual report covering two scenarios, complemented by two group discussions.
104
+
105
+ The group discussions served to practice communicating trade-offs and exchanging views with peers. Trained primarily as engineers, many of our students were not familiar with instructional formats involving group discussions. To facilitate a fruitful discussion, we provided students with a suggested timing, meeting roles, and general discussion guidelines.
106
+
107
+ The majority of our students indicated they appreciated the group discussion format, as it allowed them to gain several new insights. This was reflected in their reports: most students were able to identify relevant stakeholders and high-level benefits and risks of the envisioned system. However, students had more difficulty with precisely formulating risks and mitigation strategies. For example, students would write that the system "should be fair for all patients" or "without bias against minority groups" without exemplifying what "fair" or "without bias" entailed in this specific scenario. Similarly, students sometimes had difficulty connecting (technical) design choices to the identified risks, reflected in ambiguous phrasing of how mitigation strategies might alleviate some of the risks.
108
+
109
+ #### 2.3.2. GROUP PROJECT
110
+
111
+ For the final assessment, we have taken a problem-based learning approach (De Graaf & Kolmos, 2003). In teams of five, students went through all stages of the machine learning development process (except for deployment) and implemented techniques for enhancing fairness and interpretability/explainability (learning objective 3).
114
+
115
+ The development of a suitable project was highly non-trivial. We believe that developing a realistic prototype is key for students to fully appreciate the challenges of responsible design from the perspective of an engineer. As such, we set out to find a suitable real-world data set accompanied by a realistic scenario. Fairness assessments involve sensitive data, which made it challenging to find an external partner who was willing to collaborate in the context of undergraduate course work. Additionally, benchmarking data sets that are routinely used in fairness research often lack the necessary context (e.g., the UCI Adult data set) or relate to contested applications of machine learning (e.g., ProPublica's COMPAS data set, see Bao et al. (2021) for a detailed account).
122
+
123
+ Eventually, we settled upon the MIMIC-Extract data set (Wang et al., 2020), a partly preprocessed data set built upon the freely accessible MIMIC-III critical care database (Johnson et al., 2016). The associated task was the development of an ICU mortality prediction model that could potentially be used as a decision-support tool for physicians. In the assignment, the tool was positioned as a potential alternative to the well-established Sequential Organ Failure Assessment (SOFA) scores.
124
+
125
+ By design, the assignment was relatively open-ended. Although the scenario hinted towards fairness and transparency, no explicit requirements were given. Instead, students were required to identify requirements through their analysis of the context. To emphasize the importance of the problem formulation, a large proportion of points was awarded to this part of the assignment (learning objective 1). To teach the importance of fostering accountability, students were also required to fill out a data sheet (Gebru et al., 2018) and model card (Mitchell et al., 2019). Finally, students were asked to reflect upon their findings and the (ethical) implications of the limitations of their developed model.
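To make these accountability artifacts concrete, a model card can be as simple as a structured document following (a subset of) the section headings proposed by Mitchell et al. (2019). The sketch below is a hypothetical skeleton with placeholder values, not a card produced in the course.

```python
# Hypothetical model-card skeleton; section headings follow a subset of
# Mitchell et al. (2019), field values are illustrative placeholders.
model_card = {
    "Model Details": "Logistic regression ICU mortality model (illustrative).",
    "Intended Use": "Decision support for physicians; not automated triage.",
    "Factors": "Performance disaggregated by, e.g., age group and sex.",
    "Metrics": "AUROC, calibration, per-group false negative rate.",
    "Evaluation Data": "Held-out split of MIMIC-Extract.",
    "Training Data": "MIMIC-Extract (Wang et al., 2020).",
    "Ethical Considerations": "Risk of over-reliance; historical treatment bias.",
    "Caveats and Recommendations": "External validation needed before use.",
}

def render(card):
    """Render the card as a simple markdown document."""
    return "\n".join(f"## {heading}\n{body}" for heading, body in card.items())
```

Filling in such a skeleton forces students to state explicitly which factors and metrics they chose and why, which is exactly where the reflection asked for in the assignment happens.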
126
+
127
+ From the course evaluation survey, it was clear that many students highly appreciated the project. We found that most groups were able to successfully apply various machine learning techniques, including fairness assessment and techniques for enhancing interpretability or explainability. However, similar to the individual assignment, some groups were not able to articulate the relevance of these approaches precisely in the given context. For example, students were able to successfully compute a set of fairness metrics, but did not explain clearly why they believed a specific fairness metric was applicable within the scenario.
128
+
129
130
+
131
132
+
133
+ ## 3. Moving Forward
134
+
135
+ With the design of RML, we set out to build a bridge between ethical and technical perspectives, in a way that speaks to engineers. In this paper, we have showcased our approach and reflected upon our experiences. However, much work remains to be done.
136
+
137
+ ### 3.1. Realistic Case Studies
138
+
139
+ Although there is an increasing number of examples that showcase how machine learning models can be harmful, it can be difficult for students to connect technical decision-making with ethical implications beyond surface-level observations. As such, we believe that realistic and concrete case studies are crucial to facilitate student learning.
140
+
141
+ However, it has proven difficult to develop these materials within the context of a single university course. Firstly, the sensitivity of fairness-related data sets makes it challenging to find external partners who are willing to collaborate. Secondly, publicly available data sets often lack the required contextualization, such as a datasheet (Gebru et al., 2018) or a realistic use case. Some of these issues might be alleviated through the use of carefully crafted synthetic data. However, this would still not allow students to engage with stakeholders' perspectives in a meaningful way and instead leave them to rely on their own assumptions about a scenario. One way forward would be to expand a case study not only with a description of the scenario, but also with direct input from (potentially fictional) stakeholders. For example, this could be organized as video-recorded interviews or written testimonials.
142
+
143
+ ### 3.2. Interrogating the Role of an Engineer
144
+
145
+ Ethical development of machine learning is a sociotechnical challenge that cannot be solved by engineers alone. In our view, engineering students should not be expected to be well-versed in all these different disciplines. Instead, we believe it is important to show students the limitations of the computer science lens and present concrete approaches to invite other perspectives.
146
+
147
+ We call on educators to develop more examples of multidisciplinary work that showcase the role of an engineer as well as other actors. For example, Raji et al. (2021) suggest developing frameworks to cooperate with peers from other disciplines and to engage with affected populations. At engineering universities, organizing team work with other disciplines can be impractical. A different way to reflect the importance of other disciplines in course work would be to give students the opportunity to consult external experts, possibly in the form of auxiliary context that is only provided on demand.
148
+
149
+ ## References
150
+
151
+ Angwin, J., Larson, J., Mattu, S., and Kirchner, L. Machine bias. ProPublica, May 23, 2016.
152
+
153
+ Bao, M., Zhou, A., Zottola, S., Brubach, B., Desmarais, S., Horowitz, A., Lum, K., and Venkatasubramanian, S. It's COMPASlicated: The messy relationship between RAI datasets and algorithmic fairness benchmarks, 2021.
154
+
155
+ Barocas, S., Hardt, M., and Narayanan, A. Fairness and Machine Learning. fairmlbook.org, 2019. URL http://www.fairmlbook.org.
156
+
157
+ Bird, S., Dudík, M., Edgar, R., Horn, B., Lutz, R., Milan, V., Sameki, M., Wallach, H., and Walker, K. Fairlearn: A toolkit for assessing and improving fairness in AI. Technical Report MSR-TR-2020-32, Microsoft, May 2020. URL https://www.microsoft.com/en-us/research/publication/fairlearn-a-toolkit-for-assessing-and-improving-fairness-in-ai/.
158
+
159
+ Calders, T., Ntoutsi, E., Pechenizkiy, M., Rosenhahn, B., and Ruggieri, S. Introduction to the special section on bias and fairness in AI. SIGKDD Explor., 23(1):1-3, 2021. doi: 10.1145/3468507.3468509. URL https://doi.org/10.1145/3468507.3468509.
160
+
161
+ Chouldechova, A. and Roth, A. A snapshot of the frontiers of fairness in machine learning. Commun. ACM, 63(5):82-89, April 2020. ISSN 0001-0782. doi: 10.1145/3376898. URL https://doi.org/10.1145/3376898.
162
+
163
+ De Graaf, E. and Kolmos, A. Characteristics of problem-based learning. International Journal of Engineering Education, 19(5):657-662, 2003.
164
+
165
+ Fiesler, C., Friske, M., Garrett, N., Muzny, F., Smith, J. J., and Zietz, J. Integrating Ethics into Introductory Programming Classes, pp. 1027-1033. Association for Computing Machinery, New York, NY, USA, 2021. ISBN 9781450380621. URL https://doi.org/10.1145/3408877.3432510.
166
+
167
+ Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., and Crawford, K. Datasheets for datasets. arXiv preprint arXiv:1803.09010, 2018.
168
+
169
+ Hardt, M., Price, E., and Srebro, N. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, NIPS 2016, pp. 3315-3323, 2016.
170
+
171
+ Hu, T., Iosifidis, V., Liao, W., Zhang, H., Yang, M. Y., Ntoutsi, E., and Rosenhahn, B. FairNN - conjoint learning of fair representations for fair decisions. In Proceedings of the 23rd International Conference on Discovery Science, DS 2020, volume 12323 of LNCS, pp. 581-595. Springer, 2020. doi: 10.1007/978-3-030-61527-7_38. URL https://doi.org/10.1007/978-3-030-61527-7_38.
176
+
177
+ Johnson, A. E. W., Pollard, T. J., Shen, L., Lehman, L.-W. H., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Celi, L. A., and Mark, R. G. MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1):1-9, 2016.
178
+
179
+ Kamiran, F., Calders, T., and Pechenizkiy, M. Discrimination aware decision tree learning. In Proceedings of the 10th IEEE International Conference on Data Mining, ICDM 2010, pp. 869-874. IEEE Computer Society, 2010. doi: 10.1109/ICDM.2010.50. URL https://doi.org/10.1109/ICDM.2010.50.
180
+
181
+ Kandlbinder, P. et al. Constructive alignment in university teaching. HERDSA News, 36(3):5, 2014.
182
+
183
184
+
185
+ Kluyver, T., Ragan-Kelley, B., Pérez, F., Granger, B., Bussonnier, M., Frederic, J., Kelley, K., Hamrick, J., Grout, J., Corlay, S., Ivanov, P., Avila, D., Abdalla, S., Willing, C., and the Jupyter development team. Jupyter notebooks - a publishing format for reproducible computational workflows. In Loizides, F. and Schmidt, B. (eds.), Positioning and Power in Academic Publishing: Players, Agents and Agendas, pp. 87-90. IOS Press, 2016. URL https://eprints.soton.ac.uk/403913/.
186
+
187
+ Kusner, M. J., Loftus, J., Russell, C., and Silva, R. Counterfactual fairness. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf.
190
+
191
+ Malazita, J. W. and Resetar, K. Infrastructures of abstraction: how computer science education produces anti-political subjects. Digital Creativity, 30(4):300-312, 2019. doi: 10.1080/14626268.2019.1682616. URL https://doi.org/10.1080/14626268.2019.1682616.
192
+
193
+ Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., and Gebru, T. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency, pp. 220-229, 2019.
194
+
195
+ Raji, I. D., Scheuerman, M. K., and Amironesei, R. You can't sit with us: Exclusionary pedagogy in AI ethics education. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pp. 515-525, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383097. doi: 10.1145/3442188.3445914. URL https://doi.org/10.1145/3442188.3445914.
198
+
199
+ Wang, S., McDermott, M. B. A., Chauhan, G., Ghassemi, M., Hughes, M. C., and Naumann, T. MIMIC-Extract. In Proceedings of the ACM Conference on Health, Inference, and Learning, April 2020. doi: 10.1145/3368555.3384469. URL http://dx.doi.org/10.1145/3368555.3384469.
200
+
201
+ Wirth, R. and Hipp, J. CRISP-DM: Towards a standard process model for data mining. In Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining, volume 1. Springer-Verlag London, UK, 2000.
202
+
203
+ Zafar, M. B., Valera, I., Gomez-Rodriguez, M., and Gummadi, K. P. Fairness constraints: A flexible approach for fair classification. Journal of Machine Learning Research, 20(75):1-42, 2019. URL http://jmlr.org/papers/v20/18-262.html.
204
+
205
+ Zegura, E., Borenstein, J., Shapiro, B., Meng, A., and Logevall, E. Embedding ethics in CS classes through role play. https://sites.gatech.edu/responsiblecomputerscience/, 2020.
206
+
207
+ Zemel, R., Wu, Y., Swersky, K., Pitassi, T., and Dwork, C. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, volume 28, pp. 325-333. PMLR, 2013. URL http://proceedings.mlr.press/v28/zemel13.html.
208
+
209
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/m28wDC7B3kx/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,139 @@
1
+ § TEACHING RESPONSIBLE MACHINE LEARNING TO ENGINEERS
2
+
3
+ § ANONYMOUS AUTHORS ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ With the increasing application of machine learning models in practice, there is a growing need to incorporate ethical considerations in engineering curricula. In this paper, we reflect upon the development of a course on responsible machine learning for undergraduate engineering students. We found that technical material was relatively easy to grasp when it was directly linked to prior knowledge on machine learning. However, it was non-trivial for engineering students to make a deeper connection between real-world outcomes and ethical considerations such as fairness. Moving forward, we call upon educators to focus on the development of realistic case studies that invite students to interrogate the role of an engineer.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ As machine learning models are increasingly applied in practice, there is a growing interest in the responsible development and use of these models. Although humanities scholars have studied the ethical implications of artificial intelligence for decades, the widespread application of machine learning techniques has opened up new avenues for studying the interaction between intelligent systems and society. At the same time, major machine learning venues have attracted manuscripts that address the technical challenges of formulating and achieving fairness and explainability in machine learning.
12
+
13
+ Within and across research communities there is an increased understanding that applying machine learning responsibly is a sociotechnical challenge that should be addressed from multidisciplinary perspectives (e.g., Raji et al., 2021). This sentiment is illustrated by the emergence of new cross-disciplinary conferences (most notably FAccT ${}^{1}$ and AIES ${}^{2}$ ) as well as specialized workshops (e.g., Bias and Fairness in AI (Calders et al., 2021)).
14
+
15
+ It seems imperative that practitioners understand in what ways machine learning models may pose ethical risks and how these risks can be mitigated. Indeed, there is a growing interest to incorporate responsible design in computer science education (Zegura et al., 2020; Raji et al., 2021; Fiesler et al., 2021). However, education on topics of fairness, accountability, confidentiality, and transparency (FACT) geared toward engineers is still in its infancy.
16
+
17
+ Responsible Machine Learning Education In some programs, ethical considerations are covered as a stand-alone course, emphasizing normative ethical theories. In other programs, ethics may be incorporated as a seminar following a more technical module. While these classes are valuable, they are at risk of divorcing ethical considerations from technical practice (Malazita & Resetar, 2019; Fiesler et al., 2021). As a result, students may have a hard time applying ethical considerations in their daily professional practice as an engineer (Fiesler et al., 2021).
18
+
19
+ Instead, we believe there is a need to teach responsible machine learning in a way that (1) encourages students to engage with ethical considerations of machine learning systems, and (2) is applicable to the daily practice of engineers. With these goals in mind, we have designed a new course, Responsible Machine Learning (RML), at A University ${}^{3}$ , targeted primarily at final-stage undergraduate students majoring in either data science or computer science.
20
+
21
+ In this paper, we detail the instructional design of the course and reflect upon our experiences. Although RML covered various topics, we will limit our discussion mostly to teaching algorithmic fairness. In the remainder of this paper, we assume the reader is familiar with basic concepts of algorithmic fairness. A recent snapshot of the frontiers of fairness in machine learning research can be found in Chouldechova & Roth (2020).
22
+
23
+ Lessons Learned After the first course iterations, we found that technical material was relatively easy to grasp for the target audience of our course when it was directly linked to prior knowledge on machine learning. In particular, we found toy examples, demos, and tutorials to be useful tools to foster student understanding.
24
+
25
+ However, we have also noticed that it is non-trivial for engineering students to make a deeper connection between real-world outcomes and algorithmic fairness. One of the main challenges in teaching RML was to simplify a complex topic to facilitate understanding, without reducing it to a narrow, technical perspective. To this end, realistic and concrete case studies as well as invited lectures have proven to be helpful.
26
+
27
+ ${}^{1}$ https://facctconference.org/
28
+
29
+ ${}^{2}$ https://www.aies-conference.com/
30
+
31
+ ${}^{3}$ University name and location are redacted to maintain anonymity.
32
+
33
+ Moving Forward Despite the rising level of public and academic discourse, high-quality educational resources suitable for undergraduate engineering students are scarce. Moving forward, we call upon educators to develop more realistic and concrete case studies, allowing engineering students to connect ethical considerations and technical decision-making in a more meaningful way.
34
+
35
+ Outline The remainder of this paper is structured as follows. In Section 2, we describe our course design and reflect upon our experiences. In Section 3, we sketch paths for future work.
36
+
37
+ § 2. COURSE DESIGN
38
+
39
+ Following the principles of constructive alignment (Kandlbinder et al., 2014), our course design consists of three components: learning objectives, learning activities, and assessment. Due to COVID-19 restrictions, the course was taught fully online.
40
+
41
+ § 2.1. LEARNING OBJECTIVES
42
+
43
+ RML is structured around four main themes: Fairness, Accountability, Confidentiality, and Transparency (FACT). Of these themes, fairness and transparency are covered most extensively. The learning objectives of the course were as follows.
44
+
45
+ At the end of the course, students will be able to:
46
+
47
+ 1. Evaluate and communicate trade-offs between (socio)technical desiderata of machine learning applications, taking into account diverse stakeholders' perspectives.
48
+
49
+ 2. Explain technical and organizational strategies for advancing FACT throughout the machine learning development process.
50
+
51
+ 3. Select and implement appropriate strategies for enhancing algorithmic fairness and interpretable/explainable machine learning.
52
+
53
+ We would like to highlight a few key aspects of these objectives. First of all, note that learning objective 1 emphasizes communicating trade-offs. In practice, even well-intentioned engineers can contribute to harmful technology through implicit design choices throughout the development process. By making trade-offs more explicit, it becomes possible to discuss them with other stakeholders and thereby foster accountability. Second, learning objective 1 emphasizes engaging diverse stakeholders, the importance of which has been stressed previously by e.g., Raji et al. (2021). Third, learning objective 2 highlights how different strategies can be applied throughout the machine learning development process - not just as an afterthought. And finally, learning objective 3 requires students to implement technical evaluation and mitigation strategies, marrying ethical considerations with the daily practice of an engineer.
56
+
57
+ § 2.2. TEACHING MATERIALS
58
+
59
+ We have found that high-quality teaching materials geared towards undergraduate engineering students are scarce. Although there exist several graduate-level courses that cover FACT topics in a research seminar format, we consider this format less suitable for undergraduate engineering students. First of all, the target audience of our course may not be able to fully grasp highly technical papers. Moreover, critical position papers typically assume a level of familiarity with the research field that cannot be expected from the target audience of our course.
60
+
61
+ For RML, we have tried to fill this gap through the development of lectures, lecture notes, and tutorials ${}^{4}$ . Additionally, assigned reading included several chapters of Barocas et al. (2019) (an incomplete work in progress at the time).
62
+
63
+ § 2.2.1. SYLLABUS
64
+
65
+ We start the course with an introduction to a responsible machine learning process, structured around the CRISP-DM process model (Wirth & Hipp, 2000). In accordance with learning objective 1, our introduction emphasizes the importance of the problem understanding stage. Is this the right problem to solve? Who are the stakeholders of the envisioned system? In particular, we exemplify different types of harm, structured by the moral values they violate (e.g., safety, fairness, transparency, autonomy).
66
+
67
+ The second module of the course revolves around fairness of machine learning algorithms and the challenges associated with this (learning objectives 2 and 3). We cover several fairness metrics and mitigation algorithms proposed by the machine learning community and discuss their limitations.
68
+
69
+ To facilitate a deeper understanding of the relationship between fairness and technical design choices, we have also developed several Jupyter notebook (Kluyver et al., 2016) tutorials revolving around a case study of ProPublica's analysis of COMPAS (Angwin et al., 2016), leveraging several modules of the Python library Fairlearn (Bird et al., 2020). Although these notebooks contain code, their primary purpose is to help students consider the applicability and limitations of fairness metrics and mitigation algorithms in a particular context.
70
+
71
+ ${}^{4}$ Several of our teaching materials can be found on url-redacted-to-maintain-anonymity and in (Name, 2021).
72
+
73
+ Additionally, invited lectures by researchers and practitioners helped to engage students with contemporary research discussions and showcase challenges data scientists might face in practice.
74
+
75
+ § 2.2.2. FAIRNESS AS AN OPTIMIZATION PROBLEM
76
+
77
+ We found that connecting fairness metrics and algorithms with prior knowledge on machine learning helped students to understand technical details. In particular, the usage of toy examples, demos, and code tutorials seemed to increase student understanding.
78
+
79
+ Many techniques aimed at achieving fairness-by-design in machine learning can be framed in an optimization context (Zafar et al., 2019). Through this lens, the goal is to maintain good predictive performance while satisfying a number of group-level or individual fairness constraints. This can be achieved via several different techniques including fairness-aware representation learning (Zemel et al., 2013; Hu et al., 2020), model induction, model selection, regularization, or post-processing of specific (Kamiran et al., 2010) or any (Hardt et al., 2016) trained models or model outputs. If a student has recently learned about concepts such as cost-sensitive learning, it becomes easier to master these topics. Similarly, prior understanding of trade-offs between predictive performance metrics (e.g., precision and recall or ROC-curve analysis) helps to better understand other trade-offs, such as a fairness-accuracy trade-off or conflicting notions of fairness.
80
+
81
+ An exception to this is counterfactual fairness (Kusner et al., 2017). As the majority of engineering students have not previously studied the causal inference framework, it proved difficult to teach this notion of fairness in a compact way. However, we found that exemplifying Simpson's paradox can help to understand the relationship between interrelated features and notions of fairness.
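The kind of Simpson's paradox example mentioned above can be made concrete with a few invented numbers (a hypothetical admissions scenario, not real data): group B has the higher acceptance rate in every department, yet the lower rate overall, because the two groups apply to the departments at very different rates.

```python
# Hypothetical (accepted, applied) counts per department for two groups,
# constructed so that per-department and aggregate comparisons disagree.
data = {
    "A": {"dept1": (80, 100), "dept2": (2, 10)},
    "B": {"dept1": (9, 10),   "dept2": (30, 100)},
}

def rate(accepted, applied):
    return accepted / applied

per_dept = {g: {d: rate(*data[g][d]) for d in data[g]} for g in data}
overall = {g: rate(sum(a for a, _ in data[g].values()),
                   sum(n for _, n in data[g].values())) for g in data}

# B beats A within each department...
assert per_dept["B"]["dept1"] > per_dept["A"]["dept1"]
assert per_dept["B"]["dept2"] > per_dept["A"]["dept2"]
# ...yet A beats B in the aggregate (82/110 vs. 39/110).
assert overall["A"] > overall["B"]
```

Working through such numbers helps students see why fairness conclusions can flip depending on whether interrelated features (here, the department) are conditioned on.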
82
+
83
+ § 2.2.3. FAIRNESS AS A SOCIOTECHNICAL CHALLENGE
84
+
85
+ As we will expand upon in Section 2.3, it was non-trivial for students to connect technical design choices and real-world outcomes. As such, one of the main challenges in developing teaching materials was to simplify a complex, sociotechnical challenge like fairness into something that can be understood by our course's target audience, without reducing it to a narrow, technical perspective.
86
+
87
+ For example, historical biases may be encoded in data, which can result in downstream allocation harms. While this is important to understand, reducing unfairness to "bias in, bias out" forgoes many more fundamental questions, such as whether a predictive model should exist at all. Similarly, after covering fairness metrics, an often-heard question is "which fairness metric should I use?". The answer highly depends on the context of an application, which is hardly a satisfying answer. To some extent, reducing the complexity of these challenges through general frameworks seems unavoidable, but it risks leaving students with only surface-level engagement with a given context.
88
+
89
+ § 2.3. ASSESSMENT
90
+
91
+ The assessment of RML consisted of three components: an individual assignment (20%), three quizzes (15%), and a final group project (65%). As most of our findings relate to the individual assignment and group project, we will limit our discussion to these.
92
+
93
+ § 2.3.1. INDIVIDUAL ASSIGNMENT
94
+
95
+ In the individual assignment, students practiced identifying risks and balancing trade-offs of machine learning systems (learning objective 1). The assignment was inspired by Zegura et al. (2020), who developed two role-playing activities in which students need to decide whether a specific artificial intelligence application should be deployed or not. In RML, the individual assignment took the form of an individual report covering two scenarios, complemented by two group discussions.
96
+
97
+ The group discussions served to practice communicating trade-offs and exchanging views with peers. Trained primarily as engineers, many of our students were not familiar with instructional formats involving group discussions. To facilitate a fruitful discussion, we provided students with a suggested timing, meeting roles, and general discussion guidelines.
98
+
99
+ The majority of our students indicated they appreciated the group discussion format, as it allowed them to gain several new insights. This was reflected in their reports: most students were able to identify relevant stakeholders and high-level benefits and risks of the envisioned system. However, students had more difficulty with precisely formulating risks and mitigation strategies. For example, students would write that the system "should be fair for all patients" or "without bias against minority groups" without specifying what "fair" or "without bias" entailed in this specific scenario. Similarly, students sometimes had difficulty connecting (technical) design choices to the identified risks, reflected in ambiguous phrasing of how mitigation strategies might alleviate some of the risks.
100
+
101
+ § 2.3.2. GROUP PROJECT
102
+
103
+ For the final assessment, we have taken a problem-based learning approach (De Graaf & Kolmos, 2003). In teams of five, students went through all stages of the machine learning development process (except for deployment) and implemented techniques for enhancing fairness and interpretability/explainability (learning objective 3).
104
+
105
+ The development of a suitable project was highly non-trivial. We believe that developing a realistic prototype is key for students to fully appreciate the challenges of responsible design from the perspective of an engineer. As such, we set out to find a suitable real-world data set accompanied by a realistic scenario. Fairness assessments involve sensitive data, which made it challenging to find an external partner who was willing to collaborate in the context of undergraduate course work. Additionally, benchmarking data sets that are routinely used in fairness research often lack the necessary context (e.g., the UCI Adult data set) or relate to contested applications of machine learning (e.g., ProPublica's COMPAS data set; see Bao et al. (2021) for a detailed account).
114
+
115
+ Eventually, we settled upon the MIMIC-Extract data set (Wang et al., 2020), a partly preprocessed data set built upon the freely accessible critical care database (Johnson et al., 2016). The associated task was the development of an ICU mortality prediction model that could potentially be used as a decision-support tool for physicians. In the assignment, the tool was positioned as a potential alternative to the well-established Sequential Organ Failure Assessment (SOFA) scores.
116
+
117
+ By design, the assignment was relatively open-ended. Although the scenario hinted towards fairness and transparency, no explicit requirements were given. Instead, students were required to identify requirements through their analysis of the context. To emphasize the importance of the problem formulation, a large proportion of points was awarded to this part of the assignment (learning objective 1). To teach the importance of fostering accountability, students were also required to fill out a datasheet (Gebru et al., 2018) and model card (Mitchell et al., 2019). Finally, students were asked to reflect upon their findings and explicitly consider the (ethical) implications of the limitations of their developed model.
118
+
119
+ From the course evaluation survey, it was clear that many students highly appreciated the project. We found that most groups were able to successfully apply various machine learning techniques, including fairness assessment and techniques for enhancing interpretability or explainability. However, similar to the individual assignment, some groups were not able to articulate the relevance of these approaches precisely in the given context. For example, students were able to successfully compute a set of fairness metrics, but did not explain clearly why they believed a specific fairness metric was applicable within the scenario.
120
+
125
+ § 3. MOVING FORWARD
126
+
127
+ With the design of RML, we set out to build a bridge between ethical and technical perspectives, in a way that speaks to engineers. In this paper, we have showcased our approach and reflected upon our experiences. However, much work remains to be done.
128
+
129
+ § 3.1. REALISTIC CASE STUDIES
130
+
131
+ Although there is an increasing number of examples that showcase how machine learning models can be harmful, it can be difficult for students to connect technical decision-making with ethical implications beyond surface-level observations. As such, we believe that realistic and concrete case studies are crucial to facilitate student learning.
132
+
133
+ However, it has proven difficult to develop these materials within the context of a single university course. Firstly, the sensitivity of fairness-related data sets makes it challenging to find external partners who are willing to collaborate. Secondly, publicly available data sets often lack the required contextualization, such as a datasheet (Gebru et al., 2018) or a realistic use case. Some of these issues might be alleviated through the use of carefully crafted synthetic data. However, this would still not allow students to engage with stakeholders' perspectives in a meaningful way and would instead leave them to rely on their own assumptions about a scenario. One way forward would be to expand a case study not only with a description of the scenario, but also with direct input from (potentially fictional) stakeholders. For example, this could be organized as video-recorded interviews or written testimonials.
134
+
135
+ § 3.2. INTERROGATING THE ROLE OF AN ENGINEER
136
+
137
+ Ethical development of machine learning is a sociotechnical challenge that cannot be solved by engineers alone. In our view, engineering students should not be expected to be well-versed in all these different disciplines. Instead, we believe it is important to show students the limitations of the computer science lens and present concrete approaches to invite other perspectives.
138
+
139
+ We call on educators to develop more examples of multidisciplinary work that showcase the role of an engineer as well as other actors. For example, Raji et al. (2021) suggest developing frameworks to cooperate with peers from other disciplines and to engage with affected populations. At engineering universities, organizing teamwork with other disciplines can be impractical. A different way to reflect the importance of other disciplines in course work would be to give students the opportunity to consult external experts, possibly in the form of auxiliary context that is only provided on demand.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/mdiNjHHdoQ7/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,165 @@
1
+ # Developing Open Source Educational Resources for Machine Learning and Data Science
2
+
3
+ ## Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Education should not be a privilege but a common good. It should be openly accessible to everyone, with as few barriers as possible; even more so for key technologies such as Machine Learning (ML) and Data Science (DS). Open Educational Resources (OER) are a crucial factor for greater educational equity. In this paper, we describe the specific requirements for OER in ML and DS and argue that it is especially important for these fields to make source files publicly available, leading to Open Source Educational Resources (OSER). We present our view on the collaborative development of OSER, the challenges this poses, and first steps towards their solutions. We outline how OSER can be used for blended learning scenarios and share our experiences in university education. Finally, we discuss additional challenges such as credit assignment or granting certificates.
8
+
9
+ ## 1. Introduction
10
+
11
+ Education is of paramount importance to overcome social inequalities. Globalization and broad access to the internet provide a major opportunity for allowing many more people from all over the world to access high quality educational resources. We endorse the vision of the Open Education Global initiative (OEG, a) and believe that teaching materials developed with the support of public financial resources should be openly accessible for the general public. The initiative aims at providing Open Educational Resources (OER) (OEG, b) which come with the '5R' permissions to retain, reuse, revise, remix, and redistribute the material. UNESCO strongly promotes the concept of OER (UNESCO), has held two world congresses on OER, in 2012 and 2017, and finally adopted a recommendation on OER (UNESCO, 2019). This recommendation is claimed to be
12
+
13
+ 'the only existing international standard-setting instrument on OER' (UNESCO).
14
+
15
+ A variety of Massive Open Online Courses (MOOCs) have been created in the fields of Machine Learning (ML) and Data Science (DS). These MOOCs are mostly offered by commercial platforms (e.g., Ng, 2021; Boitsev et al., 2021; Malone & Thrun, 2021; Google, 2021; Sulmont et al., 2021; Eremenko & de Ponteves, 2021) but also by university-owned platforms (e.g., MIT, 2021).
16
+
17
+ Although the material itself is often freely accessible, access to the sources that are needed to reproduce, modify and reuse the material is usually not provided. Only a small fraction of courses in ML and DS actually share all their sources, such as slide sources in .pptx or LaTeX and source code for plots, examples, and exercises. We call those 'Open Source Educational Resources' (OSER) to underline this feature; positive examples include Montani (2021); Çetinkaya-Rundel (2021a;b); Vanschoren (2021).
18
+
19
+ Direct benefits of OSER from the perspective of lecturers include: 1. The material will often be of higher quality if additional experts are able to contribute and improve the material. 2. It is more efficient to develop a new course, since material from previously created courses can legally be adapted and re-used. 3. Starting from an established OSER course, lecturers can focus on developing additional chapters and tailoring existing material to their audience.
20
+
21
+ We believe there will never be the one and only course on a certain topic - in fact, diversity in how topics are taught is important. Reasons include different constraints on the volume of a course defined by an institution's curriculum, different backgrounds (in the context of ML and DS courses, e.g., statistics, mathematics, computer science), different types of institutions (e.g., university vs. continuing education, undergraduate vs. (post)graduate), different substantive focus, and sometimes simply different styles. On the other hand, it seems natural that teachers of many subjects should be able to find networks of peers among which a considerable amount of content can be similar, and we advocate that it is only sensible and efficient to share, reuse, and collaboratively improve teaching resources in such cases. But even once such a network of peers with a shared interest in a certain topic is established, usage of the material is often complicated and less straightforward, as adaptations and modifications are still necessary to accommodate the different contexts and constraints of each institution the teachers work at. The easier and more natural such (reasonable and common) modifications are, the more likely it is that like-minded developers and teachers can agree to form a networked team. In our view, there are six different use cases (UC) in this context, which can be classified as usage of the material and contribution to the material. Usage consists of (UC1) usage of the material without any modifications for teaching, (UC2) usage of a subset of the material for a smaller course, and (UC3) development of a somewhat different course where the existing material is used as a starting point. Contribution consists of (UC4) correcting errors, (UC5) improving the existing material, and (UC6) adding new material, which leads to a larger OSER collection.
In our experience, sharing teaching material is much more complex than just publishing lecture slides on the web. In this paper, we describe which core principles of developing OSER in general and, specifically, for teaching ML and DS should be considered, share our experiences of applying them, and point towards open questions and challenges.
22
+
23
+ ---
24
+
25
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
26
+
27
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
28
+
29
+ ---
30
+
31
+ ## 2. OSER and Machine Learning
32
+
33
+ In this article, we discuss OSER from the perspective of teaching ML and DS. Standard material in ML and DS naturally includes theoretical components introducing mathematical concepts, methods and algorithms, typically presented via lecture slides and accompanying videos, as well as practical components such as code demos, which are important to allow for hands-on experience. In contrast to many other disciplines: 1. ML's strong foundation in statistics and algorithms allows us to define and illustrate many concepts via (pseudo-)code; 2. Large, open data repositories (such as OpenML (Vanschoren et al., 2013) or ImageNet (Deng et al., 2009)) allow students to obtain hands-on experience on many different applications; 3. Many state-of-the-art ML and DS packages (such as scikit-learn (Pedregosa et al., 2011), mlr3 (Lang et al., 2019), caret (Kuhn et al., 2008), tensorflow (Abadi et al., 2015) or pytorch (Paszke et al., 2019)) are open-source and freely available so that students can directly learn to use libraries and frameworks that are also relevant in their future jobs; 4. Many concepts, algorithms, data characteristics and empirical results can be nicely visualized, and this often happens through short coded examples using the mentioned ML toolkits and open data repositories; 5. Gamification via competitions (Kapp, 2012) is possible since running experiments is (comparably) cheap, datasets are available and existing platforms, such as Kaggle InClass, provide the necessary infrastructure and have shown improved learning outcomes (Polak & Cook, 2021).
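The kind of short, self-contained coded illustration that these points enable can be sketched as follows (a minimal example of our own, with made-up toy data; in practice one would use a toolkit like scikit-learn and an open data set):

```python
# A 1-nearest-neighbour classifier on a made-up 2-D toy data set, written in
# plain Python so it runs without any ML toolkit installed. Such snippets can
# accompany a lecture chunk and serve as a starting point for exploration.
import math

# (point, label) pairs forming two small, well-separated clusters
train = [((1.0, 1.0), "blue"), ((1.2, 0.8), "blue"),
         ((4.0, 4.0), "red"),  ((3.8, 4.2), "red")]

def predict_1nn(x):
    """Return the label of the training point closest to x (Euclidean)."""
    return min(train, key=lambda p: math.dist(p[0], x))[1]

print(predict_1nn((1.1, 0.9)))  # query near the "blue" cluster
print(predict_1nn((4.1, 3.9)))  # query near the "red" cluster
```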
34
+
35
+ The following recommendations should always be seen in view of the points mentioned above. They also demonstrate that our community is closely connected to the open source spirit and transferring concepts related to open source to teaching in ML should feel even more natural than in other sciences. Furthermore, as students are (or better: should be) used to working with practical ML notebooks on open data sets, each source example used in a lecture chapter (to generate a plot, animation or toy result) provides a potential starting point for further student exploration.
36
+
37
+ ## 3. Developing OSER - the Core Principles
38
+
39
+ We argue that developing OSER has several benefits for students as well as for lecturers and that a lot can be gained from transferring concepts and workflows from software engineering in general and open source software development specifically to the development of OSER, e.g., collaborative work in decentralized teams, modularization, issue tracking, change tracking, and working in properly scheduled release cycles. In the following we list major core principles which in our opinion provide the basis for successful development of open source resources, including brief hints regarding useful technical tools and workflows (many, again, inspired from open source software engineering) and briefly discuss connected challenges.
40
+
41
+ Develop course material collaboratively with others. When several experts from a specific field come together to develop a course, there is a realistic chance that the material will be of higher quality, more balanced, and up-to-date. Furthermore, the total workload for each member of the collaboration is smaller compared with creating courses individually. However, developing a course together comes with the necessity of more communication between the members of the group, e.g., to ensure a consistent storyline and a set of common prerequisites, teaching goals, and mathematical notation. In order to reduce associated costs, an efficient communication structure and the right toolkits are vital, e.g., Git for version control and Mattermost as a communication platform.
42
+
43
+ Make your sources open and modifiable. If only the 'consumer' products of the course (e.g., lecture slides and videos) are published with a license to reuse them, other teachers are forced to take or leave the material as a whole for their course, since any edits would require the huge effort of rebuilding sources from scratch and would also cut this teacher off from any future improvements of the base material (a hard fork). Therefore, all source files should be made public as well. Furthermore, opening material sources to the public does not only imply public reading access but also the possibility of public contributions to and feedback on the material. A quality control gate has to be implemented in order to ensure that contributions always improve the quality of the material. This can, e.g., be achieved by pull requests in a Git-based system, where suggested modifications are reviewed by members of the core maintainer team.
44
+
45
+ Use open licenses. In order to be able to share material legally, permissive licenses that allow for modification and redistribution have to be used. The OER community recommends licenses of the Creative Commons organization that were designed for all kinds of creative material (e.g., images, texts, and slides). The approach we are proposing, however, consists of creative material but also of source code that allows third parties to tailor the material. Therefore, open-source licenses also need to be considered: Taking the definition of the Open Source Initiative (OSI) as a guideline, we recommend releasing the material under two different licenses: Source files such as LaTeX, R, or Python files should be released under a permissive BSD-like license or a protective GPL or AGPL license, while files such as images, videos, and slides should be released under a Creative Commons license such as CC-BY-SA 4.0.
46
+
47
+ Release well-defined versions and maintain change logs. As development versions of the material will not be overall consistent, it is important to tag versions that can be considered 'stable'. These releases should be easily identifiable and accessible. A change log that lists main changes compared to prior versions should accompany every release.
48
+
49
+ Define prerequisites and learning goals of the course. In order for other lecturers to efficiently evaluate the material for use in their course, it is important to clearly define the scope of the course and its prerequisites, potentially providing references to books or online material which are well-suited to bring students to the desired starting level. Furthermore, each chapter or course unit also needs clearly defined learning goals, so that lecturers can easily select relevant subsets of the material and remix or extend them.
50
+
51
+ Foster self-regulated learning. In our opinion, only active application of newly learned material guarantees proper memorization and deeper understanding. Such application entails example calculations, method applications on toy examples, and active participation in theoretical arguments and proofs. Such exercises are not trivially constructed if one aims at automatic self-assessment to support independent self-study of students. A simple option is multiple-choice quizzes, allowing students to test their understanding after watching videos or reading texts. As students might choose correct answers for the wrong reasons, quizzes should ideally be accompanied by in-depth explanations. Coding exercises, especially important in ML and DS to deepen practical understanding, should be accompanied by well-documented solutions. They can at least be partially assessed by subdividing the required solution into smaller components and defining strict function signatures for each part. Their correctness can then be examined in a piecemeal, step-by-step manner on progressively more complex input-output pairs with failure feedback - pretty much exactly as unit tests are constructed in modern software development.
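The assessment idea above can be sketched as follows (an illustrative setup of our own, not an actual course autograder; the exercise and all names are hypothetical): the exercise fixes a strict function signature, and a grader checks the submission unit-test style on input-output pairs, reporting the first failure as feedback.

```python
# Minimal autograder sketch: run a submitted function against a list of
# (input, expected) cases and report the first failure, like a unit test.

def grade(submission, cases):
    """Check submission on (args, expected) pairs; return feedback string."""
    for args, expected in cases:
        got = submission(*args)
        if got != expected:
            return f"FAIL on input {args}: expected {expected}, got {got}"
    return "PASS"

# Required signature for a (hypothetical) exercise: mean(values) -> float
def student_mean(values):
    return sum(values) / len(values)

# Progressively harder test cases for the exercise
cases = [(([2.0, 4.0],), 3.0),
         (([1.0, 2.0, 3.0, 4.0],), 2.5)]

print(grade(student_mean, cases))
```

A buggy submission (e.g., one that returns `0.0`) would instead receive a `FAIL` message naming the first input it got wrong, giving students concrete feedback for self-study.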
52
+
53
+ Modularization: Structure the material in small chunks. We recommend structuring the material in small chunks with a very clearly defined learning goal per chunk, cf. microlearning and microcontent (Hug, 2006). While microlearning is aimed at enabling more successful studying in smaller, well-defined units, we would like to emphasize that such modularization is also highly beneficial from an OSER developer perspective. Highly modularized material can be adapted for different use cases much more easily, and this design principle is analogous to the way good software libraries are constructed. Modularization enables teachers to make changes to specific parts of their course without the need to modify a large set of different chunks (UC4 and UC5). Additional topics and concepts can be plugged in smoothly (UC6) and compiling a smaller, partial course (UC2) is rather convenient and often necessary. Finally, the existing material (or a subset of it) can be used much more easily as a starting point for developing a somewhat different course (UC3).
54
+
55
+ Modularization: Disentangle theoretical concepts and implementation details. For most topics, there is no single best programming language, and preferences and languages themselves evolve quickly over time. To ensure that the choice of programming language does not limit who can study the course, the lecture material should, wherever possible, separate theoretical considerations from coding demos, toolkit discussions and coding exercises. That way, these components can be swapped out or provided in alternative languages without affecting the remaining material - e.g., an ML lecture with practical variants for Python, R, and Julia, where the latter can be freely chosen depending on the students' background. Even more importantly, this enables a focused, modularized change if a developer wants to teach the same course via a different programming language.
56
+
57
+ Do not use literate programming systems everywhere. Literate programming systems (Geoffrey M. Poore, 2019; Xie, 2018) provide a convenient way to combine descriptive text (e.g., LaTeX or Markdown) with source code that produces figures or tables (e.g., R or Python) in one single source file. At least for lecture slides, we advise against literate programming systems, and instead advocate using a typesetting system such as LaTeX with externally generated (fully reproducible) code parts for examples, figures and tables, to provide modularization of content and content generation. The mixture of typesetting and code languages usually results in simultaneous dependency, debugging and runtime problems and can make simple text modifications much more tedious than they should be.
58
+
59
+ Enable feedback from everyone. Feedback for OSER can come from colleagues and other experts, but students and student assistants also provide very valuable feedback in our experience. Giving students the chance to submit pull requests can further improve the material and student engagement. Therefore, we advocate being open to feedback from all directions and all levels of expertise. Modern VCS platforms like GitHub provide infrastructure for broad-based feedback via issue trackers, pull requests and project wikis.
60
+
61
+ ## 4. Using OSER in Blended Learning
62
+
63
+ High-quality OSER provide an ideal foundation for blended learning scenarios, in which direct interaction with students complements their self-study based on the OSER. Our ideas of how to design an accompanying inverted classroom are based on our experiences from recent years, where we have offered several such courses, including an introduction to ML, an advanced ML course, and a full MOOC on a specialization in ML on a platform without paywalls. ${}^{1}$ All the materials are publicly available in GitHub repositories, incl. LaTeX files, code for generating plots, demos and exercises, and automatically graded quizzes for self-assessment.
64
+
65
+ Even if the goal of the online material is to be as self-contained as possible to optimally support self-regulated learning, an accompanying class - where lecturers and students can be in direct contact - will increase learning success. This class should not consist of repeating the lecture material in a classical lecture, making the videos redundant. Instead, it should use all the online material and add the valuable component of interacting with others - other students and lecturers. The goal of the class should be to encourage students' active engagement with the material by asking and answering open questions, discussing case studies or discussing more advanced topics. It can consist, e.g., of a question and answer session, live quizzes moderated by the lecturer, group work regarding the exercise sheets, and many more. It is key that students are engaged as much as possible here to foster active and critical thinking.
66
+
67
+ ## 5. Challenges and Discussion
68
+
69
+ Quality control and assurance of consistency. A single lecturer should always know the status of their material and can organize changes in any form, without further communication. With a (potentially large) development team, well-intentioned changes can even degrade the quality of the course; consistency of narrative, notation, and simply correctness of edits by less experienced developers have to be ensured. Additionally, it can be a substantial initial effort to integrate existing material of different previous courses from different instructors into a single shared course. Therefore, a quality control process has to be implemented, which generates additional overhead.
70
+
71
+ Changed workflow for lecturers. Developing an online course and teaching in an accompanying inverted classroom changes the workflow of the lecturer. Whereas the material in a classical lecture is presented at fixed time slots during the semester, the online setup allows even more liberal allocation of work time not only for the development, but also for the recording of the material. Furthermore, material can now be iterated in focused sprints and larger parts of well-established lectures can be re-used during the semester without changes. This can result in large time gains and better control of time allocation for the developer. On the other hand, our experience shows that recording high-quality videos is considerably more time-consuming ${}^{2}$ than a classical lecture in person.
72
+
73
+ Technical barrier. The entire team of developers, from senior lecturers to student assistants, has to work with a much larger toolkit chain that requires more technical expertise. Reducing this entry barrier as much as possible, by not overcomplicating setups and by providing guidance from senior developers, is absolutely key in our experience.
74
+
75
+ Enable communication between students and between students and lecturers. When using the OSER in a full online or MOOC setting, it is important for students to communicate amongst each other and obtain answers from lecturers in order to provide active, positive exchange between all participants and to create a group experience. We think this is a key challenge, and easier to accomplish in a blended learning setup with on-campus sessions. Possible, at least partial remedies are an online forum or a peer-review system for exercises where students give feedback to other students resulting in a scalable feedback system. Especially for the fields of ML and DS, online forums such as Stack Overflow or Cross Validated are widely used and can be reused for lecture questions if threads are properly tagged. This not only reuses existing open tools, but provides the opportunity of exchange with a larger community.
76
+
77
+ Granting certificates for online students in MOOCs. An open issue remains the question if and how (external) students who take a full online course can be granted certificates in some way. Challenges are: (1) Scalability of the grading process for a potentially very large number of students. A possible solution could be assessments by randomly assigned peers in combination with few samples graded by instructors. (2) Preventing fraud and making sure that people answered exam questions on their own. The risk can be mitigated by randomly assigning tasks, asking open questions or assigning more creative tasks for which there is no single correct answer. (3) Designing an evaluation process that evaluates the learning goals of the course.
78
+
79
+ Providing solutions. Solutions should be online and accessible at all times, but focused, unassisted work on solving the exercises has a positive impact on the learning success. It is somewhat unclear whether providing fully worked out solutions encourages students to access these too early.
80
+
81
+ Credit assignment. If a larger group of developers collaborates on a course, it is no longer clear who should get credit for which parts of the material. The quantity and quality of contributions by the different contributors will vary. We recommend a magnanimous and non-hierarchical policy of generous credit assignment that does not emphasize such differences to avoid alienating potential contributors.
82
+
83
+ ---
84
+
85
+ ${}^{1}$ Details now omitted to preserve double-blind review process.
86
+
87
+ ${}^{2}$ maybe by a factor of 3-4, personal estimate by one author
88
+
89
+ ---
90
+
91
+ ## References
92
+
93
+ Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
94
+
95
+ Boitsev, A., Romanov, A., Volchek, D., Mikhailova, E., Grafeeva, N., and Egorova, O. Introduction to machine learning. https://www.edx.org/course/introduction-to-machine-learning, 2021. Accessed: 2021-05-31.
96
+
97
+ Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009. doi: 10.1109/CVPR.2009.5206848.
98
+
99
+ Eremenko, K. and de Ponteves, H. Machine learning a-z: Hands-on python & r in data science. https://www.udemy.com/share/101Wci/, 2021. Accessed: 2021-05-31.
100
+
101
+ Geoffrey M. Poore. Codebraid: Live Code in Pandoc Markdown. In Chris Calloway, David Lippa, Dillon Niederhut, and David Shupe (eds.), Proceedings of the 18th Python in Science Conference, pp. 54-61, 2019. doi: 10.25080/Majora-7ddc1dd1-008.
102
+
103
+ Google. Introduction to machine learning. https://developers.google.com/machine-learning/crash-course/ml-intro, 2021. Accessed: 2021-05-31.
104
+
105
+ Hug, T. Microlearning: a new pedagogical challenge (introductory note). In Hug, T., Lindner, M., and Bruck, P. A. (eds.), Microlearning: Emerging Concepts, Practices and Technologies After E-Learning: Proceedings of Microlearning Conference 2005: Learning & Working in New Media, pp. 8-11. Innsbruck University Press, Innsbruck, Austria, 2006.
106
+
107
+ Kapp, K. M. The gamification of learning and instruction: game-based methods and strategies for training and education. John Wiley & Sons, 2012.
108
+
109
+ Kuhn, M. et al. Building predictive models in R using the caret package. Journal of Statistical Software, 28(5):1-26, 2008.
110
+
111
+ Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., and Bischl, B. mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 2019. doi: 10.21105/joss.01903. URL https://joss.theoj.org/papers/10.21105/joss.01903.
112
+
113
+ Malone, K. and Thrun, S. Introduction to machine learning course. https://www.udacity.com/course/intro-to-machine-learning--ud120, 2021. Accessed: 2021-05-31.
114
+
115
+ MIT. Mit open learning library. https://openlearning.mit.edu/courses-programs/open-learning-library, 2021. Accessed: 2021-05-31.
118
+
119
+ Montani, I. Advanced nlp with spacy: A free online course. https://github.com/ines/spacy-course, 2021. Accessed: 2021-06-14.
120
+
121
+ Ng, A. Machine learning. https://www.coursera.org/learn/machine-learning, 2021. Accessed: 2021-05-31.
122
+
123
+ OEG. Open education global initiative. https://www.oeglobal.org/, a. Accessed: 2021-05-31.
124
+
125
+ OEG. Open education global initiative - definition oer. https://www.oeglobal.org/about-us/what-we-do/, b. Accessed: 2021-05-31.
126
+
127
+ OSI. The open source definition. https://opensource.org/docs/osd. Accessed: 2021-06-04.
128
+
129
+ Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019. URL https://papers.nips.cc/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf.
130
+
131
+ Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
132
+
133
+ Polak, J. and Cook, D. A study on student performance, engagement, and experience with kaggle inclass data challenges. Journal of Statistics and Data Science Education, 29(1):63-70, 2021.
134
+
135
+ Sulmont, L., Lacroix, H., and Billen, S. "ml" for everyone. https://learn.datacamp.com/courses/machine-learning-for-everyone, 2021. Accessed: 2021-05-31.
136
+
137
+ UNESCO. Building knowledge societies. https://en.unesco.org/themes/building-knowledge-societies/oer. Accessed: 2021-05-31.
142
+
143
+ UNESCO. Recommendation on open educational resources (oer). http://portal.unesco.org/en/ev.php-URL_ID=49556&URL_DO=DO_TOPIC&URL_SECTION=201.html, 2019. Accessed: 2021-05-31.
144
+
145
+ Vanschoren, J. An open machine learning course. https://github.com/ML-course/master, 2021. Accessed: 2021-06-14.
146
+
147
+ Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. Openml: Networked science in machine learning. SIGKDD Explorations, 15(2):49-60, 2013. doi: 10.1145/2641190.2641198. URL http://doi.acm.org/10.1145/2641190.2641198.
148
+
149
+ Xie, Y. knitr: a comprehensive tool for reproducible research in R. In Implementing reproducible research, pp. 3-31. Chapman and Hall/CRC, 2018.
150
+
151
+ Çetinkaya-Rundel, M. Data analysis and statistical inference. https://github.com/mine-cetinkaya-rundel/sta101-s16, 2021a. Accessed: 2021-05-31.
152
+
153
+ Çetinkaya-Rundel, M. Intro to data science. https://github.com/Sta199-S18/website, 2021b. Accessed: 2021-05-31.
154
+
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/mdiNjHHdoQ7/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,81 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § DEVELOPING OPEN SOURCE EDUCATIONAL RESOURCES FOR MACHINE LEARNING AND DATA SCIENCE
2
+
3
+ § ANONYMOUS AUTHORS ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Education should not be a privilege but a common good. It should be openly accessible to everyone, with as few barriers as possible; even more so for key technologies such as Machine Learning (ML) and Data Science (DS). Open Educational Resources (OER) are a crucial factor for greater educational equity. In this paper, we describe the specific requirements for OER in ML and DS and argue that it is especially important for these fields to make source files publicly available, leading to Open Source Educational Resources (OSER). We present our view on the collaborative development of OSER, the challenges this poses, and first steps towards their solutions. We outline how OSER can be used for blended learning scenarios and share our experiences in university education. Finally, we discuss additional challenges such as credit assignment or granting certificates.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ Education is of paramount importance to overcome social inequalities. Globalization and broad access to the internet provide a major opportunity for allowing many more people from all over the world to access high quality educational resources. We endorse the vision of the Open Education Global initiative (OEG, a) and believe that teaching materials developed with the support of public financial resources should be openly accessible for the general public. The initiative aims at providing Open Educational Resources (OER) (OEG, b) which come with the '5R' permissions to retain, reuse, revise, remix, and redistribute the material. The UNESCO strongly promotes the concept of OER (UNESCO), has held two world congresses on OER, in 2012 and 2017, and finally adopted a recommendation on OER (UNESCO, 2019). This recommendation is claimed to be
12
+
13
+ 'the only existing international standard-setting instrument on OER' (UNESCO).
14
+
15
+ A variety of Massive Open Online Courses (MOOCs) have been created in the fields of Machine Learning (ML) and Data Science (DS). These MOOCs are mostly offered by commercial platforms (e.g., Ng, 2021; Boitsev et al., 2021; Malone & Thrun, 2021; Google, 2021; Sulmont et al., 2021; Eremenko & de Ponteves, 2021) but also by university-owned platforms (e.g., MIT, 2021).
16
+
17
+ Although the material itself is often freely accessible, access to the sources that are needed to reproduce, modify and reuse the material is usually not provided. Only a small fraction of courses in ML and DS actually share all their sources, such as slide sources in .pptx or LaTeX and source code for plots, examples, and exercises. We call those 'Open Source Educational Resources' (OSER) to underline this feature; positive examples include Montani (2021); Çetinkaya-Rundel (2021a;b); Vanschoren (2021).
18
+
19
+ Direct benefits of OSER from the perspective of lecturers include: 1. The material will often be of higher quality if additional experts are able to contribute and improve the material. 2. It is more efficient to develop a new course since material can be adapted and re-used from previously created courses legally. 3. Starting from an established OSER course, lecturers can focus on developing additional chapters and tailoring existing material to their audience.
20
+
21
+ We believe there will never be the one and only course on a certain topic - in fact, diversity in how topics are taught is important. Reasons include different constraints on the volume of a course defined by an institution's curriculum, different backgrounds (in the context of ML and DS courses, e.g., statistics, mathematics, computer science), different types of institutions (e.g., university vs. continuing education, undergraduate vs. (post)graduate), different substantive focus, and sometimes simply different styles. On the other hand, it seems natural that teachers of many subjects should be able to find networks of peers among which a considerable amount of content can be similar, and we advocate that it is only sensible and efficient to share, reuse and collaboratively improve teaching resources in such cases. But even if such a network of peers with a shared interest in a certain topic is established, usage of the material is often more complicated and less straightforward, as adaptations and modifications are often still necessary to accommodate the different contexts and constraints of each institution the teachers work at. The more easily and naturally such (reasonable and common) modifications can be made, the more likely it is that like-minded developers and teachers can agree to form a networked team. In our view, there are six different use cases (UC) in this context that can be classified as usage of the material and contribution to the material. Usage consists of (UC1) usage of material without any modifications for teaching, (UC2) usage of a subset of the material for a smaller course, and (UC3) development of a somewhat different course where the existing material is used as a starting point. Contribution consists of (UC4) correcting errors, (UC5) improving the existing material, and (UC6) adding new material, which leads to a larger OSER collection. 
In our experience, sharing teaching material is much more complex than just publishing lecture slides on the web. In this paper, we describe which core principles of developing OSER in general and, specifically, for teaching ML and DS should be considered, share our experiences of applying them and point towards open questions and challenges.
22
+
23
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
24
+
25
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
26
+
27
+ § 2. OSER AND MACHINE LEARNING
28
+
29
+ In this article, we discuss OSER from the perspective of teaching ML and DS. Standard material in ML and DS naturally includes theoretical components introducing mathematical concepts, methods and algorithms, typically presented via lecture slides and accompanying videos, as well as practical components such as code demos, which are important to allow for hands-on experience. In contrast to many other disciplines: 1. ML's strong foundation in statistics and algorithms makes it possible to define and illustrate many concepts via (pseudo-)code; 2. Large, open data repositories (such as OpenML (Vanschoren et al., 2013) or ImageNet (Deng et al., 2009)) allow students to obtain hands-on experience on many different applications. 3. Many state-of-the-art ML and DS packages (such as scikit-learn (Pedregosa et al., 2011), mlr3 (Lang et al., 2019), caret (Kuhn et al., 2008), tensorflow (Abadi et al., 2015) or pytorch (Paszke et al., 2019)) are open-source and freely available, so that students can directly learn to use libraries and frameworks that are also relevant in their future jobs; 4. Many concepts, algorithms, data characteristics and empirical results can be nicely visualized, and this often happens through short coded examples using the mentioned ML toolkits and open data repositories. 5. Gamification via competitions (Kapp, 2012) is possible since running experiments is (comparably) cheap, datasets are available, and existing platforms, such as Kaggle InClass, provide the necessary infrastructure and have shown improved learning outcomes (Polak & Cook, 2021).
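+ As an illustration of point 4, such a short coded example can be just a handful of lines. The following is a minimal sketch using scikit-learn and its bundled iris data; any of the toolkits and open data repositories mentioned above would work equally well:

```python
# Minimal lecture-style example: cross-validated accuracy of a small
# decision tree on an open dataset (iris, bundled with scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(tree, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.2f}")
```

Because every plot or toy result in a lecture chapter is backed by source like this, each such snippet doubles as a starting point for further student exploration.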
30
+
31
+ The following recommendations should always be seen in view of the points mentioned above. They also demonstrate that our community is closely connected to the open source spirit and transferring concepts related to open source to teaching in ML should feel even more natural than in other sciences. Furthermore, as students are (or better: should be) used to working with practical ML notebooks on open data sets, each source example used in a lecture chapter (to generate a plot, animation or toy result) provides a potential starting point for further student exploration.
32
+
33
+ § 3. DEVELOPING OSER - THE CORE PRINCIPLES
34
+
35
+ We argue that developing OSER has several benefits for students as well as for lecturers and that a lot can be gained from transferring concepts and workflows from software engineering in general and open source software development specifically to the development of OSER, e.g., collaborative work in decentralized teams, modularization, issue tracking, change tracking, and working in properly scheduled release cycles. In the following we list major core principles which in our opinion provide the basis for successful development of open source resources, including brief hints regarding useful technical tools and workflows (many, again, inspired from open source software engineering) and briefly discuss connected challenges.
36
+
37
+ Develop course material collaboratively with others. When several experts from a specific field come together to develop a course, there is a realistic chance that the material will be of higher quality, more balanced, and up-to-date. Furthermore, the total workload for each member of the collaboration is smaller compared with creating courses individually. However, developing a course together comes with the necessity of more communication between the members of the group, e.g., to ensure a consistent storyline and a set of common prerequisites, teaching goals, and mathematical notation. In order to reduce associated costs, an efficient communication structure and the right toolkits are vital, e.g., Git for version control and Mattermost as a communication platform.
38
+
39
+ Make your sources open and modifiable. If only the 'consumer' products of the course (e.g., lecture slides and videos) are published with a license to reuse them, other teachers are forced to take or leave the material as a whole for their course, since any edits would require the huge effort to build sources from scratch and would also cut off this teacher from any future improvements of the base material (a hard fork). Therefore, all source files should be made public as well. Furthermore, opening material sources to the public implies not only public reading access but also the possibility of public contributions to, and feedback on, the material. A quality control gate has to be implemented in order to ensure that contributions always improve the quality of the material. This can, e.g., be achieved by pull requests in a Git-based system, where suggested modifications are reviewed by members of the core maintainer team.
40
+
41
+ Use open licenses. In order to be able to share material legally, permissive licenses that allow for modification and redistribution have to be used. The OER community recommends licenses of the Creative Commons organization that were designed for all kinds of creative material (e.g., images, texts, and slides). The approach we are proposing, however, consists of creative material but also of source code that allows third parties to tailor the material. Therefore, open-source licenses also need to be considered: Taking the definition of the Open Source Initiative (OSI) as a guideline, we recommend releasing the material under two different licenses: Source files such as LaTeX, R or Python files should be released under a permissive BSD-like license or a protective GPL or AGPL license, while files such as images, videos, and slides should be released under a Creative Commons license such as CC-BY-SA 4.0.
42
+
43
+ Release well-defined versions and maintain change logs. As development versions of the material will not be overall consistent, it is important to tag versions that can be considered 'stable'. These releases should be easily identifiable and accessible. A change log that lists main changes compared to prior versions should accompany every release.
44
+
45
+ Define prerequisites and learning goals of the course. In order for other lecturers to efficiently evaluate the material for use in their course, it is important to clearly define the scope of the course and its prerequisites, potentially providing references to books or online material which are well-suited to bring students to the desired starting level. Furthermore, each chapter or course unit also needs clearly defined learning goals, so that lecturers can easily select relevant subsets of the material and remix or extend them.
46
+
47
+ Foster self-regulated learning. In our opinion, only active application of newly learned material guarantees proper memorization and deeper understanding. Such application entails example calculations, method applications on toy examples, and active participation in theoretical arguments and proofs. Such exercises are not trivially constructed if one aims at automatic self-assessment to support students' independent self-study. A simple option is multiple-choice quizzes, allowing students to test their understanding after watching videos or reading texts. As students might choose correct answers for the wrong reasons, quizzes should ideally be accompanied by in-depth explanations. Coding exercises, especially important in ML and DS to deepen practical understanding, should be accompanied by well-documented solutions. They can be at least partially assessed by subdividing the required solution into smaller components and defining strict function signatures for each part. Their correctness can then be examined in a piecemeal, step-by-step manner on progressively more complex input-output pairs with failure feedback - pretty much exactly as unit tests are constructed in modern software development.
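+ The unit-test analogy can be made concrete with a small Python sketch (the grader, exercise and names are hypothetical illustrations, not part of any specific course infrastructure): a strict function signature is fixed up front, and the submission is checked against progressively more complex input-output pairs, with the first failing case reported as feedback.

```python
def grade_submission(student_fn, cases):
    """Run a student's function on progressively more complex
    input-output pairs; stop at the first failure with feedback."""
    for i, (args, expected) in enumerate(cases, start=1):
        try:
            result = student_fn(*args)
        except Exception as exc:
            return False, f"case {i}: raised {exc!r}"
        if result != expected:
            return False, f"case {i}: expected {expected!r}, got {result!r}"
    return True, "all cases passed"

# Hypothetical exercise: implement the arithmetic mean of a list.
def student_mean(xs):
    return sum(xs) / len(xs)

ok, feedback = grade_submission(
    student_mean,
    [(([1.0],), 1.0), (([1, 2, 3],), 2.0), ((list(range(101)),), 50.0)],
)
```

Fixing the required signature (here: one function taking a single list) is what makes this piecemeal, automated assessment possible.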
48
+
49
+ Modularization: Structure the material in small chunks. We recommend structuring the material in small chunks with a very clearly defined learning goal per chunk, cf. microlearning and microcontent (Hug, 2006). While microlearning aims at enabling more successful studying in smaller, well-defined units, we would like to emphasize that such modularization is also highly beneficial from an OSER developer perspective. Highly modularized material can be adapted for different use cases much more easily, and this design principle is analogous to the way good software libraries are constructed. Modularization enables teachers to make changes to specific parts of their course without the need to modify a large set of different chunks (UC4 and UC5). Additional topics and concepts can be plugged in smoothly (UC6), and compiling a smaller, partial course (UC2) is rather convenient and often necessary. Finally, the existing material (or a subset of it) can be used much more easily as a starting point for developing a somewhat different course (UC3).
50
+
51
+ Modularization: Disentangle theoretical concepts and implementation details. For most topics, there is no single best programming language, and preferences and languages themselves evolve quickly over time. To ensure that the choice of programming language does not limit who can study the course, the lecture material should, wherever possible, separate theoretical considerations from coding demos, toolkit discussions and coding exercises. That way, these components can be swapped out or provided in alternative languages without affecting the remaining material - e.g., an ML lecture with practical variants for Python, R, and Julia, where the latter can be freely chosen depending on the students' background. Even more importantly, this enables a focused, modularized change if a developer wants to teach the same course via a different programming language.
52
+
53
+ Do not use literate programming systems everywhere. Literate programming systems (Geoffrey M. Poore, 2019; Xie, 2018) provide a convenient way to combine descriptive text (e.g., LaTeX or Markdown) with source code that produces figures or tables (e.g., R or Python) into one single source file. At least for lecture slides, we advise against literate programming systems, and instead advocate using a typesetting system such as LaTeX with externally generated (fully reproducible) code parts for examples, figures and tables to provide modularization of content and content generation. The mixture of typesetting and code language usually results in simultaneous dependency, debugging and runtime problems and can make simple text modifications much more tedious than they should be.
54
+
55
+ Enable feedback from everyone. Feedback for OSER can come from colleagues and other experts, but students and student assistants also provide very valuable feedback in our experience. Providing students the chance to submit pull requests can further improve the material and student engagement. Therefore, we advocate to be open to feedback from all directions and all levels of expertise. Modern VCS platforms like Github provide infrastructure for broad-based feedback via issue trackers, pull requests and project Wikis.
56
+
57
+ § 4. USING OSER IN BLENDED LEARNING
58
+
59
+ High-quality OSER provide an ideal foundation for blended learning scenarios, in which direct interaction with students complements their self-study based on the OSER. Our ideas of how to design an accompanying inverted classroom are based on our experiences from recent years where we have offered several such courses, including an introduction to ML, an advanced ML course, and a full MOOC on a specialization in ML at a platform without paywalls. ${}^{1}$ All the materials are publicly available in GitHub repositories, incl. LaTeX files, code for generating plots, demos and exercises, and automatically graded quizzes for self-assessment.
60
+
61
+ Even if the goal of the online material is to be as self-contained as possible to optimally support self-regulated learning, an accompanying class - where lecturers and students can be in direct contact - will increase learning success. This class should not consist of repeating the lecture material in a classical lecture, making the videos redundant. Instead, it should use all the online material and add the valuable component of interacting with others - other students and lecturers. The goal of the class should be to encourage students' active engagement with the material by asking and answering open questions, discussing case studies or discussing more advanced topics. It can consist, e.g., of a question and answer session, live quizzes moderated by the lecturer, group work regarding the exercise sheets, and many more. It is key that students are engaged as much as possible here to foster active and critical thinking.
62
+
63
+ § 5. CHALLENGES AND DISCUSSION
64
+
65
+ Quality control and assurance of consistency. A single lecturer should always know the status of their material and can organize changes in any form, without further communication. With a (potentially large) development team, well-intentioned changes can even degrade the quality of the course; consistency of narrative, notation, and simply correctness of edits by less experienced developers have to be ensured. Additionally, it can be a substantial initial effort to integrate existing material of different previous courses from different instructors into a single shared course. Therefore, a quality control process has to be implemented, which generates additional overhead.
66
+
67
+ Changed workflow for lecturers. Developing an online course and teaching in an accompanying inverted classroom changes the workflow of the lecturer. Whereas the material in a classical lecture is presented at fixed time slots during the semester, the online setup allows even more liberal allocation of work time not only for the development, but also for the recording of the material. Furthermore, material can now be iterated in focused sprints and larger parts of well-established lectures can be re-used during the semester without changes. This can result in large time gains and better control of time allocation for the developer. On the other hand, our experience shows that recording high-quality videos is considerably more time-consuming ${}^{2}$ than a classical lecture in person.
68
+
69
+ Technical barrier. The entire team of developers, from senior lecturers to student assistants, has to work with a much larger toolchain that requires more technical expertise. Reducing this entry barrier as much as possible, by not overcomplicating setups and by providing as much guidance from senior developers as possible, is absolutely key in our experience.
70
+
71
+ Enable communication between students and between students and lecturers. When using the OSER in a full online or MOOC setting, it is important for students to communicate amongst each other and obtain answers from lecturers in order to provide active, positive exchange between all participants and to create a group experience. We think this is a key challenge, and easier to accomplish in a blended learning setup with on-campus sessions. Possible, at least partial remedies are an online forum or a peer-review system for exercises where students give feedback to other students resulting in a scalable feedback system. Especially for the fields of ML and DS, online forums such as Stack Overflow or Cross Validated are widely used and can be reused for lecture questions if threads are properly tagged. This not only reuses existing open tools, but provides the opportunity of exchange with a larger community.
72
+
73
+ Granting certificates for online students in MOOCs. An open issue remains the question if and how (external) students who take a full online course can be granted certificates in some way. Challenges are: (1) Scalability of the grading process for a potentially very large number of students. A possible solution could be assessments by randomly assigned peers in combination with few samples graded by instructors. (2) Preventing fraud and making sure that people answered exam questions on their own. The risk can be mitigated by randomly assigning tasks, asking open questions or assigning more creative tasks for which there is no single correct answer. (3) Designing an evaluation process that evaluates the learning goals of the course.
74
+
75
+ Providing solutions. Solutions should be online and accessible at all times, but focused, unassisted work on solving the exercises has a positive impact on the learning success. It is somewhat unclear whether providing fully worked out solutions encourages students to access these too early.
76
+
77
+ Credit assignment. If a larger group of developers collaborates on a course, it is no longer clear who should get credit for which parts of the material. The quantity and quality of contributions by the different contributors will vary. We recommend a magnanimous and non-hierarchical policy of generous credit assignment that does not emphasize such differences to avoid alienating potential contributors.
78
+
79
+ ${}^{1}$ Details now omitted to preserve double-blind review process.
80
+
81
+ ${}^{2}$ maybe by a factor of 3-4, personal estimate by one author
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/pJWKWGNvoLi/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,153 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Collaborative Build of Statistics and Machine Learning Analysis Pipelines for Cosmology and Particle Physics
2
+
3
+ ## Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ In a full-time, week-long project after a Statistics and Machine Learning course, students grouped in "collaborations" of teams build pipelines to analyse scientific data: to prove the existence of the Higgs boson from Large Hadron Collider data or of Dark Energy from supernova surveys. They have the opportunity to implement together a variety of tools and concepts from physics, data processing, Statistics and Machine Learning, borrowed from the course, earlier training or resources from the Internet.
8
+
9
+ ## 1. Introduction
10
+
11
+ Quite often, when teaching Statistics and Machine Learning, specific topics are introduced one after the other and exercises (or hands-on tutorials) are self-contained. We have set up in 2019 (and repeated in 2020 and 2021) a broader week-long project where scientific data analysis pipelines are built by the students, complementing a 30-hour introductory course on Statistics and Machine Learning for particle physics and cosmology. The objective is to give them a thorough experience by connecting the various elements they have been taught in order to obtain the best possible measurement. The topics are broad and students must organise themselves in teams and collaborate in the same spirit as international experimental collaborations.
12
+
13
+ The original idea comes from another department in the University where students are working in teams to design a synchrotron beam line, from the extraction of the X-rays through various collimators onto a target. Here, data measured by a scientific instrument (an experiment at the Large Hadron Collider or large telescopes) are processed step by step until a measurement is obtained, including uncertainties.
14
+
15
+ ## 2. Organisation
16
+
17
+ ### 2.1. Overview
18
+
19
+ The project lasts one week full time, starting with a short kick-off meeting on Monday morning and ending with presentations on Friday afternoon. Prior to the project week, students are asked to group themselves into independent "collaborations" of about 25 students. The topic of each collaboration is then randomly assigned. Three tutors mentor four collaborations, circa 100 students (two tutors would barely suffice).
20
+
21
+ The 30-minute kick-off meeting on Monday allows tutors to give a very brief introduction to the projects and what each team is expected to do. Students are encouraged to find the necessary information in their courses and on the Internet.
22
+
23
+ The collaborations then have until noon to elect a "spokesperson" who will coordinate the different teams. Before 5 pm, the spokesperson must send a face book listing the members of each team of 5 students.
24
+
25
+ A large open space is reserved for the project for the entire week ${}^{1}$ , though the students are free to settle anywhere, outside scheduled meetings. Tutors remain available there all the time for scheduled or impromptu discussions.
26
+
27
+ Communication between tutors and students either occurs directly or through the official University chat/meeting tools, where dedicated channels are opened for each collaboration. Students are free to set up their own chat tools where they can exchange without tutor knowledge, to ease exchanges between them and to encourage the collaborative spirit. Each team has its own tasks, but the teams are interdependent through the global collaboration results. The interfaces between teams are not fully spelled out at the beginning and require some negotiation, so that teams have to share data, algorithms, concepts and tools along the week.
28
+
29
+ The spokespersons send every evening a status report, a bullet list of one page describing the status of each team. The status report is discussed every morning with the spokesperson in the presence of the whole collaboration so that issues are identified. If a team is stuck, or heading in a clearly wrong direction, a tutor will talk with them to help them overcome the hurdle. Initiatives from the teams are favoured, the mentoring from the tutor should be as light as possible. A second checkpoint takes place in the early afternoon with the spokesperson alone, often deeper on a specific topic or
30
+
31
+ ---
32
+
33
+ ${}^{1}$ Due to the Covid-19 pandemic, the project was conducted entirely online in 2020, while in 2021, 10% of the students were online, the rest being onsite.
34
+
35
+ ---
36
+
37
+ regarding potential problems in managing collaborations.
38
+
39
+ Friday morning is devoted to the preparation of a 45-minute presentation (+15 minutes for questions) by each collaboration. The presentation is introduced by the spokesperson, followed by a speaker from each team. The speaker is chosen at random from each team by the tutor and announced at noon, after the collection of deliverables, to ensure that everyone is fully engaged until then. Questions from students are encouraged.
40
+
41
+
42
+
43
+
44
+
45
+ The whole pipeline is implemented in Google Colaboratory Jupyter (Kluyver et al., 2016) Python notebooks, which were introduced and used during the practice sessions of the course. This solution was chosen for two reasons: it does not require any individual configuration and it is scalable. The initial dataset and intermediate steps are shared through Google Drive cloud storage. All students have laptops connected to a robust wireless network. Some students may also have installed an Anaconda environment on their laptops and developed on them.
46
+
47
+ ### 2.2. Student background
48
+
49
+ The course is designed for undergraduate students in engineering schools, at the university Bachelor level. After a general training, with majors in mathematics and physics, students choose their areas of interest (here physics) during their curriculum. The proposed course is part of the "data analysis" sequence of their curriculum. Most of our students had no background in the topics to be covered, other than a basic understanding of quantum mechanics and the basics of statistics and probability. Students have all received formal training in the Python language, but the level of proficiency varies greatly between students.
50
+
51
+ The course is designed as a series of introductory lectures on Cosmology and Particle Physics, allowing students to understand the type of data and the purpose of the data analysis. The Statistics and Machine Learning concepts listed in 3.2 are covered. Several practice sessions with exercises are scheduled to get students used to coding and data manipulation.
52
+
53
+ ### 2.3. Deliverables
54
+
55
+ The main deliverable is the 45-minute presentation. A pdf file of the slides (without animation!) has to be received by the tutors on Friday at noon. The presentation has to be understandable to students in other collaborations (e.g. the cosmology collaborations students should understand the Higgs boson presentation). In addition, notebooks also have to be delivered on Friday at noon. For instance, for the Higgs pipeline, 3 notebooks are requested: one that runs the pipeline from start to finish, except for the training, one that
56
+
57
+ trains the BDT (Boosted Decision Trees), one that trains the NN (Neural Networks). Additional notebooks to illustrate specific studies are also welcome. In all cases the notebooks should be clearly commented on what they are doing, and all graphs should be clearly labelled.
58
+
59
+ To track the progress of the project, the spokesperson sends a single page progress report by midnight each day. In addition, each student must fill out an online form by midnight with a few sentences about what they did, who they worked with, any difficulties and the plan for the next day. They are also encouraged to express their satisfaction/dissatisfaction.
60
+
61
+ ### 2.4. Evaluation
62
+
63
+ The evaluation of students' progress, work, and involvement during the week is an important and necessary element of the teaching experience. As indicated earlier, specific indicators for monitoring the on-going work were developed. The primary one is a comprehensive daily report with a short 5-10 minute presentation by the spokesperson for each collaboration. It is the primary source of evaluation of each team, complemented by ad-hoc discussions with the teams on the initiative of the teams themselves, the spokesperson or the tutors. Tutors remain watchful of each team's flexibility in adapting to upstream and downstream teams' requests. Most often, the difficulties are either technical or relational.
64
+
65
+ In addition to these overall reports, individual reports are requested through online forms to track each student's work and its consistency with the team's messages. These individual reports help to gauge individual efforts, as "invisible" students may contribute significantly to the team effort through private channels. The tutors may schedule face-to-face discussions with the few students who seem less engaged. Overall, the careful evaluation of teams and students is the main reason to limit the attendance to 50 students per tutor.
66
+
67
+ The final evaluation is based on a 45-minute presentation by each collaboration of the work done throughout the week. The spokesperson is responsible for the introduction, the overall consistency of the collaboration's message, and the overall conclusion. Each student's final grade is based primarily on the team performance, the quality and diversity of the studies performed, technical and interpersonal skills, and the quality of the Python notebooks. Tutors can award additional bonuses to spokespersons for the extra workload and to individuals who provided outstanding efforts for the overall collaboration.
68
+
69
+ ## 3. Academic content
70
+
71
+ ### 3.1. Brief description of the pipelines
72
+
73
+ Two alternative projects are proposed, which have been devised to credibly reproduce high-profile, Nobel prize level scientific achievements. Despite the sequential nature of the pipelines, all teams can get started immediately, before connecting to each other.
74
+
75
+ ##### 3.1.1. THE HIGGS PIPELINE
76
+
77
+ Students are presented with a special version of the Higgs Machine Learning (HiggsML) challenge dataset (ATLAS collaboration, 2014) and associated documentation. This dataset was created for a 2014 Kaggle challenge to investigate Machine Learning algorithms on a high-profile High Energy Physics task: extracting the Higgs boson signal from overwhelming background noise. The dataset is a csv file containing tabular data with 17 "primary" features corresponding to the measured parameters of the particles from the simulated proton collision. This is a classification task with a specific figure of merit, the Approximate Median Significance (AMS), that evaluates discovery potential. Prior to the project, students have studied notebooks for a BDT or NN trained on a similar dataset, which they can adapt to this new dataset as a starting point.
78
+
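The AMS formula itself is not spelled out above; as a point of reference, a minimal sketch of the figure of merit used in the HiggsML challenge (with the regularisation term `b_reg = 10` taken from the challenge documentation) could look like:

```python
import math

def ams(s: float, b: float, b_reg: float = 10.0) -> float:
    """Approximate Median Significance, as defined for the HiggsML challenge.

    s, b: sums of the weights of signal and background events selected by
    the classifier; b_reg is a regularisation term protecting against very
    small background counts.
    """
    return math.sqrt(2.0 * ((s + b + b_reg) * math.log(1.0 + s / (b + b_reg)) - s))

# A selection keeping more signal at equal background scores higher:
print(ams(100.0, 1000.0))  # roughly 3.1
print(ams(200.0, 1000.0))
```

Maximising this quantity, rather than accuracy, is what makes the classification task a discovery-oriented one.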
79
+ The pipeline to be built has five components to be addressed by one team each.
80
+
81
+ - Feature Engineering (FE): the original HiggsML dataset has been stripped of all "derived" features, computed using the knowledge of physics experts. The FE team should first rebuild the derived features (given by mathematical formulas), study their importance with the BDT and NN classifiers downstream, and propose new features.
82
+
83
+ - Boosted Decision Tree (BDT): the BDT team should train a BDT (actually two: XGBoost and LightGBM) to maximise the AMS, first on the original dataset with only primary features, then with the additional features provided by the FE team. They should proceed with hyperparameter optimisation (HPO) and other studies as listed in section 3.2.
84
+
85
+ - Neural Network (NN): the NN team should train a Neural Network to maximise the AMS, first on the original dataset with only primary features, then with the additional features provided by the FE team. They should proceed with HPO (in particular, optimise the architecture of the NN) and other studies as listed in section 3.2. Given the NN training time of a few minutes, compared to a few seconds for the BDT, they should organise well to cover the HPO space optimally.
86
+
87
+ - STAT: the STATISTICS team has to develop a likelihood framework based on the output of the two previously trained models (BDT and NN) in order to incorporate shape discrimination between signal and background and to exploit the modelling power of the algorithms to increase the statistical significance of signal detection.
88
+
89
+ - SYST: the SYSTEMATICS team should become familiar with the entire data analysis pipeline and develop a framework to re-evaluate the trained models (BDT and NN) under different conditions, when the nominal working assumptions are wrong or biased by some amount (e.g. a +3% error on the background estimate). A script to alter the original dataset is provided to them. They should evaluate the impact of the different biases on the input dataset to investigate systematic effects and, if possible, find ways to make the results robust to these effects during the model (BDT and NN) training stages.
90
+
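As a hedged sketch of the core loop shared by the BDT and STAT teams (synthetic data standing in for the HiggsML csv, scikit-learn's `GradientBoostingClassifier` standing in for XGBoost/LightGBM, and an unweighted AMS-style figure of merit), training followed by a decision-threshold scan might look like:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the HiggsML csv (the real data carries physics weights).
X, y = make_classification(n_samples=4000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

def significance(s, b, b_reg=10.0):
    # AMS-style figure of merit on selected signal/background counts.
    return np.sqrt(2.0 * ((s + b + b_reg) * np.log1p(s / (b + b_reg)) - s))

# Scan decision thresholds and keep the one maximising the significance.
best_t, best_sig = max(
    ((t, significance((scores[y_test == 1] > t).sum(),
                      (scores[y_test == 0] > t).sum()))
     for t in np.linspace(0.1, 0.9, 81)),
    key=lambda pair: pair[1])
print(f"best threshold {best_t:.2f}, significance {best_sig:.1f}")
```

The SYST team's job, in this picture, is to re-run the scan after biasing the inputs (e.g. scaling the background count by 1.03) and to check how much `best_sig` moves.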
91
+ Ideally, they should iterate to provide the best performance on the statistical significance of the Higgs signal determination while having the lowest dependence on the systematics that natively have the strongest impact.
92
+
93
+ ##### 3.1.2. THE COSMOLOGY PIPELINE
94
+
95
+ Students are presented with two simulated datasets: Type Ia supernovae and Cosmic Microwave Background (CMB) data. Both were simulated assuming cosmological parameters (Hubble-Lemaître constant ${H}_{0}$ , matter density ${\Omega }_{m}$ and Dark Energy density ${\Omega }_{\Lambda }$ ) distinct from those measured for our Universe. The aim of the project is to build a pipeline for each probe, going from raw observations to constraints on these parameters, as well as a joint analysis breaking the degeneracies of each probe. The different Work-Packages are organised in the following manner:
96
+
97
+ - WP-SN1: Supernovae detection from a series of images: The input data are a series of images containing stars and galaxies with known magnitude and location. The images are noisy and taken under different conditions. Supernovae are detected on image differences with respect to a reference one with no supernovae. Deliverables are a list of SNIa candidates for each field.
98
+
99
+ - WP-SN2: Photometry of detected supernovae from a series of images: The input data is similar to that of WP-SN1, complemented by a list of SNIa candidate coordinates. The deliverable is a calibrated flux measurement, with error bars, for each SN from the image differences.
100
+
101
+ - WP-SN3: Supernovae lightcurve fitting and cosmological constraints: The input data is a set of calibrated lightcurves for a number of SNe. The objective is to build a Hubble diagram from these SNe measuring their brightness at maximum including various corrections. A Markov-Chain-Monte-Carlo (MCMC) approach provides cosmological constraints on $\left( {H}_{0}\right.$ , $\left. {{\Omega }_{m},{\Omega }_{\Lambda }}\right)$ from the Hubble diagram.
102
+
103
+ - WP-CMB1: From Time-Ordered-Data (TOD) to Cosmic Microwave Background (CMB) Maps: The input data are noisy TOD along with the corresponding pointing. The deliverable is a projected map with uncertainties of these TODs maximising signal-to-noise ratio through time-domain filtering.
104
+
105
+ - WP-CMB2: From CMB Maps to CMB angular power spectra: The input data is a simulated observed map of the CMB, including inhomogeneous noise, and the corresponding coverage map. The objective and deliverable is to calculate the CMB angular power spectrum from this map; uncertainties are determined through a Monte-Carlo simulation.
106
+
107
+ - WP-CMB3: Cosmological constraints from CMB angular power spectra: The input data is an angular power spectrum of the CMB with error bars. The objective is to constrain cosmological parameters using a MCMC approach and theoretical power spectra.
108
+
109
+ Finally, WP-SN3 and WP-CMB3 are expected to perform a joint MCMC analysis of their dataset in order to obtain the final measurements on the cosmological parameters.
110
+
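A toy version of the MCMC step used in WP-SN3 and WP-CMB3 (a Metropolis random walk over a single parameter standing in for ${H}_{0}$, with mock linear-Hubble-law data generated on the spot; the real work packages fit several parameters against full lightcurve or power-spectrum models) could look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "Hubble diagram": velocity = H0 * distance, with Gaussian noise.
true_h0, sigma = 70.0, 50.0
dist = rng.uniform(10.0, 100.0, size=50)           # Mpc
vel = true_h0 * dist + rng.normal(0.0, sigma, 50)  # km/s

def log_like(h0):
    # Gaussian log-likelihood (chi-square up to a constant).
    return -0.5 * np.sum((vel - h0 * dist) ** 2) / sigma**2

# Metropolis-Hastings random walk over H0.
chain, h0, ll = [], 60.0, log_like(60.0)
for _ in range(5000):
    prop = h0 + rng.normal(0.0, 0.5)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject step
        h0, ll = prop, ll_prop
    chain.append(h0)

burned = np.array(chain[1000:])  # discard burn-in
print(f"H0 = {burned.mean():.1f} +/- {burned.std():.1f} km/s/Mpc")
```

A joint analysis simply sums the log-likelihoods of the two probes inside the same sampler, which is what breaks the parameter degeneracies.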
111
+ ### 3.2. Statistics and ML tools and concepts to be implemented
112
+
113
+ During the construction of the pipelines, students have the opportunity to implement many concepts and use many tools. Although the ones from the course are sufficient to obtain reasonable results, they are encouraged to look for more. These tools and concepts are listed below without any details.
114
+
115
+ In physics: special relativity, and how the Higgs boson was discovered! General relativity, to prove the existence of dark energy!
116
+
117
+ In statistics and data processing: data cleanup (check those NaNs!), signal processing and filtering, Markov Chain Monte Carlo, scientific plots with labels and legends. Students have to practice maximum likelihood and least squares estimators, check the consistency of the results and quantify the associated uncertainties, either by the classical definition of confidence intervals or by Monte Carlo simulation.
118
+
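For instance, the classical uncertainty on an estimator can be cross-checked against a Monte Carlo (bootstrap) estimate in a few lines; a sketch on toy Gaussian data (not the project's datasets):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(5.0, 2.0, size=400)  # toy measurements

# Maximum-likelihood estimate of the mean of a Gaussian sample ...
mu_hat = data.mean()
# ... with the classical standard error on the mean,
se_classic = data.std(ddof=1) / np.sqrt(len(data))
# ... and the same uncertainty from Monte Carlo (bootstrap) resampling.
boot = [rng.choice(data, size=len(data), replace=True).mean()
        for _ in range(1000)]
se_mc = np.std(boot)

print(f"mu = {mu_hat:.2f}, classical SE {se_classic:.3f}, MC SE {se_mc:.3f}")
```

The two error estimates should agree closely here; when they do not (e.g. for non-Gaussian residuals), the Monte Carlo one is usually the safer choice.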
119
+ In Machine Learning: use of SciPy and Scikit-Learn (Pedregosa et al., 2011), feature engineering (Machine Learning does not work miracles), feature normalisation, feature importance with permutation importance, feature selection, Boosted Decision Trees with XGBoost (Chen & Guestrin, 2016) and LightGBM (Ke et al., 2017), Neural Networks with Keras (Chollet et al., 2015), importance sampling, train vs. test splitting, cross-validation, overtraining, model serialisation, classifier evaluation, ROC curve, significance curve, hyperparameter optimisation (manual, grid search, random search), clustering (DBScan (Ester et al., 1996)).
120
+
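Several of these tools fit in a few lines of scikit-learn; a sketch (on synthetic data) combining feature normalisation, train/test splitting and permutation importance:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)  # normalise on the train split only
clf = RandomForestClassifier(random_state=0).fit(scaler.transform(X_train),
                                                 y_train)

# Permutation importance: drop in test score when one feature is shuffled.
result = permutation_importance(clf, scaler.transform(X_test), y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```

Fitting the scaler on the train split only is the point being exercised: normalising before splitting would leak test information into training.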
121
+ In addition, students' soft skills are exercised as well: team work, of course, at the level of the team and at the level of the collaboration, scientific discussion, and presentation of results in a compelling way.
122
+
123
+ ## 4. Outlook
124
+
125
+ By the end of the week, students manage to run a full data analysis pipeline, the details of which they expose in a way that shows (in most cases) they really understand what they are doing. For the Higgs pipeline, we had in mind that the students would iterate their models in order to minimise the overall uncertainty. In practice, no collaboration has had the time to do so. If we provided a functional minimal pipeline with clear interfaces to get started, they would spend less time on technicalities, but they would probably learn less overall.
126
+
127
+ Also, using Git is certainly a better way to exchange code than Google Colab, but few students master contributing to a Git repository (as opposed to downloading from it), so using Git would bar a large fraction of the students from contributing. On the other hand, the topics are rich enough that the job of any team is never complete; good students can always do more in-depth studies ${}^{2}$ . It is up to the tutors to adjust the balance between autonomy (at the risk of achieving little) and strong supervision (at the risk of falling back to more usual exercises).
128
+
129
+ This type of project is very different from a challenge "à la Kaggle" where a single figure of merit is optimised. What matters here is that students overcome the various difficulties with minimal guidance and are able to perform a number of small studies on their own. We have often seen teams perform unexpected studies with quite interesting results. This type of project can certainly be adapted to other datasets from different domains, and to students with different levels of expertise.
130
+
131
+ ## References
132
+
133
+ ATLAS collaboration. Dataset from the ATLAS Higgs boson machine learning challenge 2014, CERN Open Data Portal. http://doi.org/10.7483/OPENDATA.ATLAS.ZBP2.M5T8, 2014.
134
+
135
+ ---
136
+
137
+ ${}^{2}$ It does happen, rarely, that good students are reassigned to a struggling team at the suggestion of the spokesperson.
138
+
139
+ ---
140
+
141
+ Chen, T. and Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pp. 785-794, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2939785. URL http://doi.acm.org/10.1145/2939672.2939785.
142
+
143
+ Chollet, F. et al. Keras, 2015. URL https://github.com/fchollet/keras.
144
+
145
+ Ester, M., Kriegel, H.-P., Sander, J., and Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proc. of 2nd International Conference on Knowledge Discovery and Data Mining, pp. 226-231, 1996.
146
+
147
+ Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. LightGBM: A highly efficient gradient boosting decision tree. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/6449f44a102fde848669bdd9eb6b76fa-Paper.pdf.
148
+
149
+
150
+
151
+ Kluyver, T., Ragan-Kelley, B., Pérez, F., Granger, B., Bussonnier, M., Frederic, J., Kelley, K., Hamrick, J., Grout, J., Corlay, S., Ivanov, P., Avila, D., Abdalla, S., Willing, C., and the Jupyter development team. Jupyter notebooks - a publishing format for reproducible computational workflows. In Loizides, F. and Schmidt, B. (eds.), Positioning and Power in Academic Publishing: Players, Agents and Agendas, pp. 87-90. IOS Press, 2016. URL https://eprints.soton.ac.uk/403913/.
152
+
153
+ Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/pJWKWGNvoLi/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,125 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § COLLABORATIVE BUILD OF STATISTICS AND MACHINE LEARNING ANALYSIS PIPELINES FOR COSMOLOGY AND PARTICLE PHYSICS
2
+
3
+ § ANONYMOUS AUTHORS ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ In a full-time, week-long project after a Statistics and Machine Learning course, students grouped in "Collaborations" of teams build pipelines to analyse scientific data: to prove the existence of the Higgs boson from Large Hadron Collider data or of Dark Energy from supernova surveys. They have the opportunity to implement together a variety of tools and concepts from physics, data processing, Statistics and Machine Learning, borrowed from the course, earlier training or resources from the Internet.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ Quite often, when teaching Statistics and Machine Learning, specific topics are introduced one after the other and exercises (or hands-on tutorials) are self-contained. We have set up in 2019 (and repeated in 2020 and 2021) a broader week-long project where scientific data analysis pipelines are built by the students, complementing a 30-hour introductory course on Statistics and Machine Learning for particle physics and cosmology. The objective is to give them a thorough experience by connecting the various elements they have been taught in order to obtain the best possible measurement. The topics are broad and students must organise themselves in teams and collaborate in the same spirit as international experimental collaborations.
12
+
13
+ The original idea comes from another department in the University where students are working in teams to design a synchrotron beam line, from the extraction of the X-rays through various collimators onto a target. Here, data measured by a scientific instrument (an experiment at the Large Hadron Collider or large telescopes) are processed step by step until a measurement is obtained, including uncertainties.
14
+
15
+ § 2. ORGANISATION
16
+
17
+ § 2.1. OVERVIEW
18
+
19
+ The project lasts one week full time, starting with a short kick-off meeting on Monday morning and ending with presentations on Friday afternoon. Prior to the project week, students are asked to group themselves into independent "collaborations" of about 25 students. The topic of each collaboration is then randomly assigned. Three tutors mentor four collaborations, circa 100 students (two tutors would barely suffice).
20
+
21
+ The 30-minute kick-off meeting on Monday allows tutors to give a very brief introduction to the projects and what each team is expected to do. Students are encouraged to find the necessary information in their courses and on the Internet.
22
+
23
+ The collaborations then have until noon to elect a "spokesperson" who will coordinate the different teams. Before 5 pm, the spokesperson must send a face book listing the members of each team of 5 students.
24
+
25
+ A large open space is reserved for the project for the entire week ${}^{1}$ , though the students are free to settle anywhere, outside scheduled meetings. Tutors remain available there all the time for scheduled or impromptu discussions.
26
+
27
+ Communication between tutors and students either occurs directly or through the official University chat/meeting tools, where dedicated channels are opened for each collaboration. Students are free to set up their own chat tools where they can exchange without tutor knowledge, to ease exchanges between them and to encourage the collaborative spirit. Each team has its own tasks, but the teams are interdependent through the global collaboration results. The interfaces between teams are not fully spelled out at the beginning and require some negotiation, so that teams have to share data, algorithms, concepts and tools along the week.
28
+
29
+ The spokespersons send every evening a status report, a bullet list of one page describing the status of each team. The status report is discussed every morning with the spokesperson in the presence of the whole collaboration so that issues are identified. If a team is stuck, or heading in a clearly wrong direction, a tutor will talk with them to help them overcome the hurdle. Initiatives from the teams are favoured, the mentoring from the tutor should be as light as possible. A second checkpoint takes place in the early afternoon with the spokesperson alone, often deeper on a specific topic or
30
+
31
+ ${}^{1}$ Due to the Covid-19 pandemic, the project was conducted entirely online in 2020, while in 2021, 10% of the students were online, the rest being onsite.
32
+
33
+ regarding potential problems in managing collaborations.
34
+
35
+ Friday morning is devoted to the preparation of a 45-minute presentation (+15 minutes for questions) by each collaboration. The presentation is introduced by the spokesperson, followed by a speaker from each team. The speaker is chosen at random from each team by the tutor and announced at noon, after the collection of deliverables, to ensure that everyone is fully engaged until then. Questions from students are encouraged.
36
+
37
+
38
+
39
+
40
+
41
+ The whole pipeline is implemented in Google Colaboratory Jupyter (Kluyver et al., 2016) Python notebooks, which were introduced and used during the practice sessions of the course. This solution was chosen for two reasons: it does not require any individual configuration and it is scalable. The initial dataset and intermediate steps are shared through Google Drive cloud storage. All students have laptops connected to a robust wireless network. Some students may also have installed an Anaconda environment on their laptops and developed on them.
42
+
43
+ § 2.2. STUDENT BACKGROUND
44
+
45
+ The course is designed for undergraduate students in engineering schools, at the university Bachelor level. After a general training, with majors in mathematics and physics, students choose their areas of interest (here physics) during their curriculum. The proposed course is part of the "data analysis" sequence of their curriculum. Most of our students had no background in the topics to be covered, other than a basic understanding of quantum mechanics and the basics of statistics and probability. Students have all received formal training in the Python language, but the level of proficiency varies greatly between students.
46
+
47
+ The course is designed as a series of introductory lectures on Cosmology and Particle Physics, allowing students to understand the type of data and the purpose of the data analysis. The Statistics and Machine Learning concepts listed in 3.2 are covered. Several practice sessions with exercises are scheduled to get students used to coding and data manipulation.
48
+
49
+ § 2.3. DELIVERABLES
50
+
51
+ The main deliverable is the 45-minute presentation. A pdf file of the slides (without animation!) has to be received by the tutors on Friday at noon. The presentation has to be understandable to students in other collaborations (e.g. the cosmology collaborations students should understand the Higgs boson presentation). In addition, notebooks also have to be delivered on Friday at noon. For instance, for the Higgs pipeline, 3 notebooks are requested: one that runs the pipeline from start to finish, except for the training, one that
52
+
53
+ trains the BDT (Boosted Decision Trees), one that trains the NN (Neural Networks). Additional notebooks to illustrate specific studies are also welcome. In all cases the notebooks should be clearly commented on what they are doing, and all graphs should be clearly labelled.
54
+
55
+ To track the progress of the project, the spokesperson sends a single page progress report by midnight each day. In addition, each student must fill out an online form by midnight with a few sentences about what they did, who they worked with, any difficulties and the plan for the next day. They are also encouraged to express their satisfaction/dissatisfaction.
56
+
57
+ § 2.4. EVALUATION
58
+
59
+ The evaluation of students' progress, work, and involvement during the week is an important and necessary element of the teaching experience. As indicated earlier, specific indicators for monitoring the on-going work were developed. The primary one is a comprehensive daily report with a short 5-10 minute presentation by the spokesperson for each collaboration. It is the primary source of evaluation of each team, complemented by ad-hoc discussions with the teams on the initiative of the teams themselves, the spokesperson or the tutors. Tutors remain watchful of each team's flexibility in adapting to upstream and downstream teams' requests. Most often, the difficulties are either technical or relational.
60
+
61
+ In addition to these overall reports, individual reports are requested through online forms to track each student's work and its consistency with the team's messages. These individual reports help to gauge individual efforts, as "invisible" students may contribute significantly to the team effort through private channels. The tutors may schedule face-to-face discussions with the few students who seem less engaged. Overall, the careful evaluation of teams and students is the main reason to limit the attendance to 50 students per tutor.
62
+
63
+ The final evaluation is based on a 45-minute presentation by each collaboration of the work done throughout the week. The spokesperson is responsible for the introduction, the overall consistency of the collaboration's message, and the overall conclusion. Each student's final grade is based primarily on the team performance, the quality and diversity of the studies performed, technical and interpersonal skills, and the quality of the Python notebooks. Tutors can award additional bonuses to spokespersons for the extra workload and to individuals who provided outstanding efforts for the overall collaboration.
64
+
65
+ § 3. ACADEMIC CONTENT
66
+
67
+ § 3.1. BRIEF DESCRIPTION OF THE PIPELINES
68
+
69
+ Two alternative projects are proposed, which have been devised to credibly reproduce high-profile, Nobel prize level scientific achievements. Despite the sequential nature of the pipelines, all teams can get started immediately, before connecting to each other.
70
+
71
+ § 3.1.1. THE HIGGS PIPELINE
72
+
73
+ Students are presented with a special version of the Higgs Machine Learning (HiggsML) challenge dataset (ATLAS collaboration, 2014) and associated documentation. This dataset was created for a 2014 Kaggle challenge to investigate Machine Learning algorithms on a high-profile High Energy Physics task: extracting the Higgs boson signal from overwhelming background noise. The dataset is a csv file containing tabular data with 17 "primary" features corresponding to the measured parameters of the particles from the simulated proton collision. This is a classification task with a specific figure of merit, the Approximate Median Significance (AMS), that evaluates discovery potential. Prior to the project, students have studied notebooks for a BDT or NN trained on a similar dataset, which they can adapt to this new dataset as a starting point.
74
+
75
+ The pipeline to be built has five components to be addressed by one team each.
76
+
77
+ * Feature Engineering (FE): the original HiggsML dataset has been stripped of all "derived" features, computed using the knowledge of physics experts. The FE team should first rebuild the derived features (given by mathematical formulas), study their importance with the BDT and NN classifiers downstream, and propose new features.
78
+
79
+ * Boosted Decision Tree (BDT): the BDT team should train a BDT (actually two: XGBoost and LightGBM) to maximise the AMS, first on the original dataset with only primary features, then with the additional features provided by the FE team. They should proceed with hyperparameter optimisation (HPO) and other studies as listed in section 3.2.
80
+
81
+ * Neural Network (NN): the NN team should train a Neural Network to maximise the AMS, first on the original dataset with only primary features, then with the additional features provided by the FE team. They should proceed with HPO (in particular optimising the architecture of the NN) and other studies as listed in section 3.2. Given the NN's training time of a few minutes, compared to a few seconds for the BDT, they should organise well to cover the HPO space optimally.
82
+
83
+ * STAT: the STATISTICS team has to develop a likelihood framework based on the output of the two previously trained models (BDT and NN) in order to incorporate shape discrimination between signal and background and to exploit the modelling power of the algorithms to increase the statistical significance of the signal detection.
84
+
85
+ * SYST: the SYSTEMATICS team should become familiar with the entire data analysis pipeline and develop a framework to re-evaluate the trained models (BDT and NN) under different conditions, when the nominal working assumptions are wrong or biased by some amount (e.g. a +3% error on the background estimate). A script for altering the original dataset is provided to them. They should evaluate the impact of the different biases on the input dataset to investigate systematic effects and, if possible, find ways to make the results robust to these effects during the model (BDT and NN) training stages.
86
+
87
+ Ideally, the teams should iterate to obtain the best statistical significance for the Higgs signal determination together with the lowest dependence on the systematics that natively have the strongest impact.
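To make the STAT team's task concrete, here is a minimal sketch of a binned Poisson likelihood over a classifier-score histogram, scanned for the signal strength $\mu$; the numbers below are made up for the sketch, whereas the real pipeline uses the BDT and NN output histograms:

```python
import math

def nll(mu, sig, bkg, obs):
    """Negative log-likelihood of a binned Poisson counting model.

    Expected count per bin is mu * s_i + b_i; the mu-independent
    log(n!) term is dropped since it does not affect the minimum.
    """
    total = 0.0
    for s_i, b_i, n_i in zip(sig, bkg, obs):
        lam = mu * s_i + b_i
        total += lam - n_i * math.log(lam)
    return total

# Toy histograms of a classifier score (illustrative numbers only).
sig = [1.0, 3.0, 8.0]    # expected signal per bin
bkg = [50.0, 20.0, 5.0]  # expected background per bin
obs = [51, 24, 13]       # observed counts

# Scan mu on a grid and keep the value minimising the NLL.
grid = [i / 100 for i in range(0, 301)]
mu_hat = min(grid, key=lambda mu: nll(mu, sig, bkg, obs))
```

The SYST team's study then amounts to repeating this scan with, e.g., `bkg` scaled by 1.03 and comparing the fitted signal strengths.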
88
+
89
+ § 3.1.2. THE COSMOLOGY PIPELINE
90
+
91
+ Students are presented with two simulated datasets: Type Ia supernovae and Cosmic Microwave Background (CMB) data. Both were simulated assuming cosmological parameters (Hubble-Lemaître constant ${H}_{0}$ , matter density ${\Omega }_{m}$ and Dark Energy density ${\Omega }_{\Lambda }$ ) distinct from those measured for our Universe. The aim of the project is to build a pipeline for each probe, going from raw observations to constraints on these parameters, as well as a joint analysis breaking the degeneracies of each probe. The different Work-Packages are organised in the following manner:
92
+
93
+ * WP-SN1: Supernovae detection from a series of images: The input data are a series of images containing stars and galaxies with known magnitudes and locations. The images are noisy and taken under different conditions. Supernovae are detected on image differences with respect to a reference image with no supernovae. The deliverable is a list of SNIa candidates for each field.
94
+
95
+ * WP-SN2: Photometry of detected supernovae from a series of images: The input data are similar to those of WP-SN1, together with the list of SNIa candidate coordinates. The deliverable is a measurement, on image differences, of each SN's calibrated flux with error bars.
96
+
97
+ * WP-SN3: Supernovae lightcurve fitting and cosmological constraints: The input data is a set of calibrated lightcurves for a number of SNe. The objective is to build a Hubble diagram from these SNe by measuring their brightness at maximum, including various corrections. A Markov-Chain-Monte-Carlo (MCMC) approach provides cosmological constraints on $\left( {H}_{0}\right.$ , $\left. {{\Omega }_{m},{\Omega }_{\Lambda }}\right)$ from the Hubble diagram.
98
+
99
+ * WP-CMB1: From Time-Ordered-Data (TOD) to Cosmic Microwave Background (CMB) Maps: The input data are noisy TOD along with the corresponding pointing. The deliverable is a projected map of these TODs with uncertainties, maximising the signal-to-noise ratio through time-domain filtering.
100
+
101
+ * WP-CMB2: From CMB Maps to CMB angular power spectra: The input data is a simulated observed map of the CMB, including inhomogeneous noise, and the corresponding coverage map. The objective and deliverable is to calculate the CMB angular power spectrum from this map; uncertainties are determined through Monte-Carlo simulation.
102
+
103
+ * WP-CMB3: Cosmological constraints from CMB angular power spectra: The input data is an angular power spectrum of the CMB with error bars. The objective is to constrain cosmological parameters using a MCMC approach and theoretical power spectra.
104
+
105
+ Finally, WP-SN3 and WP-CMB3 are expected to perform a joint MCMC analysis of their datasets in order to obtain the final measurements of the cosmological parameters.
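The MCMC machinery shared by WP-SN3, WP-CMB3 and the joint fit can be illustrated on a one-parameter toy posterior. This is a bare-bones Metropolis sketch; the actual analyses sample $\left( {H}_{0},{\Omega }_{m},{\Omega }_{\Lambda }\right)$ with a physical likelihood:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Minimal Metropolis sampler with a Gaussian random-walk proposal."""
    rng = random.Random(seed)
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        # Accept with probability min(1, exp(lp_new - lp)).
        if lp_new >= lp or rng.random() < math.exp(lp_new - lp):
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Toy posterior: Gaussian likelihood around a "measured" value of 0.7.
log_post = lambda x: -0.5 * ((x - 0.7) / 0.1) ** 2
chain = metropolis(log_post, x0=0.0, n_steps=5000)
burned = chain[1000:]  # discard burn-in
mean = sum(burned) / len(burned)
```

The posterior mean and spread of `burned` then play the role of the parameter estimate and its uncertainty.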
106
+
107
+ § 3.2. STATISTICS AND ML TOOLS AND CONCEPTS TO BE IMPLEMENTED
108
+
109
+ During the construction of the pipelines, students have the opportunity to implement many concepts and use many tools. Although the ones from the course are sufficient to obtain reasonable results, they are encouraged to look for more. These tools and concepts are listed below without any details.
110
+
111
+ In physics: special relativity, and how the Higgs boson was discovered! General relativity, to prove the existence of dark energy!
112
+
113
+ In statistics and data processing: data cleanup (check those NaNs!), signal processing and filtering, Markov Chain Monte Carlo, scientific plots with labels and legends. Students have to practice maximum likelihood and least squares estimators, check the consistency of the results, and quantify the associated uncertainties, either through the classical definition of confidence intervals or by Monte Carlo simulation.
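For instance, least squares estimation combined with a Monte Carlo uncertainty, as practiced in the projects, can be sketched on a toy straight-line dataset (all numbers below are illustrative):

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy data: a straight line with Gaussian noise of known sigma.
rng = random.Random(0)
sigma = 0.5
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 + rng.gauss(0.0, sigma) for x in xs]
a_hat, b_hat = fit_line(xs, ys)

# Monte Carlo uncertainty on the slope: refit many simulated
# datasets drawn from the fitted model.
slopes = []
for k in range(200):
    r = random.Random(k + 1)
    ys_sim = [a_hat * x + b_hat + r.gauss(0.0, sigma) for x in xs]
    slopes.append(fit_line(xs, ys_sim)[0])
mean_slope = sum(slopes) / len(slopes)
err_slope = (sum((s - mean_slope) ** 2 for s in slopes) / len(slopes)) ** 0.5
```

The spread of the simulated slopes is directly comparable to the classical analytic error, which is a useful consistency check for students.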
114
+
115
+ In Machine Learning: use of SciPy and Scikit-Learn (Pedregosa et al., 2011), feature engineering (Machine Learning does not work miracles), feature normalisation, feature importance with permutation importance, feature selection, Boosted Decision Trees with XGBoost (Chen & Guestrin, 2016) and LightGBM (Ke et al., 2017), Neural Networks with Keras (Chollet et al., 2015), importance sampling, train vs. test splitting, cross-validation, overtraining, model serialisation, classifier evaluation, ROC curve, significance curve, hyperparameter optimisation (manual, grid search, random search), clustering (DBScan (Ester et al., 1996)).
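As an example of these concepts, permutation importance can be computed by hand in a few lines (Scikit-Learn provides `sklearn.inspection.permutation_importance` for real use; the "trained model" below is a hypothetical threshold classifier for the sketch):

```python
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Average drop in accuracy when one feature column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / n_repeats

# Toy data: the label depends only on feature 0; feature 1 is pure noise.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [int(row[0] > 0.5) for row in X]
model = lambda row: int(row[0] > 0.5)  # stands in for a trained classifier

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
```

Shuffling the informative feature destroys the accuracy, while shuffling the noise feature changes nothing, which is exactly the signal the FE team looks for.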
116
+
117
+ In addition, students' soft skills are exercised as well: teamwork, of course, at the level of both the team and the collaboration, as well as scientific discussion and presenting results in a compelling way.
118
+
119
+ § 4. OUTLOOK
120
+
121
+ By the end of the week, students manage to run a full data analysis pipeline, the details of which they expose in a way that shows (in most cases) they really understand what they are doing. For the Higgs pipeline, we had in mind that the students would iterate their models in order to minimise the overall uncertainty. In practice no collaboration has had the time to do it so far. If we provided a functional minimal pipeline with clear interfaces to get started, they would spend less time on technicalities, but they would probably learn less overall.
122
+
123
+ Also, using Git is certainly a better way to exchange code than Google Colab, but few students master contributing to a Git repository (as opposed to downloading from it), so that requiring Git would bar a large fraction of the students from contributing. On the other hand, the topics are rich enough that the job of any team is never complete; good students can always do more in-depth studies ${}^{2}$ . It is up to the tutors to adjust the balance between autonomy (at the risk of achieving little) and strong supervision (at the risk of falling back to more usual exercises).
124
+
125
+ This type of project is very different from a challenge "à la Kaggle" where a single figure of merit is optimised. What matters here is that students overcome the various difficulties with minimal guidance and are able to perform a number of small studies on their own. We have often seen teams perform unexpected studies with quite interesting results. This type of project can certainly be adapted to other datasets from different domains, and to students with different levels of expertise.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/q8N3WvWvt4X/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,149 @@
1
+ # Participatory Live Coding and Learning-Centered Assessment in Programming for Data Science
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Programming for Data Science is a programming intensive data science course. This paper discusses a revision of the course to center student learning. The revision effort centered the desired learning outcomes and resulted in a course that charted an explicit path toward achieving them for students. This paper summarizes the overall design and provides practical details about instruction via participatory live coding and assessment with a competency-based grading scheme.
8
+
9
+ ## 1. Introduction
10
+
11
+ In this paper, we present an undergraduate course that teaches introductory data science through a programming intensive lens. As originally designed, the course involved lectures using slides and prefilled Jupyter Notebooks followed by in-class group work. In this format, the instructor provides the conceptual ideas for all material, foundational to advanced, and students figure out more practical details in group work and outside of class. Live coding flips the instructional model: the instructor provides core concepts with their practical details and students build on the base knowledge to learn more advanced aspects independently. Active learning in this version was independent rather than in groups, through formative assessment and following along coding. The changes in assessment were designed to create a more equitable and inclusive learning environment while maintaining high expectations for all students.
12
+
13
+ Section 2 describes the goals of the course through its context. Section 3 provides an overview of the design and how the pieces work together. The remainder of the paper goes into greater detail about how participatory live coding (Section 4) and learner-centric assessment (Section 5) worked in practice.
14
+
15
+ ## 2. Course Context
16
+
17
+ Programming for Data Science is a required course for Data Science (DS) Majors and a popular elective for Computer Science (CS) Majors as it fulfills the requirement for a programming-intensive elective. The prerequisite is one programming course, but no statistics or math. CS majors take Computer Programming taught in C++, after Survey of Computer Science taught in Python. DS majors take Intro to Computer Programming taught in Python, which covers topics with less theoretical depth than the CS majors' course. Many students fulfill prerequisites at community colleges that teach in Java, so some students come to the class with no prior experience in Python. This course is a prerequisite to Machine Learning, which covers the implementation of machine learning algorithms and is also required for DS majors and popular among CS majors.
18
+
19
+ In this context, the role of this course is to give students a chance to deepen their programming skills and get an overview of data science so that they can succeed in understanding the details of machine learning. To do this, communicating about their work, organizing data, and examining the results of machine learning models are essential, as these skills will support students to focus on the machine learning algorithms in the subsequent course. For some students, this will be their only course exposure to machine learning concepts prior to graduation, so their motivations are largely practical. The course's five learning outcomes are: (1) Describe the process of data science, define each phase, and identify standard tools. (2) Access and combine data in multiple formats for analysis. (3) Perform exploratory data analyses including descriptive statistics and visualization. (4) Select models for data by applying and evaluating multiple models to a single dataset. (5) Communicate solutions to problems with data in common industry formats.
20
+
21
+ ## 3. Design Overview
22
+
23
+ The course was developed using a reverse instructional design process, focused on guiding students to achieve the learning outcomes. To plan assessment, then instruction, the learning outcomes were broken down into 15 component skills, each of which was decomposed further into a 3-stage progression, shown in Table 1. The first level represents a basic understanding: the general terms and core concepts, typically at the understand level of Bloom's Taxonomy. The second level represents the ability to apply concepts with guidance as demonstrated in class, at the apply or analyze level of Bloom's Taxonomy. The third level is the ability to apply the general concepts beyond the scope demonstrated in class, operating at the evaluate or create levels of Bloom's Taxonomy. Each level of each skill is called an achievement, and these served as the basis of grading.
24
+
25
+ ---
26
+
27
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
28
+
29
+ Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute.
30
+
31
+ ---
32
+
33
+ The content of the course was sequenced to build skills early that would support later skills, and activities were crafted to review prior topics. We used loading data as a lens for review of basic programming and reinforcement of the overview of Data Science. Next, we covered exploratory data analysis using pandas in order to build understanding of what well structured data looks like and make the concepts of data science more concrete. Then we covered data cleaning, as a context for studying ways to manipulate data frames and review the data science process again, emphasizing how the stages interact and aren't always discrete. While cleaning data, we used visualizations and summary statistics to examine progress. This allowed for repetition and reinforcement. Databases served as context to discuss constructing datasets from pieces and a chance to reinforce concepts from accessing data.
40
+
41
+ In weeks 6-11, a new machine learning model each week served as context to motivate different aspects of evaluation and modeling, as shown in Table 2. We used the Scikit-Learn API to take a model-centric approach while sticking with the programming focus of the course (Buitinck et al., 2013). Scikit-Learn provides a large number of typical models with a consistent API, which made it easy for students to try out new models independently to extend what was taught. We used object inspection in Python to examine the attributes of the estimator objects to learn about the model parameters, and the built-in Jupyter help to learn about the hyperparameters. The final three weeks covered nontabular data through case studies that allowed for reinforcement of many different skills and centered students' interests. There was greater interest in natural language processing than in images, so we spent two weeks on text and only one week with deep learning. Fireside-chat-style interviews with practicing data scientists helped students see that what we had covered in class directly connected to what data science is like in industry.
42
+
43
+ ## 4. Class Sessions
44
+
45
+ During class time, I delivered instruction via participatory live coding, where the instructor types and explains code in real time and students follow along, typing the same code and getting practice in real time (Nederbragt et al., 2020). Participatory live coding models realistic programming: students observe the instructor make mistakes, get errors, and debug them in real time. Error messages are difficult to parse for novices, so seeing the instructor parse and resolve them helps reach proficiency in this much faster than relying on internet searches alone. Additionally, debugging a model that does not perform as expected is an even more complicated process, but with this model, we can see this in real time ${}^{1}$ . With the aid of Jupyter Notebooks, students have a copy of the code produced in class, with their own notes. In the Carpentries, where this model of teaching was popularized, the audience is novices, who are coming to programming as a supplemental skill to support their research. In that context, the learners need a minimal mental model of how the code works to move into a competent practitioner role; these learners have good knowledge of what data analyses they wish to do and attend the workshop to learn to code as a tool. This course is a 300-level elective for computer science and data science majors; these students need more conceptual understanding and come to the class with significant prior knowledge in programming.
48
+
49
+ These differences require adaptations to the practice. First, the more advanced material and short class periods (50 minutes, 3 times weekly) relative to a Carpentries workshop mean some necessary code excerpts are prohibitively long to type live. To accommodate, we used a short url that pointed to the markdown download page of a HackMD ${}^{2}$ pad, by appending /download/markdown, and imported it into an editable notebook cell with the IPython load magic: %load http://shorturl.co/123. This method was most often used for the first cell of class: a sequence of import statements. The HackMD is editable in real time while maintaining a consistent url, which allowed the instructor to add text there on the fly and share it with students immediately. Second, we used code inspection tools to examine data structures and class objects as a visual for conceptual discussions. For example, we printed out the object using the __dict__ attribute to see how the estimator object changed before and after fitting. We also made extensive use of built-in Jupyter help views to consider parameters of methods before calling them. This both gave a visual to complement explanation and modeled for students where they could get help while working independently. Many students come to this course unfamiliar with using a library's documentation because introductory courses use teaching-specific development environments, so this is an essential practical skill.
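The same inspection works outside of Jupyter. With a hypothetical minimal estimator written in the Scikit-Learn style (fitted attributes end with an underscore, as real estimators like `coef_` do), `vars()`, which reads `__dict__`, shows the fitted attribute appearing only after `fit`:

```python
class MeanEstimator:
    """A hypothetical minimal estimator following the Scikit-Learn
    convention that attributes learned by fit end with an underscore."""

    def __init__(self, shift=0.0):
        self.shift = shift  # a "hyperparameter", set at construction

    def fit(self, values):
        self.mean_ = sum(values) / len(values) + self.shift  # learned state
        return self

    def predict(self):
        return self.mean_

est = MeanEstimator()
before = set(vars(est))   # attribute names before fitting: {'shift'}
est.fit([1.0, 2.0, 3.0])
after = set(vars(est))    # a new fitted attribute has appeared
```

Comparing `before` and `after` makes the "model parameters live on the fitted object" idea visible to students.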
52
+
53
+ We used Prismia chat ${}^{3}$ for in-class assessment, which provides a chat-like interface for students and allows the instructional team to see all student responses at once, group them, and reply individually or group-wise. Many questions were multiple choice questions designed to probe specific misconceptions, though some were open-ended programming questions, where students submitted code to the chat. This served as formative assessment to reinforce concepts for students in real time and as a way for the instructor to monitor progress.
54
+
55
+ ---
56
+
57
+ ${}^{1}$ pg: 361-363 https://easyupload.io/ur0c3o
58
+
59
+ ${}^{2}$ https://hackmd.io/
60
+
61
+ ${}^{3}$ https://prismia.chat/
62
+
63
+ ---
64
+
65
+ Table 1. Achievement Definitions. There are three achievements for each of 15 skills, that describe a progression of learning for that skill. The keyword for each skill is a shorthand that was used throughout the course: from the schedule, to assignment text, and the gradebook.
66
+
67
+ <table><tr><td>keyword</td><td>skill</td><td>Level 1</td><td>Level 2</td><td>Level 3</td></tr><tr><td>python</td><td>pythonic code writing</td><td>python code that mostly runs, occasional pep8 adherence</td><td>python code that reliably runs, frequent pep8 adherence</td><td>reliable, efficient, pythonic code that consistently adheres to pep8</td></tr><tr><td>process</td><td>describe data science as a process</td><td>Identify basic components of data science</td><td>Describe and define each stage of the data science process</td><td>Compare different ways that data science can facilitate decision making</td></tr><tr><td>access</td><td>Access data in multiple formats</td><td>Load data from at least one format; Identify the most common data formats</td><td>Load data for processing from the most common formats; Compare and contrast most common formats</td><td>Access data from uncommon formats and identify best practices for formats in different contexts</td></tr><tr><td>construct</td><td>combine data from multiple sources</td><td>Identify what should happen to merge datasets or when they can be merged</td><td>Apply basic merges</td><td>Merge data that is not automatically aligned</td></tr><tr><td>summarize</td><td>Summarize and describe data</td><td>Describe the shape and structure of a dataset in basic terms</td><td>compute standard summary statistics of a whole dataset and grouped data</td><td>Compute and interpret various summary statistics of subsets of data</td></tr><tr><td>visualize</td><td>Visualize data</td><td>identify plot types, generate basic plots from Pandas</td><td>Generate multiple plot types with complete labeling with Pandas</td><td>generate and customize complex plots with plotting libraries</td></tr><tr><td>prepare</td><td>prepare data for analysis</td><td>identify if data is or is not ready for analysis, potential problems with data</td><td>apply data reshaping, cleaning, and filtering as directed</td><td>apply data reshaping, cleaning, and filtering manipulations reliably and correctly by assessing data as received</td></tr><tr><td>classification</td><td>Apply classification</td><td>Describe what classification is</td><td>Apply a prescribed classification model to a dataset</td><td>Select and apply appropriate classification models to different datasets</td></tr><tr><td>regression</td><td>Apply Regression</td><td>Identify what data that can be used for regression looks like</td><td>Fit linear regression models</td><td>Fit and explain regularized or nonlinear regression</td></tr><tr><td>clustering</td><td>Clustering</td><td>Describe what clustering is</td><td>apply basic clustering</td><td>apply multiple clustering techniques, and interpret results</td></tr><tr><td>evaluate</td><td>Evaluate model performance</td><td>Explain basic performance metrics for different data science tasks</td><td>Apply basic model evaluation metrics to a held out test set</td><td>Evaluate a model with multiple metrics and cross validation</td></tr><tr><td>optimize</td><td>Optimize model parameters</td><td>Identify when model parameters need to be optimized</td><td>Manually optimize basic model parameters such as model order</td><td>Select optimal parameters based on multiple quantitative criteria and automate parameter tuning</td></tr><tr><td>compare</td><td>compare models</td><td>Qualitatively compare model classes</td><td>Compare model classes specifically; compare performance of fit models</td><td>Evaluate tradeoffs between different model comparison types</td></tr><tr><td>unstructured</td><td>analyze unstructured data</td><td>Identify options for representing text data</td><td>Transform unstructured data for analysis</td><td>Compare and contrast multiple representations for text</td></tr><tr><td>workflow</td><td>Use standard tools to solve data science problems</td><td>Solve well structured problems with a single tool pipeline</td><td>Plan and execute solutions to fully specified problems; apply new features of standard tools</td><td>Scope, choose appropriate tools and solve open-ended data science problems; compare common tools</td></tr></table>
68
+
69
+ Table 2. Course Schedule with skills emphasized each week. Skills are defined in Table 1 and linked by the keyword column
70
+
71
+ <table><tr><td>Week</td><td>Topics</td><td>Skills</td></tr><tr><td>1</td><td>Overview, Python Review</td><td>python, process</td></tr><tr><td>2</td><td>Loading data</td><td>access, prepare, summarize</td></tr><tr><td>3</td><td>Exploratory Data Analysis</td><td>summarize, visualize</td></tr><tr><td>4</td><td>Data Cleaning</td><td>prepare, summarize, visualize</td></tr><tr><td>5</td><td>Databases & Merges</td><td>access, construct, summarize</td></tr><tr><td>6</td><td>Naive Bayes Classification</td><td>classification, evaluate</td></tr><tr><td>7</td><td>decision trees, cross validation</td><td>classification, evaluate</td></tr><tr><td>8</td><td>Linear Regression</td><td>regression, evaluate</td></tr><tr><td>9</td><td>Kmeans Clustering</td><td>clustering, evaluate</td></tr><tr><td>10</td><td>SVM, parameter tuning</td><td>optimize, evaluate, clustering</td></tr><tr><td>11</td><td>KNN, LASSO</td><td>compare, clustering, regression</td></tr><tr><td>12</td><td>Text Analysis</td><td>unstructured</td></tr><tr><td>13</td><td>Topic Modeling</td><td>unstructured, workflow</td></tr><tr><td>14</td><td>Deep Learning</td><td>workflow, compare</td></tr></table>
72
+
73
+ At the end of class, students were able to submit additional questions through a Google Form exit ticket. Answers to those questions were appended to the instructor notebook prior to posting it online in a course Jupyter Book ${}^{4}$ . The instructor notes were also annotated with resources, written explanations, and extra practice exercises using Jupyter Book special content blocks after converting to Myst Markdown.
74
+
75
+ ## 5. Assessment
76
+
77
+ In order to align assessment with the assumed model of skill acquisition, the course adopted a hybrid competency-specification based grading scheme. This grading scheme allowed specification grading of each activity, meaning that the instructor and teaching assistant did not have to calculate partial credit for assignments, and that students had multiple chances to demonstrate each competency in the course. Specification grading involves defining a set of criteria, the specifications, for an assignment and assessing on a binary: the specifications are met or not. In this case the specifications were the achievement definitions; crucially, this allows for some mistakes to be made if the understanding is demonstrated. Competency based grading allows students to work through material at their own pace and typically allows for resubmits on assignments. In this course, the grade was based on accumulated achievements, and there were multiple opportunities to earn each achievement through the design of assignments, rather than resubmits. This structure also helped students see the material as connected: the assignment on classification explicitly required that they also plot; the one on databases required they compute summary statistics.
78
+
79
+ ---
80
+
81
+ ${}^{4}$ anonymized pdf: https://easyupload.io/ur0c30
82
+
83
+ ---
84
+
85
+ Students had at least two opportunities to earn each of the 45 (15*3) achievements. Level 1 achievements could be earned on any type of activity: in class, assignments, or portfolio checks. Level 2 could be earned only on assignments and portfolios. Level 3 achievements could only be earned on portfolio submissions. Each skill was addressed in at least 3 class sessions, at least 2 weekly assignments and at least 2 portfolio submissions ${}^{5}$ . The communication learning outcome was built into all assignments and portfolios through the requirement to explain code and interpret results, using markdown cells in submitted Jupyter Notebooks.
86
+
87
+ Assignments were guided data analyses. Each allowed students to practice new concepts and skills with direct guidance. Each submitted assignment was graded on specification for level 2 achievement in each relevant skill independently; a student could earn a level 2 achievement for one skill but not another in a given assignment. If the submission did not meet the specification for level 2, it was evaluated for meeting level 1. For example, correctly using and interpreting summary statistics and choosing the right plot type, but failing to generate the plots in A3, would earn level 2 for summarize but level 1 for visualize. Students submitted assignments as Jupyter notebooks to a GitHub repository created with GitHub Classroom from a template repository, with a GitHub Action to convert submitted notebook files to Myst markdown with Jupytext (Team, 2020). The markdown format facilitated providing inline feedback through the Feedback Pull Request automatically created by GitHub Classroom, by making a human-readable file (Gennarelli, 2017; Team, 2020). This gave students explicit feedback about how to improve on future assignments. Because achievements were evaluated independently, students could skip portions of the assignment that assessed achievements they had already earned. For example, later assignments included suggestions for extra plots or modifications to the figures to include in order to earn level 2 for visualization. A student could also submit an empty repository indicating that they were choosing to attempt the relevant achievements through a different assignment.
96
+
97
+ Portfolios were a chance for students to demonstrate deeper understanding by building a large Jupyter Book in a single GitHub repository over the course of the semester. Students wrote an introduction describing what achievements they were attempting to earn and where in their portfolio each was addressed. Portfolios were graded on specification for the achievements the students identified. Students were provided with prompts to guide their inquiries to earn level 3, and the option to revise a previously submitted assignment to earn missed level 2 achievements. Prompts were open-ended and students were encouraged to propose alternative, creative options as well. To earn achievements for an assignment revision, the student had to submit a more reflective notebook than was required the first time: describing where they were stuck or did not understand, addressing feedback they received, comparing their solution to the correct one if appropriate, and explaining the correct answer.
102
+
103
+ In the end, accumulated achievements were converted to a letter grade using a series of minimum thresholds shown in Table 3: to earn a C, students had to accumulate all level 1 achievements; a B required all level 2 achievements; and an A required all level 3 achievements.
104
+
105
+ Table 3. Minimum Achievements required for each letter grade. For example, if a student earned all level 1 achievements, 13 level 2 achievements and 6 level 3 achievements, the grade would be a B-.
106
+
107
+ <table><tr><td>letter grade</td><td>Level 3</td><td>Level 2</td><td>Level 1</td></tr><tr><td>A</td><td>15</td><td>15</td><td>15</td></tr><tr><td>A-</td><td>10</td><td>15</td><td>15</td></tr><tr><td>B+</td><td>5</td><td>15</td><td>15</td></tr><tr><td>B</td><td>0</td><td>15</td><td>15</td></tr><tr><td>B-</td><td>0</td><td>10</td><td>15</td></tr><tr><td>C+</td><td>0</td><td>5</td><td>15</td></tr><tr><td>C</td><td>0</td><td>0</td><td>15</td></tr><tr><td>C-</td><td>0</td><td>0</td><td>10</td></tr><tr><td>D+</td><td>0</td><td>0</td><td>5</td></tr><tr><td>D</td><td>0</td><td>0</td><td>3</td></tr></table>
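The thresholds in Table 3 can be checked mechanically. A sketch that walks the table from the top and returns the first grade whose minimums are all met (grades below D are not specified in the table, so they map to None here):

```python
THRESHOLDS = [  # (grade, min level 3, min level 2, min level 1), from Table 3
    ("A", 15, 15, 15), ("A-", 10, 15, 15), ("B+", 5, 15, 15),
    ("B", 0, 15, 15), ("B-", 0, 10, 15), ("C+", 0, 5, 15),
    ("C", 0, 0, 15), ("C-", 0, 0, 10), ("D+", 0, 0, 5), ("D", 0, 0, 3),
]

def letter_grade(l1, l2, l3):
    """Return the highest grade whose minimum counts are all satisfied."""
    for grade, min3, min2, min1 in THRESHOLDS:
        if l3 >= min3 and l2 >= min2 and l1 >= min1:
            return grade
    return None  # below the D threshold (not specified in Table 3)

# The example from the table caption: all level 1, 13 level 2, 6 level 3.
print(letter_grade(15, 13, 6))  # B-
```

Expressing the rule this way also makes it easy for students to see exactly which achievements stand between them and the next grade.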
108
+
109
+ ## 6. Conclusion
110
+
111
+ This paper described Programming for Data Science, a programming-focused introduction to data science with learning-centered assessment. The design of the course was centered on student learning, but this organization also provides key advantages for the instructor. Grading without assigning partial credit is much less redundant and draining; giving inline feedback on how students can improve their code, either to meet the specification or to be a better coworker, is more enjoyable than taking points off. Having clear learning outcomes that needed to be met with each activity, written before the start of the semester, made the ongoing preparation lighter. Writing code live gives the freedom to adapt to student questions on the fly and means the advance preparation is only notes that will not be shared directly.
112
+
113
+ ## References
114
+
115
+ Buitinck, L., Louppe, G., Blondel, M., Pedregosa, F., Mueller, A., Grisel, O., Niculae, V., Prettenhofer, P., Gramfort, A., Grobler, J., Layton, R., VanderPlas, J., Joly, A., Holt, B., and Varoquaux, G. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pp. 108-122, 2013.
+
+ ---
+
+ ${}^{5}$ full allocation on pages 9-10: https://easyupload.io/uroc30
+
+ ---
124
+
125
+ Gennarelli, V. How to grade programming assignments on GitHub, June 2017. URL https://github.blog/2017-06-13-how-to-grade-programming-assignments-on-github/.
126
+
127
+ Nederbragt, A., Harris, R. M., Hill, A. P., and Wilson, G. Ten quick tips for teaching with participatory live coding. PLOS Computational Biology, 16(9):1-7, September 2020. doi: 10.1371/journal.pcbi.1008090. URL https://doi.org/10.1371/journal.pcbi.1008090. Publisher: Public Library of Science.
128
+
129
+ Team, J. Jupytext: Using at the Command Line, 2020. URL https://jupytext.readthedocs.io/en/latest/using-cli.html.
130
+
131
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/q8N3WvWvt4X/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,228 @@
1
+ § PARTICIPATORY LIVE CODING AND LEARNING-CENTERED ASSESSMENT IN PROGRAMMING FOR DATA SCIENCE
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Programming for Data Science is a programming-intensive data science course. This paper discusses a revision of the course to center student learning. The revision effort centered on the desired learning outcomes and resulted in a course that charts an explicit path for students toward achieving them. This paper summarizes the overall design and provides practical details about instruction via participatory live coding and assessment with a competency-based grading scheme.
8
+
9
+ § 1. INTRODUCTION
10
+
11
+ In this paper, we present an undergraduate course that teaches introductory data science through a programming-intensive lens. As originally designed, the course involved lectures using slides and prefilled Jupyter Notebooks followed by in-class group work. In this format, the instructor provides the conceptual ideas for all material, foundational to advanced, and students figure out the more practical details in group work and outside of class. Live coding flips the instructional model: the instructor provides core concepts with their practical details, and students build on that base knowledge to learn more advanced aspects independently. Active learning in this version was independent rather than in groups, through formative assessment and following along with coding. The changes in assessment were designed to create a more equitable and inclusive learning environment while maintaining high expectations for all students.
12
+
13
+ Section 2 describes the goals of the course through its context. Section 3 provides an overview of the design and how the pieces work together. The remainder of the paper goes into greater detail about how participatory live coding (Section 4) and learner-centric assessment (Section 5) worked in practice.
14
+
15
+ § 2. COURSE CONTEXT
16
+
17
+ Programming for Data Science is a required course for Data Science (DS) Majors and a popular elective for Computer Science (CS) Majors as it fulfills the requirement for a programming-intensive elective. The prerequisite is one programming course, but no statistics or math. CS majors take Computer Programming taught in C++, after Survey of Computer Science taught in Python. DS majors take Intro to Computer Programming taught in Python, which covers topics with less theoretical depth than the CS majors' course. Many students fulfill prerequisites at community colleges that teach in Java, so some students come to the class with no prior experience in Python. This course is a prerequisite to Machine Learning, which covers the implementation of machine learning algorithms and is also required for DS majors and popular among CS majors.
18
+
19
+ In this context, the role of this course is to give students a chance to deepen their programming skills and get an overview of data science so that they can succeed in understanding the details of machine learning. To do this, communicating about their work, organizing data, and examining the results of machine learning models are essential, as these skills will allow students to focus on the machine learning algorithms in the subsequent course. For some students, this will be their only course exposure to machine learning concepts prior to graduation, so their motivations are largely practical. The course's five learning outcomes are: (1) Describe the process of data science, define each phase, and identify standard tools. (2) Access and combine data in multiple formats for analysis. (3) Perform exploratory data analyses including descriptive statistics and visualization. (4) Select models for data by applying and evaluating multiple models to a single dataset. (5) Communicate solutions to problems with data in common industry formats.
20
+
21
+ § 3. DESIGN OVERVIEW
22
+
23
+ The course was developed using a reverse instructional design process, focused on guiding students to achieve the learning outcomes. To plan assessment, then instruction, the learning outcomes were broken down into 15 component skills, each of which was decomposed further into a 3-stage progression, shown in Table 1. The first level represents a basic understanding: the general terms and core concepts, typically at the understand level of Bloom's Taxonomy. The second level represents the ability to apply concepts with guidance as demonstrated in class, at the apply or analyze level of Bloom's Taxonomy. The third level is the ability to apply the general concepts beyond the scope demonstrated in class, operating at the evaluate or create levels of Bloom's Taxonomy. Each level of each skill is called an achievement, and these served as the basis of grading.
24
+
25
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
26
+
27
+ Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute.
28
+
29
+ The content of the course was sequenced so that skills built early would support later skills, and activities were crafted to review prior topics. We used loading data as a lens for review of basic programming and reinforcement of the overview of data science. Next, we covered exploratory data analysis using pandas in order to build understanding of what well-structured data looks like and make the concepts of data science more concrete. Then we covered data cleaning as a context for studying ways to manipulate data frames and review the data science process again, emphasizing how the stages interact and aren't always discrete. While cleaning data, we used visualizations and summary statistics to examine progress. This allowed for repetition and reinforcement. Databases served as context to discuss constructing datasets from pieces and a chance to reinforce concepts from accessing data.
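The grouped summaries students used to check their progress can be sketched as follows (a minimal illustration with made-up data, not course material; assumes pandas is installed):

```python
import pandas as pd

# Toy dataset standing in for the kind of data students load early on.
df = pd.DataFrame({
    "species": ["a", "a", "b", "b"],
    "length": [1.0, 1.2, 3.0, 3.4],
})

# Whole-dataset and grouped summary statistics, as practiced while cleaning.
overall_mean = df["length"].mean()
by_species = df.groupby("species")["length"].mean()
print(overall_mean)
print(by_species)
```

Comparing the overall statistic to the per-group breakdown is exactly the kind of quick check that surfaces data-cleaning problems.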
36
+
37
+ In weeks 6-11, a new machine learning model each week served as context to motivate different aspects of evaluation and modeling, as shown in Table 2. We used the scikit-learn API to take a model-centric approach while sticking with the programming focus of the course (Buitinck et al., 2013). Scikit-learn provides a large number of typical models with a consistent API, which made it easy for students to try out new models independently to extend what was taught. We used object inspection in Python to examine the attributes of the estimator objects to learn about the model parameters and the built-in Jupyter help to learn about the hyperparameters. The final three weeks covered nontabular data through case studies that allowed for reinforcement of many different skills and centered students' interests. There was greater interest in natural language processing than in images, so we spent two weeks on text and only one week with deep learning. Fireside chat style interviews with practicing data scientists helped students see that what we had covered in class directly connected to what data science is like in industry.
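The consistency of that API can be illustrated roughly like this (our sketch, with arbitrary toy data and arbitrarily chosen models; assumes scikit-learn is installed):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Tiny, clearly separable toy problem (made up for illustration).
X = [[0, 0], [0, 1], [1, 0], [1, 1], [4, 4], [4, 5], [5, 4], [5, 5]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

models = [LogisticRegression(), KNeighborsClassifier(n_neighbors=3),
          DecisionTreeClassifier(random_state=0)]

# Every estimator is driven through the same fit/score interface,
# which is what lets students swap in new models independently.
scores = {type(m).__name__: m.fit(X, y).score(X, y) for m in models}
print(scores)
```

Because the loop never mentions a model-specific method, adding a model a student found in the documentation is a one-line change.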
38
+
39
+ § 4. CLASS SESSIONS
40
+
41
+ During class time, I delivered instruction via participatory live coding, where the instructor types and explains code in real time and students follow along, typing the same code and getting practice in real time (Nederbragt et al., 2020). Participatory live coding models realistic programming: students observe the instructor make mistakes, get errors, and debug them in real time. Error messages are difficult for novices to parse, so seeing the instructor parse and resolve them helps students reach proficiency much faster than relying on internet searches alone. Additionally, debugging a model that does not perform as expected is an even more complicated process, but with this model, we can see this in real time ${}^{1}$ . With the aid of Jupyter Notebooks, students have a copy of the code produced in class, with their own notes. In the Carpentries, where this model of teaching was popularized, the audience is novices who are coming to programming as a supplemental skill to support their research. In that context, the learners need only a minimal mental model of how the code works to move into a competent practitioner role; these learners have good knowledge of what data analyses they wish to do and attend the workshop to learn to code as a tool. This course is a 300-level elective for computer science and data science majors; these students need more conceptual understanding and come to the class with significant prior knowledge in programming.
44
+
45
+ These differences require adaptations to the practice. First, the more advanced material and short class sessions (50 minutes, 3 times weekly) relative to a Carpentries workshop mean some necessary code excerpts are prohibitively long to type live. To accommodate, we used a short URL that pointed to the markdown download page for a HackMD ${}^{2}$ pad, by appending /download/markdown, and imported it into an editable notebook cell with the IPython load magic: %load http://shorturl.co/123. This method was most often used for the first cell of class: a sequence of import statements. The HackMD is editable in real time while maintaining a consistent URL, which allowed the instructor to add text there on the fly and share it with students immediately. Second, we used code inspection tools to examine data structures and class objects as a visual for conceptual discussions. For example, we printed out an object's __dict__ attribute to see how the estimator object changed before and after fitting. We also made extensive use of built-in Jupyter help views to consider parameters of methods before calling them. This both gave a visual to complement explanation and modeled for students where they could get help while working independently. Many students come to this course unfamiliar with using a library's documentation because introductory courses use teaching-specific development environments, so this is an essential practical skill.
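The before-and-after-fit inspection can be mimicked with a toy, scikit-learn-style estimator (a hand-rolled stand-in for illustration; in class the objects came from scikit-learn itself):

```python
class MeanEstimator:
    """Toy estimator following the scikit-learn convention that
    attributes learned during fitting get a trailing underscore."""

    def fit(self, X):
        self.mean_ = sum(X) / len(X)  # learned state appears only after fit
        return self

est = MeanEstimator()
print(vars(est))              # {} -- __dict__ is empty before fitting
est.fit([1.0, 2.0, 3.0])
print(vars(est))              # {'mean_': 2.0} -- fitted state now in __dict__
```

Printing `vars(obj)` (equivalently `obj.__dict__`) before and after `fit` makes the "fitting stores learned parameters on the object" idea concrete.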
48
+
49
+ We chatted through Prismia chat ${}^{3}$ for in-class assessment, which provides a chat-like interface for students and allows the instructional team to see all student responses at once, group them, and reply individually or group-wise. Many questions were multiple choice questions designed to probe specific misconceptions, though some were open-ended programming questions, where students submitted code to the chat. This served as a formative assessment to reinforce concepts for students in real time and as a way for the instructor to monitor progress.
50
+
51
+ ${}^{1}$ pg: 361-363 https://easyupload.io/ur0c3o
52
+
53
+ ${}^{2}$ https://hackmd.io/
54
+
55
+ ${}^{3}$ https://prismia.chat/
56
+
57
+ Table 1. Achievement Definitions. There are three achievements for each of 15 skills, which describe a progression of learning for that skill. The keyword for each skill is a shorthand that was used throughout the course: from the schedule, to assignment text, to the gradebook.
+
+ <table><tr><td>keyword</td><td>skill</td><td>Level 1</td><td>Level 2</td><td>Level 3</td></tr><tr><td>python</td><td>pythonic code</td><td>writing python code that mostly runs, occasional pep8 adherence</td><td>python code that reliably runs, frequent pep8 adherence</td><td>reliable, efficient, pythonic code that consistently adheres to pep8</td></tr><tr><td>process</td><td>describe data science as a process</td><td>Identify basic components of data science</td><td>Describe and define each stage of the data science process</td><td>Compare different ways that data science can facilitate decision making</td></tr><tr><td>access</td><td>Access data in multiple formats</td><td>Load data from at least one format; identify the most common data formats</td><td>Load data for processing from the most common formats; compare and contrast the most common formats</td><td>Access data from uncommon formats and identify best practices for formats in different contexts</td></tr><tr><td>construct</td><td>combine data from multiple sources</td><td>Identify what should happen to merge datasets or when they can be merged</td><td>Apply basic merges</td><td>Merge data that is not automatically aligned</td></tr><tr><td>summarize</td><td>Summarize and describe data</td><td>Describe the shape and structure of a dataset in basic terms</td><td>Compute standard summary statistics of a whole dataset and grouped data</td><td>Compute and interpret various summary statistics of subsets of data</td></tr><tr><td>visualize</td><td>Visualize data</td><td>identify plot types, generate basic plots from Pandas</td><td>Generate multiple plot types with complete labeling with Pandas</td><td>generate and customize complex plots with plotting libraries</td></tr><tr><td>prepare</td><td>prepare data for analysis</td><td>identify if data is or is not ready for analysis, potential problems with data</td><td>apply data reshaping, cleaning, and filtering as directed</td><td>apply data reshaping, cleaning, and filtering manipulations reliably and correctly by assessing data as received</td></tr><tr><td>classification</td><td>Apply classification</td><td>Describe what classification is</td><td>Apply a prescribed classification model to a dataset</td><td>Select and apply appropriate classification models to different datasets</td></tr><tr><td>regression</td><td>Apply regression</td><td>Identify what data that can be used for regression looks like</td><td>Fit linear regression models</td><td>Fit and explain regularized or nonlinear regression</td></tr><tr><td>clustering</td><td>Clustering</td><td>Describe what clustering is</td><td>apply basic clustering</td><td>apply multiple clustering techniques, and interpret results</td></tr><tr><td>evaluate</td><td>Evaluate model performance</td><td>Explain basic performance metrics for different data science tasks</td><td>Apply basic model evaluation metrics to a held out test set</td><td>Evaluate a model with multiple metrics and cross validation</td></tr><tr><td>optimize</td><td>Optimize model parameters</td><td>Identify when model parameters need to be optimized</td><td>Manually optimize basic model parameters such as model order</td><td>Select optimal parameters based on multiple quantitative criteria and automate parameter tuning</td></tr><tr><td>compare</td><td>compare models</td><td>Qualitatively compare model classes</td><td>Compare model classes specifically; compare performance of fit models</td><td>Evaluate tradeoffs between different model comparison types</td></tr><tr><td>unstructured</td><td>analyze unstructured data</td><td>Identify options for representing text data</td><td>Transform unstructured data for analysis</td><td>Compare and contrast multiple representations for text</td></tr><tr><td>workflow</td><td>Use standard tools to solve data science problems</td><td>Solve well structured problems with a single tool pipeline</td><td>Plan and execute solutions to fully specified problems; apply new features of standard tools</td><td>Scope, choose appropriate tools and solve open-ended data science problems; compare common tools</td></tr></table>
109
+
110
+ Table 2. Course Schedule with skills emphasized each week. Skills are defined in Table 1 and linked by the keyword column.
+
+ <table><tr><td>Week</td><td>Topics</td><td>Skills</td></tr><tr><td>1</td><td>Overview, Python Review</td><td>python, process</td></tr><tr><td>2</td><td>Loading data</td><td>access, prepare, summarize</td></tr><tr><td>3</td><td>Exploratory Data Analysis</td><td>summarize, visualize</td></tr><tr><td>4</td><td>Data Cleaning</td><td>prepare, summarize, visualize</td></tr><tr><td>5</td><td>Databases &amp; Merges</td><td>access, construct, summarize</td></tr><tr><td>6</td><td>Naive Bayes Classification</td><td>classification, evaluate</td></tr><tr><td>7</td><td>Decision trees, cross validation</td><td>classification, evaluate</td></tr><tr><td>8</td><td>Linear Regression</td><td>regression, evaluate</td></tr><tr><td>9</td><td>Kmeans Clustering</td><td>clustering, evaluate</td></tr><tr><td>10</td><td>SVM, parameter tuning</td><td>optimize, evaluate, clustering</td></tr><tr><td>11</td><td>KNN, LASSO</td><td>compare, clustering, regression</td></tr><tr><td>12</td><td>Text Analysis</td><td>unstructured</td></tr><tr><td>13</td><td>Topic Modeling</td><td>unstructured, workflow</td></tr><tr><td>14</td><td>Deep Learning</td><td>workflow, compare</td></tr></table>
159
+
160
+ At the end of class, students were able to submit additional questions through a Google Form exit ticket. Answers to those questions were appended to the instructor notebook prior to posting it online in a course Jupyter Book ${}^{4}$ . The instructor notes were also annotated with resources, written explanations, and extra practice exercises using Jupyter Book special content blocks after converting to MyST Markdown.
161
+
162
+ § 5. ASSESSMENT
163
+
164
+ In order to align assessment to the assumed model of skill acquisition, the course adopted a hybrid competency-specification based grading scheme. This grading scheme allowed specification grading of each activity, meaning that the instructor and teaching assistant did not have to calculate partial credit for assignments, and that students had multiple chances to demonstrate each competency in the course. Specification grading involves defining a set of criteria, the specifications, for an assignment and assessing on a binary: the specifications are met or not. In this case the specifications were the achievement definitions; crucially, this allows for some mistakes to be made if the understanding is demonstrated. Competency-based grading allows students to work through material at their own pace and typically allows for resubmits on assignments. In this course, the grade was based on accumulated achievements, and there were multiple opportunities to earn each achievement through the design of assignments, rather than resubmits. This structure also helped students see the material as connected: the assignment on classification explicitly required that they also plot; the one on databases required they compute summary statistics.
165
+
166
+ ${}^{4}$ anonymized pdf: https://easyupload.io/ur0c30
167
+
168
+ Students had at least two opportunities to earn each of the 45 (15*3) achievements. Level 1 achievements could be earned on any type of activity: in class, assignments, or portfolio checks. Level 2 could be earned only on assignments and portfolios. Level 3 achievements could only be earned on portfolio submissions. Each skill was addressed in at least 3 class sessions, at least 2 weekly assignments and at least 2 portfolio submissions ${}^{5}$ . The communication learning outcome was built into all assignments and portfolios through the requirement to explain code and interpret results, using markdown cells in submitted Jupyter Notebooks.
169
+
170
+ Assignments were guided data analyses. Each allowed students to practice new concepts and skills with direct guidance. Each submitted assignment was graded on specification for level 2 achievement in each relevant skill independently; a student could earn a level 2 achievement for one skill but not another in a given assignment. If the submission did not meet the specification for level 2, it was evaluated for meeting level 1. For example, correctly using and interpreting summary statistics and choosing the right plot type, but failing to generate the plots, in A3 would earn level 2 for summarize but level 1 for visualize. Students submitted assignments as Jupyter notebooks to a GitHub repository created with GitHub Classroom from a template repository that included a GitHub Action to convert submitted notebook files to MyST Markdown with Jupytext (Team, 2020). The markdown format facilitated providing inline feedback through the Feedback Pull Request automatically created with GitHub Classroom by making a human-readable file (Gennarelli, 2017; Team, 2020). This gave students explicit feedback about how to improve on future assignments. Because achievements were evaluated independently, students could skip portions of the assignment that assessed achievements they had already earned. For example, later assignments included suggestions for extra plots or modifications to the figures to include in order to earn level 2 for visualization. A student could also submit an empty repository indicating that they were choosing to attempt the relevant achievements through a different assignment.
179
+
180
+ Portfolios were a chance for students to demonstrate deeper understanding by building a large Jupyter Book in a single GitHub repository over the course of the semester. Students wrote an introduction describing which achievements they were attempting to earn and where in their portfolio each was addressed. Portfolios were graded on specification for the achievements the students identified. Students were provided with prompts to guide their inquiries to earn level three achievements, along with the option to revise a previously submitted assignment to earn missed level two achievements. Prompts were open-ended, and students were encouraged to propose alternative, creative options as well. To earn achievements through an assignment revision, the student had to submit a more reflective notebook than was required the first time: describing where they were stuck or did not understand, addressing feedback they received, comparing their solution to the correct one if appropriate, and explaining the correct answer.
185
+
186
+ In the end, accumulated achievements were converted to a letter grade with a series of minimum thresholds shown in Table 3: to earn a C, students had to accumulate all level 1 achievements; a B required all level 2 achievements; and an A required all level 3 achievements.
187
+
188
+ Table 3. Minimum Achievements required for each letter grade. For example, if a student earned all level 1 achievements, 13 level 2 achievements and 6 level 3 achievements, the grade would be a B-.
+
+ <table><tr><td>letter grade</td><td>Level 3</td><td>Level 2</td><td>Level 1</td></tr><tr><td>A</td><td>15</td><td>15</td><td>15</td></tr><tr><td>A-</td><td>10</td><td>15</td><td>15</td></tr><tr><td>B+</td><td>5</td><td>15</td><td>15</td></tr><tr><td>B</td><td>0</td><td>15</td><td>15</td></tr><tr><td>B-</td><td>0</td><td>10</td><td>15</td></tr><tr><td>C+</td><td>0</td><td>5</td><td>15</td></tr><tr><td>C</td><td>0</td><td>0</td><td>15</td></tr><tr><td>C-</td><td>0</td><td>0</td><td>10</td></tr><tr><td>D+</td><td>0</td><td>0</td><td>5</td></tr><tr><td>D</td><td>0</td><td>0</td><td>3</td></tr></table>
225
+
226
+ § 6. CONCLUSION
227
+
228
+ This paper described Programming for Data Science, a programming-focused introduction to data science with learning-centered assessment. The design of the course was centered on student learning, but this organization also provides key advantages for the instructor. Grading without assigning partial credit is much less redundant and draining; giving inline feedback on how students can improve their code, either to meet the specification or to be a better coworker, is more enjoyable than taking points off. Having clear learning outcomes that needed to be met with each activity, written before the start of the semester, made the ongoing preparation lighter. Writing code live gives the freedom to adapt to student questions on the fly and means the advance preparation is only notes that will not be shared directly.
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/yFPqbprG2Qb/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,189 @@
1
+ # Deeper Learning By Doing: Integrating Hands-On Research Projects Into A Machine Learning Course
2
+
3
+ ## Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ Machine learning has seen a vast increase of interest in recent years, along with an abundance of learning resources. While conventional lectures provide students with important information and knowledge, we also believe that additional project-based learning components can motivate students to engage in topics more deeply. In addition to incorporating project-based learning in our courses, we aim to develop project-based learning components aligned with real-world tasks, including experimental design and execution, report writing, oral presentation, and peer-reviewing. This paper describes the organization of our project-based machine learning courses with a particular emphasis on the class project components and shares our resources with instructors who would like to include similar elements in their courses.
8
+
9
+ ## 1. Motivation
10
+
11
+ Interest in machine learning and deep learning has been increasing in recent years. Similarly, the number of learning resources, including textbooks, blogs, online courses, and video tutorials, is growing rapidly as well. This is a great development, and one might say that getting into machine learning has never been easier.
12
+
13
+ However, we believe that while the process of absorbing knowledge from various resources is necessary, it is not sufficient for becoming a successful machine learning researcher or practitioner. Anecdotal evidence from online learning communities suggests that adopting an experimental mindset can accelerate learning (Osmulski, 2021). Moreover, analyses by Headden and McKay assert that "a sense of control over the work" is an essential aspect for motivating and engaging students in learning (Headden & McKay,
14
+
15
+ 2015). How can we foster such an experimental mindset and engage students? While we cannot answer this definitively, in this paper, we describe our deep learning course featuring project-based learning components, where students work on original questions and research topics that interest them.
16
+
17
+ Three years ago, we began designing machine learning and deep learning courses with substantial student project components, including an original research proposal, conference paper-style project report, oral class presentation, and paper peer-review. We have adopted and refined this approach throughout teaching six machine learning and deep learning courses. While similar project-based elements were used in different machine and deep learning courses, this paper will only focus on the latest deep learning course.
18
+
19
+ Based on anonymous surveys, the project-based learning components were, without exceptions, very well received by the students. In addition, we found that it was effective in fostering interaction and collaboration among students and offering students opportunities to practice essential communication skills. This paper outlines our latest project-based course format as well as some of the lessons learned.
20
+
21
+ ## 2. Overall Course Design
22
+
23
+ This section briefly outlines the overall course and lecture design to provide the broader context for the project-based learning components described in more detail in Section 3.
24
+
25
+ ### 2.1. Target Audience
26
+
27
+ The course is listed as an elective course for statistics and data science majors and is thus aimed at senior undergraduate students. Programming and scientific computing experience is highly recommended, but prior machine learning knowledge is not required.
28
+
29
+ ### 2.2. Lecture Topics
30
+
31
+ The course is intended as an introduction that exposes students to all major areas of deep learning, from single layer neural networks to transformers; we introduce the core concepts via face-to-face lectures over the course of 15 weeks. We omit a detailed lecture topic list for brevity, but interested readers can find one in our supplementary material ${}^{1}$ .
32
+
33
+ ---
34
+
35
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
36
+
37
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
38
+
39
+ ---
40
+
41
+ ### 2.3. Implementing Algorithms from Scratch and Using Libraries
42
+
43
+ While the general deep learning topics and concepts are taught in a conventional lecture format, using a tablet to augment presentation slides with rich annotations, we prepare and discuss full code examples as demonstrations for each topic.
44
+
45
+ We agree with Schiendorfer et al. (2021) that it could be beneficial to expose students to "from scratch" implementations in addition to teaching how to use established libraries. These implementations have pedagogical value since they serve as an additional "language" (in addition to drawings and mathematics) for describing algorithms. In addition, being familiar with coding algorithms from scratch can help students implement and experiment with their own ideas more readily.
46
+
47
+ However, implementing algorithms from scratch is both inefficient and error-prone. Hence, we believe that it is in the students' best interest to balance from-scratch implementations and using existing libraries. For example, after presenting students with the essential conceptual and mathematical details, we teach students how to implement a logistic regression classifier trained with stochastic gradient descent ${}^{2}$ . Then, we show students how the same can be achieved with PyTorch (Paszke et al., 2019) and its automatic differentiation capabilities ${}^{3}$ . We think that empowering students to implement algorithms from scratch but also showing more reliable and convenient tools motivates and demystifies the latter.
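A minimal from-scratch sketch of the kind of exercise described above (our illustration, not the course's actual assignment code) is a logistic regression classifier trained with stochastic gradient descent on a toy dataset:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=200, seed=0):
    """Stochastic gradient descent on the logistic loss, one example at a time."""
    rng = random.Random(seed)
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        order = list(range(len(X)))
        rng.shuffle(order)  # visit examples in random order each epoch
        for i in order:
            z = sum(wj * xj for wj, xj in zip(w, X[i])) + b
            err = sigmoid(z) - y[i]  # gradient of the loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, X[i])]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5)

# Linearly separable toy data (made up for illustration).
X = [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [6.0, 5.0]]
y = [0, 0, 1, 1]
w, b = train_logreg(X, y)
print([predict(w, b, x) for x in X])
```

After students have written the update rule by hand, re-expressing the same model with PyTorch's automatic differentiation is the step that demystifies the library.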
48
+
49
+ ### 2.4. Student Work and Evaluation
50
+
51
+ Besides attending the lectures, students are presented with weekly quizzes, homework (approximately every two weeks), a midterm exam, and the class project components, which will be detailed in Section 3. The project structure and timeline are summarized in Figure 1.
52
+
53
+ While it appears at first glance that students are presented with a substantial workload, students reported that the workload in this course corresponds to an average course load for a three-credit-point course.
58
+
59
+ The weekly self-assessment quizzes are multiple-choice, multiple dropdown, numerical, and multiple answer style questions that test the students' current understanding of the course material. These quizzes constitute only a small percentage of the total grade but help incentivize students to keep up with the lecture material before and after the midterm exam. There is no final exam in this course as we found that it adds unnecessary stress when the students prepare the deliverables for the project-based components towards the end of the semester.
60
+
61
+ Through the homework assignments, students learn to implement and apply core concepts learned in the lectures. In contrast to the weekly quizzes, the homework assignments are coding-based. Since deep learning code can be very verbose and include a lot of boilerplate code, students are provided with skeleton code where they only need to fill in key parts. We encourage students to reuse lecture and homework code in their class projects.
62
+
63
+ ## 3. Project-based Learning Components
64
+
65
+ Considering that machine learning is primarily a very applied field, we believe that machine learning courses can benefit from project-based learning components. In this regard, we designed a course with the class project as a major component, where the sum of its components constitutes half of the total grade. The individual components consist of (1) a project proposal, (2) a report, (3) an oral presentation, and (4) peer-review. Overall, this process aims to mimic the lifecycle of a real-world ML project from conception to completion.
66
+
67
+ ### 3.1. Forming Project Groups
68
+
69
+ To provide students with sufficient time to work on their project proposals (Figure 1), project groups should ideally be formed as soon as possible, within the first weeks of the semester. An added benefit of creating project groups early is that the project groups can also function as study groups.
70
+
71
+ In the absence of strong evidence in favor of a particular group size, we initially considered group sizes of 2-4 students. In a classroom of 72 students, we preferred sizes of 3-4 to reduce the total number of groups, which improves instructor support and extends the per-group presentation time for the oral in-class presentations at the end of the semester. Furthermore, research into different group sizes (Apedoe et al., 2012) reports teacher impressions suggesting that "Groups of 3 worked best." Upon request, we allow students to select their group partners; remaining slots are assigned randomly.
72
+
73
+ ---
74
+
75
+ ${}^{1}$ https://github.com/anonymous345q234/ecml-teaching-ml/blob/main/lecture-topics.md
+
+ ${}^{2}$ https://github.com/anonymous345q234/ecml-teaching-ml/blob/main/nbs/logreg_from-scratch.ipynb
+
+ ${}^{3}$ https://github.com/anonymous345q234/ecml-teaching-ml/blob/main/nbs/logreg_pytorch.ipynb
80
+
81
+ ---
82
+
83
+ 110
84
+
85
+ ![01963a90-7c27-701b-a11d-e92847581186_2_193_224_1365_604_0.jpg](images/01963a90-7c27-701b-a11d-e92847581186_2_193_224_1365_604_0.jpg)
86
+
87
+ Figure 1. Summary and timeline of the student deliverables throughout the semester.
88
+
89
+ ### 3.2. Project Proposal
90
+
91
+ The project proposal is a short 2-3 page document outlining the project plans. Students receive full points if all sections in the template ${}^{4}$ are completed, because the proposal's main purpose is to provide instructors with a formal outline of the students' plan for feedback. The proposal's due date is set to approximately 2-3 weeks after the project groups are formed such that students have enough time remaining in the semester to work on the project itself.
92
+
93
+ A particular challenge is that students are asked to propose a deep learning project before having been exposed to the breadth of topics covered in class. While this is unavoidable for practical reasons, we recommend sharing interesting and diverse examples and applications of deep learning with students early in the semester to help them choose a topic and define the approximate scope. In addition, we found that providing anonymized project proposals from previous semesters as examples can make this task less daunting.
94
+
95
+ In retrospect, while some groups found the project conceptualization more challenging than others, we never encountered a case where students couldn't find a project they were interested in working on. In addition, projects that students worked on in the past were very diverse. For example, projects included convolutional neural network based self-driving cars, COVID-19 detection, and trading card game classification. We included anonymized example reports in the supplementary materials ${}^{5}$ .
96
+
97
+ ### 3.3. Project Report
98
+
99
+ While we realize that in the real world, papers and paper sections can be flexible and diverse, we aim to create a universal rubric that can be applied fairly to all projects for grading ${}^{6}$ . For this purpose, we adopted the CVPR conference template for the report and defined the following sections: Abstract, Introduction, Related Work, Proposed Method, Experiments, Results and Discussion, Conclusions, and Contributions. We share this template and provide more details about the section contents in the supplementary material ${}^{7}$ along with anonymized example reports from previous semesters ${}^{8}$ .
100
+
101
+ To keep the writing and reviewing efforts realistic and manageable, we require students to stay within 6-8 pages excluding references. In addition, we provide students with the aforementioned report rubric to assist their writing efforts. We recommend that students use Overleaf ${}^{9}$ (free tier) as, in our experience, it provides the best collaborative writing experience for LaTeX papers.
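The prescribed section structure can be summarized as a minimal LaTeX skeleton (an illustrative sketch only; the course's actual template is CVPR-style and linked in the supplementary material):

```latex
\documentclass{article} % the course template uses the CVPR style file instead

\begin{document}

\title{Project Title}
\author{Group Members}
\maketitle

\begin{abstract}
% 6-8 pages total, excluding references
\end{abstract}

\section{Introduction}
\section{Related Work}
\section{Proposed Method}
\section{Experiments}
\section{Results and Discussion}
\section{Conclusions}
\section*{Contributions}

\bibliographystyle{plain}
\bibliography{refs} % references are excluded from the page limit

\end{document}
```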
102
+
103
+ ---
104
+
105
+ ${}^{4}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/proposal-template
+
+ ${}^{5}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/project-examples
+
+ ${}^{6}$ https://github.com/anonymous345q234/ecml-teaching-ml/blob/main/rubrics/report-rubric.md
+
+ ${}^{7}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/report-template
+
+ ${}^{8}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/project-examples
+
+ ${}^{9}$ https://www.overleaf.com/
118
+
119
+ ---
120
+
121
+ ### 3.4. Project Presentation
122
+
123
+ At the end of the semester, students present their projects in class. For practical reasons, the presentation length is capped at 8 minutes, and presentations are split across 3 separate lecture days (8 presentations per lecture day).
124
+
125
+ To further incentivize attendance, the presentation order is randomized (announced at the beginning of each class), and we give bonus points for attendance. We track attendance through voting sheets, where students are asked to vote for their preferred candidates for the Best Oral Presentation, Most Creative Project, and Best Visualizations awards.
126
+
127
+ ### 3.5. Peer Review
128
+
129
+ Students are expected to review two project reports and presentations from other groups. This is a single-blind setting where reviewers remain anonymous to the project group members. We provide code to facilitate this peer review assignment ${}^{10}$ . Given that project groups consist of three students each, 5-6 reviewers were assigned to each project. The reviewer scores were averaged, and outliers were removed at the instructor's discretion. To make the presentation and report assessments as fair as possible, the reviewers received detailed rubrics to follow ${}^{11}$ . (These rubrics were shared several weeks before the report due date such that students could use those as additional guidance during the writing process.) In addition, peer-reviewers received points for each submitted review to incentivize complete and timely submissions. The instructors curated the peer reviews, and constructive feedback was shared with the students alongside the instructors' feedback.
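As an illustration of what such an assignment script might look like (a hypothetical sketch of our own; the actual code linked in footnote 10 may differ), the following assigns every student two other groups' projects to review while guaranteeing that nobody reviews their own group:

```python
import random

def assign_reviews(groups, reviews_per_student=2, seed=42):
    """Map each student to `reviews_per_student` other groups' projects.

    `groups` maps a group name to its list of student names.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    assignments = {}
    for group, members in groups.items():
        candidates = [g for g in groups if g != group]  # exclude own group
        for student in members:
            assignments[student] = rng.sample(candidates, reviews_per_student)
    return assignments

# Hypothetical class: 8 groups of 3 students each (placeholder names).
groups = {f"group{i}": [f"group{i}_student{j}" for j in range(3)]
          for i in range(8)}
assignments = assign_reviews(groups)

# Single-blind sanity check: no student reviews their own group's project.
for group, members in groups.items():
    for student in members:
        assert group not in assignments[student]
```

Note that plain random sampling only balances review counts per project in expectation; a production version would round-robin over projects to guarantee each project the 5-6 reviewers mentioned above.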
130
+
131
+ We found that this peer-review process worked exceptionally well, and students appreciated the experience. In addition to the instructor feedback, the peer reviews create an additional opportunity for students to receive feedback and be exposed to different perspectives, which can help improve their work. A downside of this approach is that feedback could sometimes be overly harsh, for instance, assigning zero points for related work when a report discussed related work in the introduction but omitted the dedicated related-work section (originally part of the report template). For future versions of the rubric, we are considering more flexibility along with bonus points for exceptionally well-done sections.
132
+
133
+ ### 3.6. Switching from In-Person to All-Virtual
134
+
135
+ In 2020 and 2021, the COVID-19 pandemic required switching the course to an all-virtual format. While this was a new experience for both instructors and students, we were able to transition all aspects of the course to a virtual environment without major changes to the course design. In-person lectures were replaced by virtual lectures and recordings to accommodate students in different time zones, and we arranged group assignments such that students in similar time zones worked together. In addition to the collaborative tools students already use in a non-virtual semester (e.g., GitHub for code sharing, Overleaf for collaborative writing, and OneDrive for general file sharing), students used conferencing software for virtual meetings. Similar to the in-class presentations, the student presentations were pre-recorded such that students could view them at their convenience. We found that the possibility of pre-recording their talks helped students overcome nervousness related to speaking in front of an audience, and we are considering offering this as an option in future in-person semesters.
136
+
137
+ ### 3.7. Reception
138
+
139
+ While we have no formal way (for example, via A/B testing) to assess the success of the project-based learning, we are under the impression that it was worthwhile. Generally, the course was very well received (averaging a 4.8/5.0 overall course rating in recent semesters). In anonymous class surveys conducted by the college, students included the following comments: "Project is somewhat challenging but very meaningful;" "One of my favorite courses I've taken in college;" "I enjoyed this course and I really enjoyed the final project."
140
+
141
+ Moreover, from personal communications with the instructors, students mentioned that the class project was a helpful resume component when interviewing for internships or jobs.
142
+
143
+ ## 4. Conclusion
144
+
145
+ This paper has presented a deep learning course that includes substantial project-based learning components and shared the templates and rubrics we created and refined in previous semesters. Student feedback in recent semesters has been uniformly supportive of the class project. We noticed that most students were very motivated to research deep learning topics beyond the scope of this course. While project-based learning components may create extra work for the instructors, we think seeing the creative outcomes is very rewarding. Moreover, project-based learning provides additional opportunities for meaningful student collaborations and interactions.
146
+
147
+ ---
148
+
149
+ ${}^{10}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/review-assignment
+
+ ${}^{11}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/rubrics
152
+
153
+ ---
154
+
155
+ ## References
156
+
157
+ Apedoe, X. S., Ellefson, M. R., and Schunn, C. D. Learning together while designing: Does group size make a difference? Journal of Science Education and Technology, 21 (1):83-94, 2012.
158
+
159
+ Headden, S. and McKay, S. Motivation matters: How new research can help teachers boost student engagement. Carnegie Foundation for the Advancement of Teaching, 2015.
160
+
161
+ Osmulski, R. Meta Learning: How To Learn Deep Learning And Thrive In The Digital World. Gumroad, 2021.
162
+
163
+ Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32:8026-8037, 2019.
164
+
165
+ Schiendorfer, A., Gajek, C., and Reif, W. Turning software engineers into machine learning engineers. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, pp. 36- 41. PMLR, 2021.
166
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/yFPqbprG2Qb/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,137 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § DEEPER LEARNING BY DOING: INTEGRATING HANDS-ON RESEARCH PROJECTS INTO A MACHINE LEARNING COURSE
2
+
3
+ § ANONYMOUS AUTHORS ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ Machine learning has seen a vast increase of interest in recent years, along with an abundance of learning resources. While conventional lectures provide students with important information and knowledge, we also believe that additional project-based learning components can motivate students to engage in topics more deeply. In addition to incorporating project-based learning in our courses, we aim to develop project-based learning components aligned with real-world tasks, including experimental design and execution, report writing, oral presentation, and peer-reviewing. This paper describes the organization of our project-based machine learning courses with a particular emphasis on the class project components and shares our resources with instructors who would like to include similar elements in their courses.
8
+
9
+ § 1. MOTIVATION
10
+
11
+ Interests in machine learning and deep learning have been increasing in recent years. Similarly, the number of learning resources, including textbooks, blogs, online courses, and video tutorials, is growing rapidly as well. This is a great development, and one might say that getting into machine learning has never been easier.
12
+
13
+ However, we believe that while the process of absorbing knowledge from various resources is necessary, it is not sufficient for becoming a successful machine learning researcher or practitioner. Anecdotal evidence from online learning communities suggests that adopting an experimental mindset can accelerate learning (Osmulski, 2021). Moreover, analyses by Headden and McKay assert that "a sense of control over the work" is an essential aspect for motivating and engaging students in learning (Headden & McKay, 2015). How can we foster such an experimental mindset and engage students? While we cannot answer this definitively, in this paper, we describe our deep learning course featuring project-based learning components, where students work on original questions and research topics that interest them.
16
+
17
+ Three years ago, we began designing machine learning and deep learning courses with substantial student project components, including an original research proposal, conference paper-style project report, oral class presentation, and paper peer-review. We have adopted and refined this approach throughout teaching six machine learning and deep learning courses. While similar project-based elements were used in different machine and deep learning courses, this paper will only focus on the latest deep learning course.
18
+
19
+ Based on anonymous surveys, the project-based learning components were, without exceptions, very well received by the students. In addition, we found that it was effective in fostering interaction and collaboration among students and offering students opportunities to practice essential communication skills. This paper outlines our latest project-based course format as well as some of the lessons learned.
20
+
21
+ § 2. OVERALL COURSE DESIGN
22
+
23
+ This section briefly outlines the overall course and lecture design to provide the broader context for the project-based learning components described in more detail in Section 3.
24
+
25
+ § 2.1. TARGET AUDIENCE
26
+
27
+ The course is listed as an elective course for statistics and data science majors and is thus aimed at senior undergraduate students. Programming and scientific computing experience is highly recommended, but prior machine learning knowledge is not required.
28
+
29
+ § 2.2. LECTURE TOPICS
30
+
31
+ Intended as an introductory course, this class introduces students to the core concepts of deep learning via face-to-face lectures over the course of 15 weeks, covering all major areas from single-layer neural networks to transformers. We omit a detailed lecture topic list for brevity; interested readers can find it in our supplementary material ${}^{1}$ .
32
+
33
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
34
+
35
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
36
+
37
+ § 2.3. IMPLEMENTING ALGORITHMS FROM SCRATCH AND USING LIBRARIES
38
+
39
+ The general deep learning topics and concepts are taught in a conventional lecture format, using a tablet to augment presentation slides with rich annotations. In addition, we prepare and discuss full code examples as demonstrations for each topic.
40
+
41
+ We agree with Schiendorfer et al. (2021) that it can be beneficial to expose students to "from scratch" implementations in addition to teaching how to use established libraries. These implementations have pedagogical value since they serve as an additional "language" (alongside drawings and mathematics) for describing algorithms. In addition, being comfortable coding algorithms from scratch can help students implement and experiment with their own ideas more readily.
42
+
43
+ However, implementing algorithms from scratch is both inefficient and error-prone. Hence, we believe that it is in the students' best interest to balance from-scratch implementations with the use of existing libraries. For example, after presenting students with the essential conceptual and mathematical details, we teach students how to implement a logistic regression classifier trained with stochastic gradient descent ${}^{2}$ . Then, we show students how the same can be achieved with PyTorch (Paszke et al., 2019) and its automatic differentiation capabilities ${}^{3}$ . We think that empowering students to implement algorithms from scratch, while also showing them more reliable and convenient tools, both motivates and demystifies the latter.
44
+
45
+ § 2.4. STUDENT WORK AND EVALUATION
46
+
47
+ Besides attending the lectures, students are presented with weekly quizzes, homework (approximately every two weeks), a midterm exam, and the class project components, which will be detailed in Section 3. The project structure and timeline are summarized in Figure 1.
48
+
49
+ While it appears at first glance that students are presented with a substantial workload, students reported that the workload in this course corresponds to an average course load for a three-credit-point course.
54
+
55
+ The weekly self-assessment quizzes are multiple-choice, multiple dropdown, numerical, and multiple answer style questions that test the students' current understanding of the course material. These quizzes constitute only a small percentage of the total grade but help incentivize students to keep up with the lecture material before and after the midterm exam. There is no final exam in this course as we found that it adds unnecessary stress when the students prepare the deliverables for the project-based components towards the end of the semester.
56
+
57
+ Through the homework assignments, students learn to implement and apply core concepts learned in the lectures. In contrast to the weekly quizzes, the homework assignments are coding-based. Since deep learning code can be very verbose and include a lot of boilerplate code, students are provided with skeleton code where they only need to fill in key parts. We encourage students to reuse lecture and homework code in their class projects.
58
+
59
+ § 3. PROJECT-BASED LEARNING COMPONENTS
60
+
61
+ Considering that machine learning is primarily a very applied field, we believe that machine learning courses can benefit from project-based learning components. In this regard, we designed a course with the class project as a major component, where the sum of its components constitutes half of the total grade. The individual components consist of (1) a project proposal, (2) a report, (3) an oral presentation, and (4) peer-review. Overall, this process aims to mimic the lifecycle of a real-world ML project from conception to completion.
62
+
63
+ § 3.1. FORMING PROJECT GROUPS
64
+
65
+ To provide students with sufficient time to work on their project proposals (Figure 1), project groups should ideally be formed as soon as possible, within the first weeks of the semester. An added benefit of creating project groups early is that the project groups can also function as study groups.
66
+
67
+ In the absence of strong evidence in favor of a particular group size, we initially considered group sizes of 2-4 students. In a classroom of 72 students, we preferred sizes of 3-4 to reduce the total number of groups, which improves instructor support and extends the per-group presentation time for the oral in-class presentations at the end of the semester. Furthermore, research into different group sizes (Apedoe et al., 2012) reports teacher impressions suggesting that "Groups of 3 worked best." Upon request, we allow students to select their group partners; remaining slots are assigned randomly.
68
+
69
+ ${}^{1}$ https://github.com/anonymous345q234/ecml-teaching-ml/blob/main/lecture-topics.md
+
+ ${}^{2}$ https://github.com/anonymous345q234/ecml-teaching-ml/blob/main/nbs/logreg_from-scratch.ipynb
+
+ ${}^{3}$ https://github.com/anonymous345q234/ecml-teaching-ml/blob/main/nbs/logreg_pytorch.ipynb
74
+
75
+ 110
76
+
77
78
+
79
+ Figure 1. Summary and timeline of the student deliverables throughout the semester.
80
+
81
+ § 3.2. PROJECT PROPOSAL
82
+
83
+ The project proposal is a short 2-3 page document outlining the project plans. Students receive full points if all sections in the template ${}^{4}$ are completed, because the proposal's main purpose is to provide instructors with a formal outline of the students' plan for feedback. The proposal's due date is set to approximately 2-3 weeks after the project groups are formed such that students have enough time remaining in the semester to work on the project itself.
84
+
85
+ A particular challenge is that students are asked to propose a deep learning project before having been exposed to the breadth of topics covered in class. While this is unavoidable for practical reasons, we recommend sharing interesting and diverse examples and applications of deep learning with students early in the semester to help them choose a topic and define the approximate scope. In addition, we found that providing anonymized project proposals from previous semesters as examples can make this task less daunting.
86
+
87
+ In retrospect, while some groups found the project conceptualization more challenging than others, we never encountered a case where students couldn't find a project they were interested in working on. In addition, projects that students worked on in the past were very diverse. For example, projects included convolutional neural network based self-driving cars, COVID-19 detection, and trading card game classification. We included anonymized example reports in the supplementary materials ${}^{5}$ .
88
+
89
+ § 3.3. PROJECT REPORT
90
+
91
+ While we realize that in the real world, papers and paper sections can be flexible and diverse, we aim to create a universal rubric that can be applied fairly to all projects for grading ${}^{6}$ . For this purpose, we adopted the CVPR conference template for the report and defined the following sections: Abstract, Introduction, Related Work, Proposed Method, Experiments, Results and Discussion, Conclusions, and Contributions. We share this template and provide more details about the section contents in the supplementary material ${}^{7}$ along with anonymized example reports from previous semesters ${}^{8}$ .
92
+
93
+ To keep the writing and reviewing efforts realistic and manageable, we require students to stay within 6-8 pages excluding references. In addition, we provide students with the aforementioned report rubric to assist their writing efforts. We recommend that students use Overleaf ${}^{9}$ (free tier) as, in our experience, it provides the best collaborative writing experience for LaTeX papers.
94
+
95
+ ${}^{4}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/proposal-template
+
+ ${}^{5}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/project-examples
+
+ ${}^{6}$ https://github.com/anonymous345q234/ecml-teaching-ml/blob/main/rubrics/report-rubric.md
+
+ ${}^{7}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/report-template
+
+ ${}^{8}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/project-examples
+
+ ${}^{9}$ https://www.overleaf.com/
108
+
109
+ § 3.4. PROJECT PRESENTATION
110
+
111
+ At the end of the semester, students present their projects in class. For practical reasons, the presentation length is capped at 8 minutes, and presentations are split across 3 separate lecture days (8 presentations per lecture day).
112
+
113
+ To further incentivize attendance, the presentation order is randomized (announced at the beginning of each class), and we give bonus points for attendance. We track attendance through voting sheets, where students are asked to vote for their preferred candidates for the Best Oral Presentation, Most Creative Project, and Best Visualizations awards.
114
+
115
+ § 3.5. PEER REVIEW
116
+
117
+ Students are expected to review two project reports and presentations from other groups. This is a single-blind setting where reviewers remain anonymous to the project group members. We provide code to facilitate this peer review assignment ${}^{10}$ . Given that project groups consist of three students each, 5-6 reviewers were assigned to each project. The reviewer scores were averaged, and outliers were removed at the instructor's discretion. To make the presentation and report assessments as fair as possible, the reviewers received detailed rubrics to follow ${}^{11}$ . (These rubrics were shared several weeks before the report due date such that students could use those as additional guidance during the writing process.) In addition, peer-reviewers received points for each submitted review to incentivize complete and timely submissions. The instructors curated the peer reviews, and constructive feedback was shared with the students alongside the instructors' feedback.
118
+
119
+ We found that this peer-review process worked exceptionally well, and students appreciated the experience. In addition to the instructor feedback, the peer reviews create an additional opportunity for students to receive feedback and be exposed to different perspectives, which can help improve their work. A downside of this approach is that feedback could sometimes be overly harsh, for instance, assigning zero points for related work when a report discussed related work in the introduction but omitted the dedicated related-work section (originally part of the report template). For future versions of the rubric, we are considering more flexibility along with bonus points for exceptionally well-done sections.
120
+
121
+ § 3.6. SWITCHING FROM IN-PERSON TO ALL-VIRTUAL
122
+
123
+ In 2020 and 2021, the COVID-19 pandemic required switching the course to an all-virtual format. While this was a new experience for both instructors and students, we were able to transition all aspects of the course to a virtual environment without major changes to the course design. In-person lectures were replaced by virtual lectures and recordings to accommodate students in different time zones, and we arranged group assignments such that students in similar time zones worked together. In addition to the collaborative tools students already use in a non-virtual semester (e.g., GitHub for code sharing, Overleaf for collaborative writing, and OneDrive for general file sharing), students used conferencing software for virtual meetings. Similar to the in-class presentations, the student presentations were pre-recorded such that students could view them at their convenience. We found that the possibility of pre-recording their talks helped students overcome nervousness related to speaking in front of an audience, and we are considering offering this as an option in future in-person semesters.
124
+
125
+ § 3.7. RECEPTION
126
+
127
+ While we have no formal way (for example, via A/B testing) to assess the success of the project-based learning, we are under the impression that it was worthwhile. Generally, the course was very well received (averaging a 4.8/5.0 overall course rating in recent semesters). In anonymous class surveys conducted by the college, students included the following comments: "Project is somewhat challenging but very meaningful;" "One of my favorite courses I've taken in college;" "I enjoyed this course and I really enjoyed the final project."
128
+
129
+ Moreover, from personal communications with the instructors, students mentioned that the class project was a helpful resume component when interviewing for internships or jobs.
130
+
131
+ § 4. CONCLUSION
132
+
133
+ This paper has presented a deep learning course that includes substantial project-based learning components and shared the templates and rubrics we created and refined in previous semesters. Student feedback in recent semesters has been uniformly supportive of the class project. We noticed that most students were very motivated to research deep learning topics beyond the scope of this course. While project-based learning components may create extra work for the instructors, we think seeing the creative outcomes is very rewarding. Moreover, project-based learning provides additional opportunities for meaningful student collaborations and interactions.
134
+
135
+ ${}^{10}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/review-assignment
136
+
137
+ ${}^{11}$ https://github.com/anonymous345q234/ecml-teaching-ml/tree/main/rubrics
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/z-N7JMHjLO/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,131 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Teaching Machine Learning for the Physical Sciences: A summary of lessons learned and challenges
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ ## Abstract
6
+
7
+ This paper summarizes some challenges encountered and best practices established in several years of teaching Machine Learning for the Physical Sciences at the undergraduate and graduate level. I discuss motivations for teaching ML to Physicists, desirable properties of pedagogical materials such as accessibility, relevance, and likeness to real-world research problems, and give examples of components of teaching units.
8
+
9
+ ## 1. ML x Physical Sciences
10
+
11
+ Machine learning methods have become ubiquitous in many data-intensive disciplines, including, of course, Physics and Astronomy. The Physical sciences offer a rich landscape of observational and simulated data that are suitable to be analyzed using machine learning and deep learning tools. Identifying particles produced in collision events at the Large Hadron Collider, processing astronomical images from large surveys, identifying transient phenomena in real time, creating fast approximated solutions for lattice theories, or building "emulators" for expensive cosmological simulations are just some of the uses that have become popular in the last decade (see e.g. Carleo et al. 2019 for a review).
12
+
13
+ It follows as a logical consequence that the foundations of machine learning methods should be taught as part of the standard Physics curriculum. In part, this is because they are bound to become standard tools for Physics research. But even more importantly, they are a great pedagogical tool to stimulate critical thinking and to build transferable skills that create better job prospects for Physics graduates, by leveraging the enormous growth of jobs in the Data Science area that are accessible to those with a rigorous scientific background and strong computational skills.
14
+
15
+ ## 2. Teaching ML to Physicists
16
+
17
+ There are many great learning resources that are either low-cost or free. These include excellent books with free online Jupyter notebooks (Géron, 2019; VanderPlas, 2016), courses on online platforms like Coursera or Udemy, and chances to practice on fairly complex data sets such as those hosted by Kaggle. However, the abundance of choices can be overwhelming for beginner practitioners, who would have to "mix-and-match" different resources to create a curriculum; more importantly, resources that are tailored to the process of scientific research, and, in particular, to the physical sciences, are still scarce; (Ivezić et al., 2014) is a happy exception, focused on Astronomy. Furthermore, if we want machine learning to become a standard part of the Physics curriculum, we need to provide resources for instructors: many of them won't have been trained in this subject during their own course of study, so that lowering the barrier for teaching is as important as lowering the barrier for learning. Five years ago, I started creating materials for a "ML for Physics and Astronomy" course, almost from scratch. Since then, I have taught this class several times, at the undergraduate level to STEM majors, and at the graduate level to Physicists and Astronomers, and I have written the first draft of a textbook on the same subject. Here, I'd like to share some of the practices I have found to be useful and some of the challenges I have identified during this time.
18
+
19
+ ### 2.1. Needs
20
+
21
+ These are some desirable qualities of materials used to teach Machine Learning to Physics (or more in general, STEM) students:
22
+
23
+ Accessibility: While most STEM majors are familiar with linear algebra, calculus, and statistics, there is great variability in the level of mathematics they are able or willing to handle. At the undergraduate level, I advocate for keeping the complex mathematics at a minimum, and focus on the conceptual aspects of how different algorithms work. My experience is that this approach is the most inclusive and still provides a good foundation to those who would like to explore a topic in higher detail.
24
+
25
+ ---
26
+
27
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
28
+
29
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
30
+
31
+ ---
32
+
33
+ ![01963a92-c59c-74ec-927b-0fb3a667678c_1_287_184_1156_811_0.jpg](images/01963a92-c59c-74ec-927b-0fb3a667678c_1_287_184_1156_811_0.jpg)
34
+
35
+ Figure 1. An excerpt from a lecture notebook, including the "target performance" from (Zhou et al., 2019), and an example task that students may be asked to complete during class.
36
+
37
+ Relevance: This is possibly the greatest challenge - finding problems and data sets that are relevant to scientific research is hard! Most of the "introductory" data sets (MNIST, Boston housing data, Iris...) are too simple, and working with those does not resemble the typical challenges of a research problem. Many more advanced ones (e.g., those found on Kaggle, which include several Physics/Astronomy challenges) are very complex, and require a vast amount of background information. Striking the right balance of finding data and problems that are beginner-friendly, while at the same time presenting some nontrivial features and helping develop intuition, is difficult.
38
+
39
+ Real-world likeness: Experienced machine learning practitioners would often say that "prepping" the data is one of the most time-consuming and important tasks in a ML project. I wholeheartedly agree. Many existing data sets for beginner practitioners are very "curated", and this limits their usefulness in demonstrating best practices in very important aspects such as data exploration, data cleaning, transformations, imputing strategies, and feature engineering. Additionally, themes that are of fundamental importance in research, such as choosing or building an appropriate evaluation metric or estimating uncertainties, are often absent from pedagogical material.
40
+
41
+ How can we put together materials that have all these desirable properties? In short, it takes a lot of work, and it takes a village. As I was assembling resources, I was grateful to count on the help of a very supportive Physics/Astronomy community.
42
+
43
+ An approach that I have found to be promising is to start with a literature review, and find papers that apply a certain method to data of interest. Some of the most successful pedagogical examples I have used propose to match, or improve on, the performance of a published paper. This is appealing to students because it communicates that they are doing quality work (or at least, that the ML aspect is at a publication-worthy level). For several exercises in my upcoming book, I have reached out to authors of a paper, and asked for access to data, if necessary, and for some introductory guidelines to create a problem-solving pipeline. In class, I show students what our final goal (performance-wise) would be, and try to build the pipeline together with them. I have found that if I propose a solution, it's always helpful to riddle it with poor choices; for example, "forgetting" to normalize the data when needed, including noisy or very correlated features, using accuracy on imbalanced data sets... The process of improving the code helps students retain a stronger memory of these potential pitfalls. Colleagues have indicated that they use a similar approach successfully (S. Caron, private comm.).
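As a concrete illustration of the "wrong metric" pitfall mentioned above, here is a minimal sketch (not from the course materials; the 95/5 imbalanced data set and model choices are invented for illustration). A classifier that always predicts the majority class looks excellent on plain accuracy but is exposed by balanced accuracy:

```python
# Sketch of the accuracy-on-imbalanced-data pitfall, on a synthetic data set.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Roughly 95% of samples belong to class 0, 5% to class 1.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# The dummy classifier ignores the features and always predicts the majority class.
for model in (DummyClassifier(strategy="most_frequent"), LogisticRegression(max_iter=1000)):
    model.fit(X_tr, y_tr)
    y_pred = model.predict(X_te)
    print(f"{type(model).__name__}: accuracy={accuracy_score(y_te, y_pred):.2f}, "
          f"balanced accuracy={balanced_accuracy_score(y_te, y_pred):.2f}")
```

The dummy model scores about 0.95 on plain accuracy while its balanced accuracy is 0.5 (chance level), which makes the pitfall immediately visible to students.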
44
+
45
+ Table 1. Example review/discussion questions for a teaching unit on Bagging/Boosted methods.
46
+
47
+ <table><tr><td>Question</td><td>Answers (correct in boldface)</td></tr><tr><td>Attribute each of these statements to bagging methods or boosting methods:</td><td>(a) They tend to lower the variance: <b>bagging</b> (b) The final estimator predicts a weighted average of all the components: <b>boosting</b> (c) Base estimators are independent of each other: <b>bagging</b> (d) Base estimators are built on the basis of previous stages' results: <b>boosting</b></td></tr><tr><td>How is the feature importance calculated in tree-based methods?</td><td>(a) Reporting the correlation coefficient between each feature and the target in the final ensemble <b>(b) Calculating the mean decrease of impurity across splits that use that feature</b> (c) Calculating the average of the correlation coefficient between each feature and the target in a random selection of trees (d) Listing the features in alphabetical order</td></tr><tr><td>Which of these parameters, if decreased, is most likely to reduce the gap between training and test scores?</td><td>(a) max_depth (b) n_estimators (c) min_samples_split (d) min_samples_leaf</td></tr><tr><td>Which of these estimators is most likely to work well even with a very weak base learner?</td><td>(a) Random Forests (b) Extremely Random Trees (c) AdaBoost (d) GBMs</td></tr></table>
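The regularization effect probed by the third review question can be checked empirically in a few lines. This sketch is not from the course materials (the synthetic regression data set and parameter values are invented for illustration); it shows that shrinking `max_depth` constrains a random forest and narrows the train/test score gap:

```python
# Empirical check: a smaller max_depth regularizes a random forest,
# shrinking the gap between training and test R^2 scores.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1500, n_features=10, noise=20.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for depth in (None, 8, 3):  # None lets the trees grow fully
    rf = RandomForestRegressor(n_estimators=100, max_depth=depth, random_state=1)
    rf.fit(X_tr, y_tr)
    gap = rf.score(X_tr, y_tr) - rf.score(X_te, y_te)
    print(f"max_depth={depth}: train/test R^2 gap = {gap:.3f}")
```

Running variations of this loop (e.g., sweeping `n_estimators` or `min_samples_leaf` instead) is itself a useful in-class mini-task, since it lets students discover which knobs actually control overfitting.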
48
+
49
+ ## 3. Elements of a teaching unit
50
+
51
+ The strategies I have developed are especially tailored to introductory Machine Learning courses at the advanced undergraduate/early graduate level in "official", grade-bearing classes. I find it helpful to integrate some traditional course-work elements (for example, slides and review questions) with more hands-on content, such as lecture notebooks and programming worksheet assignments. Often, the two sets "talk" to each other; for example, homework might include completing parts of a lecture notebook. For more advanced practitioners or more informal settings, such as summer schools, I usually skip the assignments and try to make each unit self-contained.
52
+
53
+ My approach is to present each machine learning theme (typically, a new algorithm or a discipline-wide concept, such as cross-validation or hyperparameter optimization) in parallel with a Physics or Astronomy problem (examples I have used recently include identifying potentially habitable planets, classifying the products of collision events in particle physics data, or creating a model for the rise of water in different US stations). The example materials shown here are excerpts from a teaching unit where we discuss and use ensemble methods (in particular, bagging and boosting algorithms) to solve the problem of determining distance (parameterized by redshift, $z$) to faraway galaxies from the shape of their spectral energy emission.
60
+
61
+ The elements of each teaching unit are the following:
62
+
63
+
64
+
65
+ A Power Point (or equivalent) presentation, with some blackboard discussion, which introduces the ML theme from a theoretical perspective, as well as the data set used;
66
+
67
+ A Jupyter notebook lecture, with some blank parts "to fill". The notebook shows the process of problem-solving - for example, setting up and optimizing the new ML method, or establishing good practices. The extent of the "fillable" parts varies according to the class and the time availability. When possible, I like to do mini assignments (10-15 minutes), where students are asked to complete a single task - for example, do some exploratory data analysis, find a "bug" in my code, or improve on a benchmark performance. An example, including the "target performance" from a published paper (Zhou et al., 2019) and a "mini-task", is shown in Fig. 1.
68
+
69
+ Quizzes/review questions; these are ungraded and usually a student favorite. Most of them are in multiple-choice form and are meant to reinforce the theoretical foundations, for example with questions on the suitability of a given method to a given problem, or the parameters of a specific method. In a classroom setting, these usually work well as think/pair/share material; I have also organized them as group competitions for extra credit. They also lend themselves to be used in online settings with interactive lecture software like Sli.do, Mentimeter, or similar. A small selection of review questions for this teaching unit is shown in Table 1.
70
+
71
+ Reading material; I include traditional (e.g. book chapters, journal articles) and less traditional (e.g. blog posts, YouTube videos) resources for every unit. Learners have different learning preferences/styles and I feel that it is important for them to have several options. Some of the sources I recommend for this unit are (Louppe, 2014) and the relevant part of Chapter 5 of (VanderPlas, 2016), but also this YouTube video, this blog post and this one, and this notebook.
72
+
73
+ Table 2. Example assignment for the teaching unit presented in the text.
74
+
75
+ <table><tr><td>Description</td><td>Task</td></tr><tr><td>Start from the full data set. We saw in the lecture notebook that the performance changes a lot once the selection criteria are applied.</td><td>- Figure out which of the data cleaning cuts we made was the most significant in terms of improving the scores of the final model.</td></tr><tr><td>Now choose one reference algorithm among the ones we saw (RF, AdaBoost, GBM), and refer to the optimized model (no need to re-run the grid search). Let's call this Model 1. Use the data set selection with 6,307 objects and 6 features.</td><td>- Generate predictions using CV and plot them in a histogram, together with the true values. Which distribution is narrower? Explain why. - Optimize (using a grid search for the parameters you deem to be most relevant) the extremely random tree algorithm and compute the outlier fraction and NMAD. How does it compare to Model 1? Comment not just on the scoring parameter(s), but also on variance/bias. Which one would you pick? - In the paper that we used as a reference (Zhou et al., 2019), the authors actually use colors, not magnitudes, as features. Find in the paper the exact list of features, and generate them. - Armed with your new set of features, use an algorithm of your choice to match or beat the performance quoted in the paper (NMAD: 0.0174; OLF: 4.54%).</td></tr></table>
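The two scoring metrics the assignment asks for can be computed with a few lines of NumPy. The helpers below are a sketch following one common convention in the photo-z literature (normalized residuals, NMAD = 1.48 × median absolute deviation, and an assumed outlier threshold of 0.15); exact definitions vary across papers, so they should be checked against the reference paper before use:

```python
import numpy as np

def delta_z(z_true, z_pred):
    """Normalized residuals (z_pred - z_true) / (1 + z_true)."""
    z_true, z_pred = np.asarray(z_true, dtype=float), np.asarray(z_pred, dtype=float)
    return (z_pred - z_true) / (1.0 + z_true)

def nmad(z_true, z_pred):
    """Normalized median absolute deviation of the normalized residuals."""
    dz = delta_z(z_true, z_pred)
    return 1.48 * np.median(np.abs(dz - np.median(dz)))

def outlier_fraction(z_true, z_pred, threshold=0.15):
    """Fraction of objects whose |delta z| exceeds the (assumed) threshold."""
    return float(np.mean(np.abs(delta_z(z_true, z_pred)) > threshold))
```

With helpers like these, students can score any of the ensemble models from the lecture notebook (e.g., via `nmad(y_test, model.predict(X_test))`) and compare directly against the published targets.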
76
+
77
+ Homework assignments; these vary according to the class, but usually they include programming exercises that complement the material in the lecture notebook, and some non-coding tasks. The latter could be, e.g., writing pseudocode for a given algorithm, commenting code line-by-line, or reflecting on how to approach a problematic issue such as a severe imbalance, or missing data. An example assignment for this unit is shown in Table 2.
78
+
79
+ In a traditional course, it is useful to have a final open-ended project that resembles a real-world application; importantly, this helps students create a "portfolio" item that can be added to a resume or brought up in job interviews. Students are encouraged to use Git and GitHub for their projects. In undergraduate settings, I have recently settled on a two-step research problem. Step 1 sets a common goal for all students (i.e., exploring and cleaning data, training and optimizing a model using one of the algorithms already discussed). In Step 2, I propose possible "tracks" that can be worked on by small groups of students; students choose the "track". A "track" could consist of learning about and deploying a new algorithm, experimenting with feature engineering, analyzing the dependence on signal-to-noise ratio, and so on. In graduate-level classes, students are asked to come up with their own project and data, and the products include a project plan and a final report, ideally in the form of a research paper.
84
+
85
+ ## 4. Conclusions
86
+
87
+ My conclusions are the following:
88
+
89
+ - It is important to include ML techniques in Physics (and more generally, STEM) curricula, as they are useful for both academic and non-academic careers;
90
+
91
+ - Using a mix of techniques, from traditional lectures to hands-on programming exercises, and recommending varied learning resources helps meet the needs of different learners;
92
+
93
+ - Practical projects are important, because they teach students to be comfortable with open-ended questions, and help them build a portfolio; students can be encouraged to post them on platforms like GitHub;
94
+
95
+ - Preparing good sets of materials is important not just for students, who tend to be resourceful and resilient, but also to widen the pool of instructors who can teach this subject;
96
+
97
+ - Persisting challenges include: 1. Improving access to data sets and problems at the right level of complexity and size; 2. Finding effective ways to teach across-discipline concepts, such as uncertainty estimation or interpretability, that are important but don't fit in the algorithm/problem mold, and 3. Integrating the conversation around ML topics with content taught e.g. in Statistics or Computational Methods courses.
98
+
99
+ ## References
100
+
101
+ Carleo, G., Cirac, I., Cranmer, K., Daudet, L., Schuld, M., Tishby, N., Vogt-Maranto, L., and Zdeborová, L. Machine learning and the physical sciences. Reviews of Modern Physics, 91(4):045002, 2019.
102
+
103
+ Géron, A. Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: Concepts, tools, and techniques to build intelligent systems. O'Reilly Media, 2019.
104
+
105
+ Ivezić, Z., Connolly, A. J., VanderPlas, J. T., and Gray, A. Statistics, data mining, and machine learning in astronomy: a practical Python guide for the analysis of survey data, volume 1. Princeton University Press, 2014.
106
+
107
+ Louppe, G. Understanding random forests: From theory to practice. arXiv preprint arXiv:1407.7502, 2014.
108
+
109
+ VanderPlas, J. Python Data Science Handbook: Essential Tools for Working with Data. O'Reilly Media, Inc., 1st edition, 2016. ISBN 1491912057.
110
+
111
+ Zhou, R., Cooper, M. C., Newman, J. A., Ashby, M. L., Aird, J., Conselice, C. J., Davis, M., Dutton, A. A., Faber, S., Fang, J. J., et al. Deep ugrizY imaging and DEEP2/3 spectroscopy: a photometric redshift testbed for LSST and public release of data from the DEEP3 Galaxy Redshift Survey. Monthly Notices of the Royal Astronomical Society, 488(4):4565-4584, 2019.
112
+
papers/ECMLPKDD/ECMLPKDD 2021/ECMLPKDD 2021 Workshop/ECMLPKDD 2021 Workshop TeachML/z-N7JMHjLO/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,119 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § TEACHING MACHINE LEARNING FOR THE PHYSICAL SCIENCES: A SUMMARY OF LESSONS LEARNED AND CHALLENGES
2
+
3
+ Anonymous Authors ${}^{1}$
4
+
5
+ § ABSTRACT
6
+
7
+ This paper summarizes some challenges encountered and best practices established in several years of teaching Machine Learning for the Physical Sciences at the undergraduate and graduate level. I discuss motivations for teaching ML to Physicists, desirable properties of pedagogical materials such as accessibility, relevance, and likeness to real-world research problems, and give examples of components of teaching units.
8
+
9
+ § 1. ML X PHYSICAL SCIENCES
10
+
11
+ Machine learning methods have become ubiquitous in many data-intensive disciplines, including, of course, Physics and Astronomy. The Physical sciences offer a rich landscape of observational and simulated data that are suitable to be analyzed using machine learning and deep learning tools. Identifying particles produced in collision events at the Large Hadron Collider, processing astronomical images from large surveys, identifying transient phenomena in real time, creating fast approximated solutions for lattice theories, or building "emulators" for expensive cosmological simulations are just some of the uses that have become popular in the last decade (see e.g. Carleo et al. 2019 for a review).
12
+
13
+ It follows as a logical consequence that the foundations of machine learning methods should be taught as part of the standard Physics curriculum. In part, this is because they are bound to become standard tools for Physics research. But even more importantly, they are a great pedagogical tool to stimulate critical thinking and to build transferable skills that create better job prospects for Physics graduates, by leveraging the enormous growth of jobs in the Data Science area that are accessible to those with a rigorous scientific background and strong computational skills.
14
+
15
+ § 2. TEACHING ML TO PHYSICISTS
16
+
17
+ There are many great learning resources that are either low-cost or free. These include excellent books with free online Jupyter notebooks (Géron, 2019; VanderPlas, 2016), courses on online platforms like Coursera or Udemy, and chances to practice on fairly complex data sets such as those hosted by Kaggle. However, the abundance of choices can be overwhelming for beginner practitioners, who would have to "mix-and-match" different resources to create a curriculum; more importantly, resources that are tailored to the process of scientific research, and, in particular, to the physical sciences, are still scarce; (Ivezić et al., 2014) is a happy exception, focused on Astronomy. Furthermore, if we want machine learning to become a standard part of the Physics curriculum, we need to provide resources for instructors: many of them won't have been trained in this subject during their own course of study, so that lowering the barrier for teaching is as important as lowering the barrier for learning. Five years ago, I started creating materials for a "ML for Physics and Astronomy" course, almost from scratch. Since then, I have taught this class several times, at the undergraduate level to STEM majors, and at the graduate level to Physicists and Astronomers, and I have written the first draft of a textbook on the same subject. Here, I'd like to share some of the practices I have found to be useful and some of the challenges I have identified during this time.
18
+
19
+ § 2.1. NEEDS
20
+
21
+ These are some desirable qualities of materials used to teach Machine Learning to Physics (or more in general, STEM) students:
22
+
23
+ Accessibility: While most STEM majors are familiar with linear algebra, calculus, and statistics, there is great variability in the level of mathematics they are able or willing to handle. At the undergraduate level, I advocate for keeping the complex mathematics at a minimum, and focus on the conceptual aspects of how different algorithms work. My experience is that this approach is the most inclusive and still provides a good foundation to those who would like to explore a topic in higher detail.
24
+
25
+ ${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
26
+
27
+ Preliminary work. Under review by the Teaching Machine Learning Workshop at ECML 2021. Do not distribute.
28
+
29
+ < g r a p h i c s >
30
+
31
+ Figure 1. An excerpt from a lecture notebook, including the "target performance" from (Zhou et al., 2019), and an example task that students may be asked to complete during class.
32
+
33
+ Relevance: This is possibly the greatest challenge - finding problems and data sets that are relevant to scientific research is hard! Most of the "introductory" data sets (MNIST, Boston housing data, Iris...) are too simple, and working with those does not resemble the typical challenges of a research problem. Many more advanced ones (e.g., those found on Kaggle, which include several Physics/Astronomy challenges) are very complex, and require a vast amount of background information. Striking the right balance of finding data and problems that are beginner-friendly, while at the same time presenting some nontrivial features and helping develop intuition, is difficult.
34
+
35
+ Real-world likeness: Experienced machine learning practitioners would often say that "prepping" the data is one of the most time-consuming and important tasks in a ML project. I wholeheartedly agree. Many existing data sets for beginner practitioners are very "curated", and this limits their usefulness in demonstrating best practices in very important aspects such as data exploration, data cleaning, transformations, imputing strategies, and feature engineering. Additionally, themes that are of fundamental importance in research, such as choosing or building an appropriate evaluation metric or estimating uncertainties, are often absent from pedagogical material.
36
+
37
+ How can we put together materials that have all these desirable properties? In short, it takes a lot of work, and it takes a village. As I was assembling resources, I was grateful to count on the help of a very supportive Physics/Astronomy community.
38
+
39
+ An approach that I have found to be promising is to start with a literature review, and find papers that apply a certain method to data of interest. Some of the most successful pedagogical examples I have used propose to match, or improve on, the performance of a published paper. This is appealing to students because it communicates that they are doing quality work (or at least, that the ML aspect is at a publication-worthy level). For several exercises in my upcoming book, I have reached out to authors of a paper, and asked for access to data, if necessary, and for some introductory guidelines to create a problem-solving pipeline. In class, I show students what our final goal (performance-wise) would be, and try to build the pipeline together with them. I have found that if I propose a solution, it's always helpful to riddle it with poor choices; for example, "forgetting" to normalize the data when needed, including noisy or very correlated features, using accuracy on imbalanced data sets... The process of improving the code helps students retain a stronger memory of these potential pitfalls. Colleagues have indicated that they use a similar approach successfully (S. Caron, private comm.).
40
+
41
+ Table 1. Example review/discussion questions for a teaching unit on Bagging/Boosted methods.
42
+
43
+ max width=
44
+
45
+ Question ANSWERS (CORRECT IN BOLDFACE)
46
+
47
+ 1-2
48
+ ATTRIBUTE EACH OF THESE STATEMENTS TO Bagging Methods or Boosting Methods: (A) THEY TEND TO LOWER THE VARIANCE BAGGING (B) THE FINAL ESTIMATOR PREDICTS A WEIGHTED AVERAGE OF ALL THE COMPONENTS BOOSTING (C) BASE ESTIMATORS ARE INDEPENDENT OF EACH OTHER BAGGING (D) BASE ESTIMATORS ARE BUILT ON THE BASIS OF PREVIOUS STAGES RESULTS Boosting
49
+
50
+ 1-2
51
+ HOW IS THE FEATURE IMPORTANCE CALCULATED IN TREE-BASED METHODS? (A) REPORTING THE CORRELATION COEFFICIENT BETWEEN EACH FEATURE AND THE TARGET IN THE FINAL ENSEMBLE (B) Calculating the mean decrease of impurity across splits that USE THAT FEATURE (C) CALCULATING THE AVERAGE OF THE CORRELATION COEFFICIENT BETWEEN EACH FEATURE AND THE TARGET IN A RANDOM SELECTION OF TREES (D) LISTING THE FEATURES IN ALPHABETICAL ORDER
52
+
53
+ 1-2
54
+ WHICH OF THESE PARAMETERS, IF DECREASED, IS MOST LIKELY TO REDUCE THE GAP BETWEEN TRAINING AND TEST SCORES? (A) MAX_DEPTH (B) N_ESTIMATORS (C) MIN_SAMPLES_SPLIT (D) MIN_SAMPLES_LEAF
55
+
56
+ 1-2
57
+ WHICH OF THESE ESTIMATORS IS MOST LIKELY TO WORK WELL EVEN WITH A VERY WEAK BASE LEARNER? (A) Random Forests (B) EXTREMELY RANDOM TREES (C) ADABOOST (D) GBMs
58
+
59
+ 1-2
60
+
61
+ § 3. ELEMENTS OF A TEACHING UNIT
62
+
63
+ The strategies I have developed are especially tailored to introductory Machine Learning courses at the advanced undergraduate/early graduate level in "official", grade-bearing classes. I find it helpful to integrate some traditional course-work elements (for example, slides and review questions) with more hands-on content, such as lecture notebooks and programming worksheet assignments. Often, the two sets "talk" to each other; for example, homework might include completing parts of a lecture notebook. For more advanced practitioners or more informal settings, such as summer schools, I usually skip the assignments and try to make each unit self-contained.
64
+
65
+ My approach is to present each machine learning theme (typically, a new algorithm or a discipline-wide concept, such as cross-validation or hyperparameter optimization) in parallel with a Physics or Astronomy problem (examples I have used recently include identifying potentially habitable planets, classifying the products of collision events in particle physics data, or creating a model for the rise of water in different US stations). The example materials shown here are excerpts from a teaching unit where we discuss and use ensemble methods (in particular, bagging and boosting algorithms) to solve the problem of determining distance (parameterized by redshift, $z$) to faraway galaxies from the shape of their spectral energy emission.
72
+
73
+ The elements of each teaching unit are the following:
74
+
75
+
76
+
77
+ A Power Point (or equivalent) presentation, with some blackboard discussion, which introduces the ML theme from a theoretical perspective, as well as the data set used;
78
+
79
+ A Jupyter notebook lecture, with some blank parts "to fill". The notebook shows the process of problem-solving - for example, setting up and optimizing the new ML method, or establishing good practices. The extent of the "fillable" parts varies according to the class and the time availability. When possible, I like to do mini assignments (10-15 minutes), where students are asked to complete a single task - for example, do some exploratory data analysis, find a "bug" in my code, or improve on a benchmark performance. An example, including the "target performance" from a published paper (Zhou et al., 2019) and a "mini-task", is shown in Fig. 1.
80
+
81
+ Quizzes/review questions; these are ungraded and usually a student favorite. Most of them are in multiple-choice form and are meant to reinforce the theoretical foundations, for example with questions on the suitability of a given method to a given problem, or the parameters of a specific method. In a classroom setting, these usually work well as think/pair/share material; I have also organized them as group competitions for extra credit. They also lend themselves to be used in online settings with interactive lecture software like Sli.do, Mentimeter, or similar. A small selection of review questions for this teaching unit is shown in Table 1.
82
+
83
+ Reading material; I include traditional (e.g. book chapters, journal articles) and less traditional (e.g. blog posts, YouTube videos) resources for every unit. Learners have different learning preferences/styles and I feel that it is important for them to have several options. Some of the sources I recommend for this unit are (Louppe, 2014) and the relevant part of Chapter 5 of (VanderPlas, 2016), but also this YouTube video, this blog post and this one, and this notebook.
84
+
85
+ Table 2. Example assignment for the teaching unit presented in the text.
+
+ | Description | Task |
+ | --- | --- |
+ | Start from the full data set. We saw in the lecture notebook that the performance changes a lot once the selection criteria are applied. | Figure out which of the data cleaning cuts we made was the most significant in terms of improving the scores of the final model. |
+ | Now choose one reference algorithm among the ones we saw (RF, AdaBoost, GBM), and refer to the optimized model (no need to re-run the grid search). Let's call this Model 1. Use the data set selection with 6,307 objects and 6 features. | Generate predictions using CV and plot them in a histogram, together with the true values. Which distribution is narrower? Explain why. Then optimize (using a grid search for the parameters you deem to be most relevant) the Extremely Random Trees algorithm and compute the outlier fraction and NMAD. How does it compare to Model 1? Comment not just on the scoring parameter(s), but also on variance/bias. Which one would you pick? In the paper that we used as a reference (Zhou et al., 2019), the authors actually use colors, not magnitudes, as features. Find in the paper the exact list of features, and generate them. Armed with your new set of features, use an algorithm of your choice to match or beat the performance quoted in the paper (NMAD: 0.0174; OLF: 4.54%). |
+
+ Homework assignments; these vary according to the class, but usually they include programming exercises that complement the material in the lecture notebook, and some non-coding tasks. The latter could be, e.g., writing pseudocode for a given algorithm, commenting code line by line, or reflecting on how to approach a problematic issue such as a severe imbalance or missing data. An example assignment for this unit is shown in Table 2.
+
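The grid-search and metrics part of such an assignment can be sketched with scikit-learn. The data below are a synthetic stand-in for the 6-feature galaxy catalog (not the actual data), and the NMAD/outlier-fraction definitions follow one common photo-z convention; the course materials may use a slightly different one.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import GridSearchCV, cross_val_predict

def photoz_metrics(z_true, z_pred, cut=0.15):
    """NMAD and outlier fraction of the scaled residuals (z_pred - z_true)/(1 + z_true)."""
    dz = (z_pred - z_true) / (1.0 + z_true)
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    olf = float(np.mean(np.abs(dz) > cut))
    return nmad, olf

# Synthetic stand-in for the photometric catalog used in class.
X, y = make_regression(n_samples=300, n_features=6, noise=10.0, random_state=0)
y = np.interp(y, (y.min(), y.max()), (0.0, 1.5))  # squash targets into a redshift-like range

# Grid search over the Extremely Random Trees regressor, as in the assignment.
search = GridSearchCV(
    ExtraTreesRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_features": [2, 4, 6]},
    cv=3,
)
search.fit(X, y)

# Cross-validated predictions, then the two photo-z scores.
z_pred = cross_val_predict(search.best_estimator_, X, y, cv=3)
nmad, olf = photoz_metrics(y, z_pred)
print(f"NMAD = {nmad:.4f}, outlier fraction = {olf:.2%}")
```

The same `photoz_metrics` helper can be reused when comparing against the reference model (Model 1) or against the published scores.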
+ In a traditional course, it is useful to have a final open-ended project that resembles a real-world application; importantly, this helps students create a "portfolio" item that can be added to a resume or brought up in job interviews. Students are encouraged to use Git and GitHub for their projects. In undergraduate settings, I have recently settled on a two-step research problem. Step 1 sets a common goal for all students (i.e., exploring and cleaning data, training and optimizing a model using one of the algorithms already discussed). In Step 2, I propose possible "tracks" that can be worked on by small groups of students, who choose their own "track". A "track" could consist of learning about and deploying a new algorithm, experimenting with feature engineering, analyzing the dependence on signal-to-noise ratio, and so on. In graduate-level classes, students are asked to come up with their own project and data, and the products include a project plan and a final report, ideally in the form of a research paper.
+
+ ## 4 Conclusions
+
+ My conclusions are the following:
+
+ * It is important to include ML techniques in Physics (and, more generally, STEM) curricula, as they are useful for both academic and non-academic careers;
+
+ * Using a mix of techniques, from traditional lectures to hands-on programming exercises, and recommending varied learning resources helps meet the needs of different learners;
+
+ * Practical projects are important because they teach students to be comfortable with open-ended questions and help them build a portfolio; students can be encouraged to post their projects on platforms like GitHub;
+
+ * Preparing good sets of materials is important not just for students, who tend to be resourceful and resilient, but also for widening the pool of instructors who can teach this subject;
+
+ * Persistent challenges include: 1. improving access to data sets and problems at the right level of complexity and size; 2. finding effective ways to teach cross-discipline concepts, such as uncertainty estimation or interpretability, that are important but don't fit the algorithm/problem mold; and 3. integrating the conversation around ML topics with content taught e.g. in Statistics or Computational Methods courses.
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/-0xPrt01VXD/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,368 @@
+ # TICO-19: the Translation Initiative for COvid-19
+
+ Antonios Anastasopoulos, Alessandro Cattelan, Zi-Yi Dou, Marcello Federico, Christian Federmann, Dmitriy Genzel, Francisco Guzmán, Junjie Hu, Macduff Hughes, Philipp Koehn, Rosie Lazar, Will Lewis, Graham Neubig, Mengmeng Niu, Alp Öktem, Eric Paquin, Grace Tang, Sylwia Tur
+
+ Department of Computer Science, George Mason University; Language Technologies Institute, Carnegie Mellon University; Translated; Amazon AI; Microsoft; Facebook AI; Johns Hopkins University; Appen; Google; Translators without Borders
+
+ tico19.2020@gmail.com
+
+ ## Abstract
+ The COVID-19 pandemic is the worst pandemic to strike the world in over a century. Crucial to stemming the tide of the SARS-CoV-2 virus is communicating to vulnerable populations the means by which they can protect themselves. To this end, the collaborators forming the Translation Initiative for COvid-19 (TICO-19) ${}^{1}$ have made test and development data available to AI and MT researchers in 35 different languages in order to foster the development of tools and resources for improving access to information about COVID-19 in these languages. In addition to 9 high-resourced, "pivot" languages, the team is targeting 26 lesser-resourced languages, in particular languages of Africa, South Asia and South-East Asia, whose populations may be the most vulnerable to the spread of the virus. The same data is translated into all of the languages represented, meaning that testing or development can be done for any pairing of languages in the set. Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages. ${}^{2}$
+
+ ## 1 Introduction
+
+ The COVID-19 pandemic marks the worst pandemic to strike the world since 1918. At the time of this writing, ${}^{3}$ the SARS-CoV-2 coronavirus responsible for COVID-19 has infected over ten million people worldwide, with over half a million deaths. While these numbers are likely under-reported, they are growing at an alarming rate, and many millions of people could become infected or perish without proper prevention measures.
+
+ Effective communication from health authorities is essential to protect at-risk populations, slow down the spread of the disease, and decrease its morbidity and mortality (UNOCHA, 2020). Yet, preventive measures such as stay-at-home orders, social distancing, and requirements to wear personal protective equipment (e.g. masks, gloves, etc.) have proven difficult to relay. That is before accounting for the difficulty of disseminating correct technical information about the disease, such as symptoms (e.g., fever, chills, etc.), specifics about testing (e.g., viral vs. antibody testing), and treatments (e.g., intubation, plasma transfusion).
+
+ While official communications from the World Health Organization (WHO) are constantly published and revised, they are mostly limited to major languages. This has resulted in a vacuum in many languages that has been filled by an infodemic of misinformation, as described by the WHO. Non-governmental organizations (NGOs) such as Translators without Borders (TWB) play an important role in delivering multilingual communication in emergencies such as the COVID-19 pandemic, but their reach and capacity have been outstripped by the needs presented by the pandemic. To date, TWB has translated over 3.5 million words with over 80 non-profit organizations for more than 100 language pairs as part of their COVID-19 response.
+
+ Translation technologies such as automatic Machine Translation (MT) and Computer-Assisted Translation (CAT) present unique opportunities to scale the throughput of human translators. However, given the sensitivity of the content, it is critical that the translations produced automatically are of the highest possible quality.
+
+ The Translation Initiative for COvid-19 (TICO-19) effort marks a unique collaboration between public and private entities that came together shortly after the beginning of the pandemic. ${}^{4}$ The focus of TICO-19 is to enable the translation of content related to COVID-19 into a wide range of languages. First, we make available a collection of translation memories and technical glossaries so that language service providers (LSPs), translators and volunteers can make use of them to expedite their work and ensure consistency and accuracy. Second, we provide an open-source, multilingual benchmark set (which includes data for very-low-resource languages) specialized in the medical domain, which is intended to track the quality of current machine translation systems, thus enabling future research in the area. Lastly, we provide monolingual and bilingual resources for MT practitioners to use in order to advance the state of the art in medical and humanitarian Machine Translation, as well as other natural language processing (NLP) applications.
+
+ ---
+
+ ${}^{1}$ Collaborators in the initiative include Translators without Borders, Carnegie Mellon University, Johns Hopkins University, George Mason University, Amazon Web Services, Appen, Facebook, Google, Microsoft, and Translated.
+
+ ${}^{2}$ The dataset, translation memories, and additional resources are freely available online: http://tico-19.github.io/. As the project continues and we create data for more languages, we will keep updating this paper as well as the project's website.
+
+ ${}^{3}$ July 1st, 2020
+
+ ---
+
+ Our hope is that our work will in the short term enable the translation of important communications into multiple languages, and that in the long term it will serve to foster research on MT for specialized content into low-resource languages. Through these resources we hope that our society will be better prepared to quickly respond to translation needs in the midst of crises (e.g., for future crises, à la Lewis et al. (2011)).
+
+ ## 2 The Value of Translation Technologies in Crisis Scenarios
+
+ During a crisis, whether it is local to one region or is a worldwide pandemic, communicating effectively in the languages and formats people understand is central to effective programs on the ground. For example, as part of the effort to control the spread of COVID-19, the Global Humanitarian Response Plan recognizes community engagement in relevant languages as a key strategy (UNOCHA, 2020). ${}^{5}$ In some countries this will be all the more vital, because information will be the main defense against the disease, and particular effort will be needed to make it accessible and grounded in local culture and context. Among these are countries where large sections of the population do not speak the dominant language.
+
+ Historically, MT, NLP and translation technologies have played a crucial role in crisis scenarios. The response to the Haitian earthquake in 2010 was notable for the broad use of technology in the humanitarian response, relying mostly on crowd-sourced translations and geolocation, but notably, translation technology was also used. In the days following the earthquake, Haitian citizens were encouraged to send text messages requesting assistance to "4636", and as many as 5,000 messages were texted to this number per hour. Unfortunately for the aid agencies, whose dominant languages were English and French (aid agencies included the US Navy, the Red Cross, and Doctors without Borders), most of the SMS messages were in Haitian Kreyòl. Quickly, the Haitian Kreyòl-speaking diaspora around the world was activated by the Mission 4636 consortium to translate and geolocate the SMS messages (Munro, 2010), and the translated messages were handed off to aid agencies for triage and action. The Mission 4636 infrastructure included a high-precision rule-based MT system (Lewis et al., 2011), and within days to weeks after the earthquake, statistical MT engines were brought online by Microsoft (Lewis, 2010) and Google. ${}^{6}$
+
+ Translation technology continues to be used in a variety of crisis and relief scenarios. Notable among these is Translators without Borders' (TWB) use of translation memories for translating into a number of under-resourced languages in relief settings. Likewise, the Standby Task Force, ${}^{7}$ who are activated in a variety of relief settings, note the use of MT in various deployments around the world, e.g., for Urdu in the Pakistan earthquake of 2011 and for Spanish in the Ecuador earthquake in 2016. The EU-funded INTERnAtional network on Crisis Translation (INTERACT) ${}^{8}$ project, started a couple of years before the COVID-19 pandemic, focused on crisis translation, specifically in health crises such as pandemics, with a focus on improving resilience in times of crisis through communication, ultimately with the goal of reducing loss of life. ${}^{9}$ Likewise, during the current pandemic, several community-driven efforts have sprung up to fulfil the need for information communication. The Endangered Languages Project ${}^{10}$, for example, has collected community-produced translations of public health information in hundreds of languages in various formats.
+
+ ---
+
+ ${}^{4}$ The World Health Organization (WHO) declared COVID-19 a pandemic on March 11th, 2020. The TICO-19 collaborators came together in the days following and first met as a group (over Zoom) on March 20th. It cannot be overstated how rapidly this collaboration came together and how seamlessly the participants, many erstwhile competitors, have worked in harmony and without animosity. It is truly a testament to the needs of the greater good outweighing personal differences or potentially conflicting objectives.
+
+ ${}^{5}$ The plan does not call out Machine Translation per se, but does call out the need for content to be produced and disseminated in "accessible languages", and the need for communication in "local languages".
+
+ ${}^{6}$ Although these engines were not integrated into the relief pipeline developed by Munro and colleagues, Lewis et al. (2011) document how MT could be integrated into a crowd-centric relief pipeline like that used by Mission 4636, whereby MT, even if low-precision or noisy, could provide first-pass translations which could then be triaged before being handed off to translators for more accurate translations and geolocation.
+
+ ${}^{7}$ https://www.standbytaskforce.org/
+
+ ${}^{8}$ https://cordis.europa.eu/project/id/734211
+
+ ---
+
+ What is not tracked is the degree to which publicly available MT tools and resources are used in crisis and relief settings, e.g., translation apps and tools from Amazon, Google and Microsoft, or the translation feature built into Facebook (e.g., automatically translating posts). The authors suspect use may be broad, but there are no published accounts documenting just how broadly and how much these tools are used in crises. Tantalizing evidence of the use of publicly available tools was noted by Lewis et al. (2011), who documented traffic in the Microsoft Translator apps in the weeks following the Haitian earthquake: at least 5 percent of the Haitian Kreyòl traffic was relief-related. It is likely that Google's and Microsoft's apps are used even when cell phone infrastructure is unavailable or destroyed, since the tools permit users to download models to their devices so they can perform offline translations. ${}^{11}$
+
+ In crises, it is clear that organizations need the capacity to communicate critical information and key messages in the languages people understand, at speed and at scale. Crisis-affected communities could access content in local languages through various channels such as SMS, online chatbots, or more traditional printed materials. Their questions and feedback can be used to refine content to better meet their needs. Likewise, relief agencies need access to SMS and other communiqués in local languages in order to more effectively and equitably distribute aid.
+
+ MT can help the various actors translate and disseminate essential communications in a timely manner without the need to wait on human translators. This is particularly important for low-resourced languages, where professional translators are not readily available. Domain-specific MT can also assist translators with the right terminology to convey the correct response and standardize concepts. Furthermore, people who are unable to understand major languages could get access to vital information (such as news sources, websites, etc.) first-hand via an MT-driven tool set. We emphatically note that communication is not just a translation problem. Especially in the case of under-represented indigenous communities, messaging needs to also remain respectful of cultural norms (e.g. communicate through appropriate channels, without undermining cultural authorities and practices) and to not minimize the agency of such communities through "deficit framing" (Wañambi et al., 2020). Nevertheless, translation is a crucial component of the information flow.
+
+ However, to be useful for translating specialized content such as medical texts, automatic translations must be of the highest possible accuracy. To advance research in Machine Translation, we require both high-quality benchmark sets and access to basic training resources, both monolingual and parallel. Likewise, translation memories in a broader set of languages can help localizers around the world translate into these languages. In the remainder of this paper, we describe the resources created by the TICO-19 initiative, and some evaluations against them.
+
+ ## 3 The TICO-19 Translation Benchmark
+
+ We created the TICO-19 benchmark with three criteria in mind: diversity, relevance and quality. First, we sampled from a variety of public sources containing COVID-19-related content, representing different domains. Second, to make our content relevant for relief organizations, we chose the languages to translate into based on requests from relief organizations on the ground. Third, we established a stringent quality assurance process to ensure that the content is translated according to the highest industry standards.
+
+ ### 3.1 COVID-19 source data
+
+ The translation benchmark was created by combining English open-source data from various sources, listed in Table 1. We took special care to diversify the domains and sources of the data. We provide a concise summary here and detailed statistics for every source in Appendix B:
+
+ ---
+
+ ${}^{9}$ The project has already released COVID-19-specific MT models for 4 languages (Way et al., 2020).
+
+ ${}^{10}$ https://endangeredlanguagesproject.github.io/COVID-19/
+
+ ${}^{11}$ Crucially, it should be noted that these apps and tools are available for no more than 110 languages, leaving most of the world's languages in the dark.
+
+ ---
+
+ <table><tr><td rowspan="2">Data Source</td><td rowspan="2">Domain</td><td colspan="4">Statistics</td></tr><tr><td>#docs</td><td>#sents</td><td>#words</td><td>avg. slen</td></tr><tr><td>CMU</td><td>medical, conversational</td><td>-</td><td>141</td><td>1.2k</td><td>8.5</td></tr><tr><td>PubMed</td><td>medical, scientific</td><td>6</td><td>939</td><td>21.2k</td><td>22.5</td></tr><tr><td>Wikinews</td><td>news</td><td>6</td><td>88</td><td>1.8k</td><td>20.4</td></tr><tr><td>Wikivoyage</td><td>travel</td><td>1</td><td>243</td><td>4.5k</td><td>18.5</td></tr><tr><td>Wikipedia</td><td>general</td><td>15</td><td>1,538</td><td>38.1k</td><td>24.7</td></tr><tr><td>Wikisource</td><td>announcements</td><td>2</td><td>122</td><td>2.4k</td><td>19.6</td></tr><tr><td/><td>Total</td><td>30</td><td>3,071</td><td>69.7k</td><td>22.7</td></tr><tr><td/><td>Dev Set</td><td>12</td><td>971</td><td>21.0k</td><td>21.6</td></tr><tr><td/><td>Test Set</td><td>18</td><td>2,100</td><td>49.3k</td><td>23.5</td></tr></table>
+
+ Table 1: Source-side (English) statistics of the TICO-19 benchmark.
+
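Per-source counts like those in Table 1 are straightforward to recompute from the released English files. A minimal sketch, where the `corpus` mapping is an illustrative two-source stand-in for the actual data:

```python
from statistics import mean

# Illustrative stand-in: data source -> list of English sentences.
corpus = {
    "CMU": ["are you having any shortness of breath?"],
    "Wikinews": ["By yesterday, the World Health Organization reported 1,051,635 confirmed cases."],
}

def source_stats(sentences):
    """#sents, #words (whitespace-tokenized), and average sentence length, as in Table 1."""
    lengths = [len(s.split()) for s in sentences]
    return {"sents": len(sentences), "words": sum(lengths), "avg_slen": mean(lengths)}

for name, sents in corpus.items():
    print(name, source_stats(sents))
```

Note that the paper's word counts come from the actual tokenization used by the authors, so whitespace splitting will only approximate them.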
+ - PubMed: we selected 6 COVID-19-related scientific articles from PubMed ${}^{12}$ for a total of 939 sentences.
+
+ - CMU English-Haitian Creole dataset (CMU): the data were originally collected at Carnegie Mellon University ${}^{13}$ and translated into Haitian Creole by Eriksen Translations, Inc. The dataset is comprised of medical-domain phrases and sentences, which along with other data were used to quickly build and deploy statistical MT systems in disaster-ridden Haiti (Lewis, 2010) and later were part of the 2011 Workshop on (Statistical) Machine Translation Shared Tasks (Callison-Burch et al., 2011). For our purposes, we subsampled the English conversational phrases to only those including COVID-19-related keywords taken from our terminologies (see Section 4), ending up with 140 sentences.
+
+ - Wikipedia: we selected 15 COVID-19-related articles from the English Wikipedia ${}^{14}$ on topics ranging from responses to the pandemic, drug development, testing, and coronaviruses in general.
+
+ - Wikinews, Wikivoyage, Wikisource: 6 COVID-19-related entries from Wikinews, ${}^{15}$ one article from Wikivoyage ${}^{16}$ summarizing travel restrictions, and two entries from Wikisource ${}^{17}$ (an executive order and an internal Wikipedia communiqué). These data respectively cover the domains of news, travel advisories, and government/organization announcements.
+
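The keyword-based subsampling described for the CMU data can be sketched as follows; the keyword set and sentences here are illustrative, not the actual terminology lists:

```python
# Illustrative COVID-19-related keywords (the real ones come from the
# terminologies described in Section 4).
keywords = {"fever", "cough", "quarantine", "mask"}

sentences = [
    "are you having any shortness of breath?",
    "do you have a fever or a cough?",
    "what time is the next bus?",
]

# Keep only the conversational phrases that mention at least one keyword.
selected = [s for s in sentences if any(k in s.lower() for k in keywords)]
print(selected)  # ['do you have a fever or a cough?']
```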
+ ### 3.2 Languages
+
+ We translated the above English data into 38 languages. ${}^{18}$ In some cases, this was achieved through pivot languages, i.e., the content was translated into the pivot language first (e.g., French, Farsi) and then translated into the target language (e.g., Congolese Swahili, Dari). The languages were selected according to various criteria, with the main consideration being the potential impact of our collected translations and the humanitarian priorities of TWB. The translation languages include:
+
+ - Pivots: 9 major languages which function as a lingua franca for large parts of the globe: Arabic (Modern Standard), Chinese (Simplified), French, Brazilian Portuguese, Latin American Spanish, Hindi, Russian, Swahili, and Indonesian.
+
+ - Priority: 21 languages which TWB classified as high-priority, due to the large volume of requests they are receiving and the strategic location of their partners (e.g. the Red Cross). They include languages in Asia (Dari, Central Khmer, Kurdish Kurmanji (Latin script), Kurdish Sorani (Arabic script), Nepali, Pashto) and Africa (Amharic, Congolese Swahili, Dinka, Nigerian Fulfulde, Hausa, Kanuri, Kinyarwanda, Lingala, Luganda, Nuer, Oromo, Somali, Eritrean Tigrinya, Ethiopian Tigrinya, Zulu).
+
+ ---
+
+ ${}^{12}$ https://www.ncbi.nlm.nih.gov/pubmed/
+
+ ${}^{13}$ Under the NSF-funded (jointly with the EU) "NESPOLE!" project.
+
+ ${}^{14}$ https://en.wikipedia.org
+
+ ${}^{15}$ https://en.wikinews.org
+
+ ${}^{16}$ https://en.wikivoyage.org
+
+ ${}^{17}$ https://en.wikisource.org
+
+ ${}^{18}$ All translations are available under a CC0 license.
+
+ ---
+
+ <table><tr><td>Data Source</td><td>Example</td></tr><tr><td>CMU</td><td>are you having any shortness of breath?</td></tr><tr><td>PubMed</td><td>The basic reproductive number (R0) was 3.77 (95% CI: 3.51-4.05), and the adjusted R0 was 2.23-4.82.</td></tr><tr><td>Wikinews</td><td>By yesterday, the World Health Organization reported 1,051,635 confirmed cases, including 79,332 cases in the twenty four hours preceding 10 a.m. Central European Time (0800 UTC) on April 4.</td></tr><tr><td>Wikivoyage</td><td>Due to the spread of the disease, you are advised not to travel unless necessary, to avoid being infected, quarantined, or stranded by changing restrictions and cancelled flights.</td></tr><tr><td>Wikipedia</td><td>Drug development is the process of bringing a new infectious disease vaccine or therapeutic drug to the market once a lead compound has been identified through the process of drug discovery.</td></tr><tr><td>Wikisource</td><td>The federal government has identified 16 critical infrastructure sectors whose assets, systems, and networks, whether physical or virtual, are considered so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, economic security, public health or safety, or any combination thereof.</td></tr></table>
+
+ Table 2: Samples of the English source sentences for the TICO-19 benchmark.
+
+ - Important: 8 additional languages spoken by millions in South and South-East Asia: Bengali, Burmese (Myanmar), Farsi, Malay, Marathi, Tagalog, Tamil, and Urdu.
+
+ The latter two sets are primarily languages of Africa, and South and South-East Asia, whose communities, according to on-the-ground organizations, may be most susceptible to the spread of the virus and its potentially disastrous ramifications, mostly due to lack of access to information and communication in the community languages. They are also overwhelmingly under-resourced languages; in fact, some of the languages have remained untouched by the AI and MT communities, with no known tools or resources developed for them.
+
+ All of the test and development documents are sentence-aligned across all of the languages, which allows for any pairing of languages for testing or development purposes. This was done by design, in order to facilitate tool and resource development in and across any of the targeted languages. For example, an MT developer could develop translation systems for French to/from Congolese Swahili, Arabic to/from Kurdish, Urdu to/from Pashto, Hindi to/from Marathi, Amharic to/from Oromo, or Chinese to/from Malay, among the 1296 possible pairings. Note that as the project continues and as we create data for more languages, we will keep updating this paper as well as the project's website.
+
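Since the documents are sentence-aligned across every language, any ordered source-to-target pairing can be extracted directly; for instance, with a small illustrative subset of the benchmark languages:

```python
from itertools import permutations

# A few of the benchmark languages (the full list is in Section 3.2).
languages = ["English", "French", "Congolese Swahili", "Arabic", "Kurdish Sorani"]

# Every ordered source -> target pairing is usable, because all documents
# are sentence-aligned across all languages.
pairings = list(permutations(languages, 2))
print(len(pairings))  # 20
```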
+ ### 3.3 Quality Assurance
+
+ It has been observed that translation from and into low-resource languages requires additional automatic and manual quality checks (Guzmán et al., 2019). To obtain the highest possible quality, we implemented a two-step human quality control process. First, each document is sent for translation to language service providers (LSPs), where the translation is performed. After translation, the dataset goes through a process of editing, in which each sentence is thoroughly vetted by qualified professionals familiar with the medical domain, whenever available. ${}^{19}$ In case of discrepancies, a process of arbitration is followed to resolve disagreements between translators and editors.
+
+ After editing, a selected fraction of the data (18%, 558 sentences) undergoes a second, independent quality assurance process. To ensure quality in the hardest-to-translate data, the scientific medical content from PubMed was upsampled so that it comprises 329 of the 558 doubly-checked sentences (almost 59%). The exact documents that comprise our second quality assurance set are listed in Appendix C.
+
+ ---
+
+ ${}^{19}$ For a handful of languages, such as Dinka or Nuer, where simply creating translations was a challenge, this process was, by necessity, skipped.
+
+ ---
+
+ The quality of the translations was checked, and reworks were made until every translation set was rated above 95% across all languages, before any additional subsequent edits. Some low-resource languages like Somali, Dari, Khmer, Amharic, Tamil, Farsi, and Marathi required several rounds of translation to reach acceptable performance. The hardest part, unsurprisingly, proved to be the PubMed portion of the benchmark. Our QA process revealed that in most cases problems arose when the translators did not have any medical expertise, which led them to misunderstand the English source sentence and often opt for sub-par literal or word-for-word translations. We provide additional details with the estimated quality per language in Appendix D. We note that all mistakes identified in this subset have been corrected in the final released dataset, and that all sentences that underwent the QA process are part of the test portion of our benchmark.
+
+ We additionally release the sampled dataset along with detailed error annotations and corrections. Whenever an error was noted in the validation sample, it was classified into one of the following categories: Addition/Omission, Grammar, Punctuation, Spelling, Capitalization, Mistranslation, Unnatural translation, and Untranslated text. The severity of each error was also classified as minor, major, or critical. Although small in size (at most 558 sentences in each translation direction), we hope that releasing these annotations will also invite automatic quality estimation and post-editing research for diverse under-resourced languages.
+
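A minimal sketch of how the released error annotations could be represented programmatically; the class and field names are ours (only the category and severity labels come from the description above), and the sentence id is invented for illustration:

```python
from dataclasses import dataclass

# Labels taken from the annotation scheme described in the paper.
CATEGORIES = {"Addition/Omission", "Grammar", "Punctuation", "Spelling",
              "Capitalization", "Mistranslation", "Unnatural translation",
              "Untranslated text"}
SEVERITIES = {"minor", "major", "critical"}

@dataclass
class ErrorAnnotation:
    sentence_id: str
    category: str
    severity: str

    def __post_init__(self):
        # Reject labels outside the published scheme.
        if self.category not in CATEGORIES or self.severity not in SEVERITIES:
            raise ValueError("unknown category or severity")

ann = ErrorAnnotation("sent_0042", "Spelling", "minor")  # hypothetical record
print(ann.category, ann.severity)
```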
+ ## 4 Translator Resources
+
+ Translation Memories Because of the breadth of languages covered by TICO-19, and the fact that so many are under-resourced, the translations themselves can be of significant value to localizers. As part of the effort, the TICO-19 collaborators have converted ${}^{20}$ the translated data to translation memories, cast as TMX files, for all English-X pairings, as well as some other pairings of languages focusing on potential local needs (e.g. French-Congolese Swahili, Farsi-Dari, and Kurdish Kurmanji-Sorani). These TMX files, in addition to the test and development data, have been made available to the public through the project's website.
+
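TMX is a small XML format, so the released files can be read with the Python standard library alone. A minimal sketch, where the segment content is invented for illustration:

```python
import xml.etree.ElementTree as ET

# ElementTree exposes xml:lang under the standard XML namespace.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

# A minimal TMX 1.4-style snippet (content invented here).
tmx = """<tmx version="1.4"><body>
  <tu>
    <tuv xml:lang="en"><seg>Wash your hands.</seg></tuv>
    <tuv xml:lang="fr"><seg>Lavez-vous les mains.</seg></tuv>
  </tu>
</body></tmx>"""

def read_tmx(text):
    """Yield one {lang: segment} dict per translation unit (<tu>)."""
    root = ET.fromstring(text)
    for tu in root.iter("tu"):
        yield {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}

pairs = list(read_tmx(tmx))
print(pairs)  # [{'en': 'Wash your hands.', 'fr': 'Lavez-vous les mains.'}]
```

Real TMX files carry additional header metadata, which this sketch simply ignores.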
+ Terminologies Two sets of translation terminologies were provided by Facebook and Google (the complete set of the English source terms and of the translated languages is listed in Appendix E):
+
+ - The Facebook one includes 364 COVID-19-related terms translated into 92 languages/locales.
+
+ - The Google one includes 300 COVID-19-related terms translated from English into 100 languages, and a total of 1,300 terms from 27 languages translated into English (for a total of approximately 30k terms).
+
+ Additional Translations Translators without Borders (TWB) worked with its network of translators to provide translations in hard-to-source languages (e.g. Congolese Swahili and Kanuri). It also provided COVID-19-specific sources from its diverse humanitarian partners to augment the dataset. This augmented dataset will be available under license on TWB's Gamayun portal. ${}^{21}$
+
+ ## 5 MT Developer Resources
178
+
179
+ As part of our project, the CMU team also collected some COVID-19-related monolingual data in multiple languages. These data are available online, ${}^{22}$ but we note that some of them might not be available under the same license as our datasets (and hence might not be appropriate for commercial system development). They are detailed in the next sections.
180
+
181
+ ### 5.1 Monolingual
182
+
183
+ Wikipedia Data COVID-19-related data from Wikipedia were scraped in 37 languages. COVID-19 terms were used as queries (language-specific, in most cases), retrieving the textual data from the returned articles (i.e. stripping out any Wikipedia markup, metadata, images, etc.). The data range from fewer than 1K sentences (around 10K tokens) for languages like Hindi, Bengali, or Afrikaans, to as many as 8K sentences for Spanish (160K tokens) or Hungarian (120K tokens).
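A scraping pipeline of this kind can start from the per-language MediaWiki search API. The helper below only constructs the request URL (the endpoint and parameters follow the standard MediaWiki Action API, while the query term is illustrative):

```python
from urllib.parse import urlencode

def build_search_url(lang, term, limit=50):
    """URL for a full-text search on a language-specific Wikipedia.

    Fetching this URL (e.g. with urllib.request) returns JSON whose
    query.search entries carry page titles; plain article text can then
    be retrieved per title with action=query&prop=extracts&explaintext=1,
    which strips the wiki markup server-side.
    """
    params = {
        "action": "query",
        "list": "search",
        "srsearch": term,
        "srlimit": limit,
        "format": "json",
    }
    return f"https://{lang}.wikipedia.org/w/api.php?" + urlencode(params)
```

Looping the same query terms over the 37 language subdomains yields the per-language article sets described above.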
184
+
185
+ ---
186
+
187
+ ${}^{20}$ Conversion was carried out with the Tikal command included in the Okapi framework: https://okapiframework.org/.
188
+
189
+ ${}^{21}$ https://gamayun.translatorswb.org/
190
+
191
+ ${}^{22}$ https://bit.ly/2ZLOkpo
192
+
193
+ ---
194
+
195
+ News Data COVID-19-related news articles (as identified by keyword search) were scraped from three news organizations that publish multilingually through their world services. Specifically, the collected data include articles from the BBC World Service ${}^{23}$ (22 languages), the Voice of America ${}^{24}$ (31 languages), and Deutsche Welle ${}^{25}$ (29 languages).
196
+
197
+ ### 5.2 Parallel
198
+
199
+ We have also scraped a very small amount of available parallel data, mostly from public service announcements from NGOs and national/state government sources. Specifically, we scraped Public Service Announcements by the Canadian government ${}^{26}$ in 21 languages (English, French, and First Nations languages), a fact sheet provided by King County (Washington, USA) ${}^{27}$ in 12 languages, a COVID-19 advice sheet from the Doctors of the World ${}^{28}$ in 47 languages, data from the COVID-19 Myth Busters in World Languages project ${}^{29}$ in 28 languages, and a medical prevention and treatment handbook from Zhejiang University School of Medicine ${}^{30}$ in 10 languages. Unfortunately, the total amount of data from these sources does not exceed a few hundred sentences in each direction, so they are not enough for system development; they could, though, be potentially useful as an additional smaller evaluation set or for terminology extraction.
200
+
201
+ ## 6 Baseline Results and Discussion
202
+
203
+ We present baseline results in some language directions, using the following systems:
204
+
205
+ 1. For English to es, fr, pt, ru, sw, id, ln, lg, mr and most opposite directions: we use the OPUS-MT systems (Tiedemann and Thottingal, 2020), which are trained on the OPUS parallel data (Tiedemann, 2012a) using the Marian toolkit (Junczys-Dowmunt et al., 2018). We also use the pretrained systems between French and es, id, ln, lg, ru, rw.
206
+
207
+ 2. For English to Russian we compare against the pre-trained Fairseq models that won the WMT Shared Task in this direction last year (Ng et al., 2019), as well as the English to French system of Ott et al. (2018). For translation between English and Chinese we also use a system trained on WMT'18 data (Bojar et al., 2018).
208
+
209
+ 3. We train systems between English and ar, fa, mr, om, zu on publicly available corpora from OPUS (referred to as "our OPUS models").
210
+
211
+ 4. We train multilingual systems between English and hi, ms, and ur on a TED talks dataset (Qi et al., 2018) ("our Multilingual TED models").
212
+
213
+ We note that none of the systems have been specifically trained or fine-tuned on any data listed above. We leave domain adaptation studies for future work.
214
+
215
+ Results Table 3 in Appendix §A presents results in translation from English to all languages whose MT systems we were able to train or use, while Table 4 includes results in the opposite directions. Similarly, Tables 5 and 6 in Appendix §A show the quality of MT systems, as measured on our test set, from and to French for a few languages. All tables also include a breakdown of the quality for each test domain. ${}^{31}$
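The scores in these tables are corpus-level BLEU. As a reference point, the core computation (modified n-gram precision combined with a brevity penalty) can be sketched with the standard library; unlike sacreBLEU, this toy version uses naive whitespace tokenization and no smoothing:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus BLEU over whitespace-tokenized sentence pairs (no smoothing)."""
    matches = [0] * max_n   # clipped n-gram matches, pooled over the corpus
    totals = [0] * max_n    # hypothesis n-gram counts, pooled over the corpus
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_ngrams, r_ngrams = ngrams(h, n), ngrams(r, n)
            matches[n - 1] += sum((h_ngrams & r_ngrams).values())  # clipped
            totals[n - 1] += max(len(h) - n + 1, 0)
    if 0 in totals or 0 in matches:
        return 0.0  # unsmoothed BLEU collapses when any precision is zero
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)
```

Running this separately over each sub-domain's hypothesis/reference pairs yields the per-domain breakdown reported in the tables (modulo tokenization differences).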
216
+
217
+ Discussion First and foremost, the main takeaway from these baseline results lies not in the above-mentioned Tables, but in the languages that are not present in them. We were unable to find either pre-trained MT systems or publicly available parallel data in order to train our own baselines for Dari, Pashto, Tigrinya, Nigerian Fulfulde, Kurdish Sorani, Myanmar, Oromo, Dinka, Nuer, and isiZulu. ${}^{32}$ This highlights the need for serious data collection efforts to expand the availability of data for large swathes of under-represented communities and languages.
218
+
219
+ Beyond this obvious limitation, the existing systems' results highlight the divide between high-resource language pairs and low-resource ones. For all European languages (Spanish, French, Portuguese, Russian) as well as for Chinese and Indonesian, MT produces very competitive results with BLEU scores between 25 and 49. ${}^{33}$ In contrast, the output translations for languages like Lingala, Luganda, Marathi, or Urdu are quite disappointing, with extremely low BLEU scores under 10. The existence of pre-trained systems or of parallel data, hence, is not enough; this level of quality is basically unusable for any real-world deployment either for translators or for end-users.
220
+
221
+ ---
222
+
223
+ ${}^{23}$ https://www.bbc.com/
224
+
225
+ ${}^{24}$ https://www.voanews.com/
226
+
227
+ ${}^{25}$ https://www.dw.com/
228
+
229
+ ${}^{26}$ https://bit.ly/3iCeqnl
230
+
231
+ ${}^{27}$ https://welcoming.seattle.gov/covid-19/
232
+
233
+ ${}^{28}$ https://bit.ly/3e7Mq7M
234
+
235
+ ${}^{29}$ https://covid-no-mb.org/
236
+
237
+ ${}^{30}$ https://bit.ly/3gvxaTL
238
+
239
+ ${}^{31}$ Note, however, that each sub-domain constitutes a smaller test set than the complete set, and hence any result should properly take into account statistical significance measures.
240
+
241
+ ${}^{32}$ Note that although a small amount of parallel data exists for English-isiZulu, Abbott and Martinus (2019) report very low results on general benchmarks as the parallel data requires cleaning.
242
+
243
+ ---
244
+
245
+ A comparison of the results across different domains is also revealing. BLEU scores are generally higher on Wikipedia and news articles; this is unsurprising, since most MT systems rely on such domains for training, as they naturally produce parallel or quasi-parallel data. Our PubMed data pose a more challenging setting, but perhaps not as challenging as we initially expected, although the results vary across languages. In translating from English to French, for instance, the difference between Wikipedia and PubMed is more than 14 BLEU points, ${}^{34}$ while the differences are smaller for e.g. Indonesian-English (6 BLEU points) or Russian-English (4 BLEU points).
246
+
247
+ Future Work Several concrete steps have the potential to improve MT for all languages in our benchmark. All results we report are with MT systems trained on general-domain data or on particularly out-of-domain data (such as TED talks); domain adaptation techniques using small in-domain parallel resources or monolingual source- or target-side data should be able to increase performance. Incorporating the terminologies as part of the training and inference schemes of the models could also ensure faithful and consistent translations of the COVID-19-specific scientific terms that might not naturally appear in other training data or might appear in different contexts.
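Short of constrained decoding, one lightweight way to use such terminologies is to flag system outputs that miss the expected translation of a source term. A minimal sketch (toy glossary entries, naive case-insensitive substring matching that ignores inflection) follows:

```python
def missing_terms(source, output, glossary):
    """Return glossary entries whose source-language term appears in
    `source` but whose expected translation is absent from `output`.

    `glossary` maps source-language terms to target-language terms.
    Matching is naive (case-insensitive substring), so morphologically
    rich target languages would need lemmatization instead.
    """
    src, out = source.lower(), output.lower()
    return [(s, t) for s, t in glossary.items()
            if s.lower() in src and t.lower() not in out]
```

For example, with a hypothetical English-French entry:

```python
glossary = {"hand sanitizer": "gel hydroalcoolique"}
missing_terms("Use hand sanitizer daily.", "Utilisez du savon.", glossary)
# flags the pair ("hand sanitizer", "gel hydroalcoolique")
```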
248
+
249
+ Another direction for improvement involves multilingual NMT models trained on massive web-based corpora (Aharoni et al., 2019), which have improved translation accuracy particularly for languages in the lower end of data availability. Also viable are methods relying on multilingual model transfer, which can target languages with extremely small amounts of data, as in Chen et al. (2018).
250
+
251
+ Lastly, we need to improve the representation of low-resource languages in public-domain corpora. While there are open data collections in OPUS (Tiedemann, 2012b) and mined corpora like ParaCrawl (Esplà et al., 2019) and WikiMatrix (Schwenk et al., 2019), they do not cover enough low-resource languages. We hope that the availability of multilingual representations such as Multilingual BERT (Devlin et al., 2018) and XLM-R (Conneau et al., 2019) will empower the creation of parallel corpora for low-resource languages through low-resource corpus filtering (Koehn et al., 2019) or other approaches.
252
+
253
+ ## 7 Conclusion
254
+
255
+ Enabling efficient and accurate communication through translation still has a long way to go for the majority of the world's languages, and particularly the most vulnerable ones. With this effort we only address a fraction of the needs for a fraction of the world's languages. Nevertheless, we hope that the MT resources that we release will have an immediate impact for the languages we cover. More importantly, the benchmark we release will allow the MT research community, both academic and industrial, to be more prepared for the next crisis where translation technologies will be needed.
256
+
257
+ ## Acknowledgements
258
+
259
+ We would like to thank the people who made this effort possible: Tanya Badeka, Jen Wang, William Wong, Rebekkah Hogan, Cynthia Gao, Rachael Brunckhorst, Ian Hill, Bob Jung, Jason Smith, Susan Kim Chan, Romina Stella, Keith Stevens. We also extend our gratitude to the many translators and the quality reviewers whose hard work is represented in our benchmarks and in our translation memories. Some of the languages were very difficult to source, and the burden in these cases often fell to a very small number of translators. We thank you for the many hours you spent translating and, in many cases, re-translating content.
260
+
261
+ ## References
262
+
263
+ Jade Abbott and Laura Martinus. 2019. Benchmarking neural machine translation for southern african languages. In Proceedings of the 2019 Workshop on Widening NLP, pages 98-101.
264
+
265
+ Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation.
266
+
267
+ ---
268
+
269
+ ${}^{33}$ Note that BLEU scores on test sets on different languages (e.g. in Tables 3 and 5) are not directly comparable.
270
+
271
+ ${}^{34}$ We note again that these scores are not directly comparable as the underlying test data are different.
272
+
273
+ ---
274
+
275
+ In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. Association for Computational Linguistics.
276
+
277
+ Ondrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (wmt18). In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 272-307, Belgium, Brussels. Association for Computational Linguistics.
278
+
279
+ Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar F Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 22-64. Association for Computational Linguistics.
280
+
281
+ Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2018. Multi-Source Cross-Lingual Model Transfer: Learning What to Share. arXiv:1810.03552.
282
+
283
+ Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale.
284
+
285
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.
286
+
287
+ Miquel Esplà, Mikel Forcada, Gema Ramírez-Sánchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale parallel corpora for the languages of the EU. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 118-119, Dublin, Ireland. European Association for Machine Translation.
288
+
289
+ Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.
290
+
291
+ Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, et al. 2018. Marian: Fast neural machine translation in C++. arXiv:1804.00344.
292
+
293
+ Philipp Koehn, Francisco Guzmán, Vishrav Chaudhary, and Juan Pino. 2019. Findings of the WMT
294
+
295
+ 2019 shared task on parallel corpus filtering for low-resource conditions. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 54-72, Florence, Italy. Association for Computational Linguistics.
296
+
297
+ Will Lewis. 2010. Haitian Creole: How to Build and Ship an MT Engine from Scratch in 4 days, 17 hours, & 30 minutes. In 14th Annual conference of the European Association for machine translation. Citeseer.
298
+
299
+ William D Lewis, Robert Munro, and Stephan Vogel. 2011. Crisis MT: Developing a Cookbook for MT in Crisis Situations. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 501-511. Association for Computational Linguistics.
300
+
301
+ Robert Munro. 2010. Crowdsourced translation for emergency response in haiti: the global collaboration of local knowledge. In Proceedings of AMTA Workshop on Collaborative Crowdsourcing for Translation, Denver, Colorado. AMTA.
302
+
303
+ Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 314-319, Florence, Italy. Association for Computational Linguistics.
304
+
305
+ Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.
306
+
307
+ Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), New Orleans, USA.
308
+
309
+ Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2019. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia.
310
+
311
+ Jörg Tiedemann. 2012a. Parallel data, tools and interfaces in OPUS. In Eighth International Conference on Language Resources and Evaluation, May 21-27, 2012, Istanbul, Turkey, pages 2214-2218.
312
+
313
+ Jörg Tiedemann. 2012b. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey. European Language Resources Association (ELRA).
314
+
315
+ Jörg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT - Building open translation services for the World. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT), Lisbon, Portugal.
316
+
317
+ UNOCHA. 2020. Global humanitarian response plan COVID-19. https://www.unocha.org/sites/unocha/files/Global-Humanitarian-Response-Plan-COVID-19.pdf. Accessed 2020-06-14.
318
+
319
+ Gawura Wañambi, Joy Bulkanhawuy, Stephen Dhamarrandji, and Rosemary Gundjarranbuy. 2020. Caring for Yolŋu and ways of life during COVID-19. https://indigenousx.com.au/caring-for-yolnu-and-ways-of-life-during-covid-19/, retrieved July 27.
320
+
321
+ Andy Way, Rejwanul Haque, Guodong Xie, Federico Gaspari, Maja Popovic, and Alberto Poncelas. 2020. Facilitating Access to Multilingual COVID-19 Information via Neural Machine Translation. arXiv:2005.00283.
322
+
323
+ ## Appendix
324
+
325
+ ## A Baseline MT Results
326
+
327
+ ## B Source Documents
328
+
329
+ The original English-language documents that make up our benchmark are listed in Table 7.
330
+
331
+ ## C Quality Assurance Documents
332
+
333
+ The 558 sentences of our quality assurance set comprise examples from almost all subdomains of the corpus. Specifically, the set includes 40 sentences from the conversational data, one PubMed document (PubMed_8), two of the Wikinews documents (Wikinews_1, Wikinews_3), and one complete Wikipedia article (Wikipedia_handpicked_4).
334
+
335
+ ## D Expected Quality per Language
336
+
337
+ For all translation directions the quality was quite good, with average quality scores above 95%. The detailed list of the quality evaluations on our sampled documents, which can be considered a proxy for the overall translation quality of our whole dataset, is available in Table 8.
338
+
339
+ ## E Terminology Terms
340
+
341
+ The Facebook terminologies provide translations from English into 129 languages/locales for the following terms: 1918 flu, acute bronchitis, acute respiratory disease, AIDS, airborne droplets, airway, alcohol-based, alcohol-based hand rub, alcohol-based hand sanitizer, alveolar disease, an epidemic, annual flu, anti-inflammatory drug, antiviral drug, antiviral treatment, assay, asymptomatic, avian flu, Black Plague, blood pressure, body fluid, breathing, bubonic plague, c19, cv19, case fatality rate, causative agent, CDC, chemical disinfection, chest x-ray, clinical diagnosis, clinical trial, close contact, cvirus, chronic respiratory disease, common cold, common flu, communicable disease, community spread, community transmission, compromised immune system, contagion, contagious, contagiousness, corona, corona virus, corona virus epidemic, corona virus outbreak, corona virus scare, coronavirus, coronavirus cases, coronavirus outbreak, coronavirus pandemic, coronavirus scare, cough, cough etiquette, coughing, cov 19, cov19, COVID, COVID-19 crisis, covid-19, covid19, COVID-19 epidemic, COVID-19 outbreak, COVID- 19 pandemic, COVID 19, current crisis, current health crisis, current outbreak, current pandemic, CV, CV-19, deadly, deadly virus, death rate, death toll, decontaminate, detectable, detergent, difficulty breathing, diabetes, diagnosable disease, diagnostic protocol, diagnostic testing, disease itself, disease outbreak, disinfectant, disposable, droplet transmission, droplets, dry cough, dry surface contamination, ebola, effective treatment, electron microscope, emergency department, epidemic, epidemic curve, epidemic peak, epidemiologist, exposure, extreme caution, eye protection, face mask, face masks, family cluster, fatality rate, fecal contamination, fever, flatten the infection curve, flu, food safety, formaldehyde, gastroenteritis, germicide, global pandemic, global warming, good respiratory hygiene, Guangdong, H1N1, H1N1 virus, hand disinfectant, hand 
sanitizer, health care provider, health crisis, health plan carrier, health services, heart failure, high fever, HIV, hospitalize, household bleach, Hubei, hydrogen peroxide, hygiene, immune system, immunity, immunocompromised, immunologist, incubation, incubation period, indirect contact, infect, infection, infectious, infectivity, influenza, initial transmission event, intensive care equipment, intermediate host, intrauterine, intubation, isolation, Isopropanol, laboratory test kit, laboratory testing, lack of testing, lack of tests, latest updates, liver failure, local public health authority, lockdown, lungs, masks, measles, mechanical ventilation, media coverage, medical care, MERS, MERS CoV, microbiologist, mild cough, mortality rate, multi-organ failure, nasal congestion, neutral detergent, new normal, novel corona virus, novel coronavirus, novel coronavirus outbreak, novel virus, online medical consultation, onset, outbreak, outbreak readiness, overall case fatality, pandemic, pangolin, pathogen, pathology, patient care equipment, personal protective equipment, person-to-person transmission, phlegm, physical contact, plague, pneumonia, pneumonias, positive test, precaution, precautionary, pre-existing condition, preparedness, prognosis, protect myself, protect others, protect yourself, protective measures, public health crisis, pulmonary tissue damage, quarantine, rapid risk assessment, reagent, regular flu, reinfection, renal disease, renal failure, respirator, respiratory, respiratory disease, respiratory distress, respiratory droplets, respiratory hygiene, respiratory illness, respiratory syndrome, respiratory tract disease, restrictions, resurge, RNA, rubeola, runny nose, SARS, SARS-CoV-2, SARS-CoV, sars-related coronavirus, seasonal flu, seasonal influenza, secretion, self-quarantine, sepsis, septic shock, Severe Acute Respiratory Syndrome, shortness of breath, sickened, sickness, sneeze, sneezes, sneezing, social distancing, sore throat, spanish flu, 
specimen collection, spread, sputum, stigmatisation, supportive, surface, surfaces, suspected infection, swab, swine flu, symptom, symptomatic, symptoms, targeted disinfection, test kit, testing error, tissue damage, tp shortage, toilet paper shortage, touch, touching, transmissibility, transmissible, transmission, transmission potential, typical flu, underlying disease, updates, vaccine, vaccines, ventilation, ventilator, ventilators, viral infection, viral outbreak, viral pandemic, viral pneumonia, virtual care, virulence, virus itself, virus outbreak, virus scare, virus spreads, virus strain, viral transmission, vivid-19, washing hands, washing ones hands, wash your hands, white blood cell count, widespread transmission, working from home, WFH, Wuhan, aetiology, alpha coronavirus, alveolus, antibody test, antibody therapy, antigen, antimicrobial agent, antiviral antibody, associated disease, betacoronavirus, bioaerosol, bronchoalveolar, canine coronavirus, cDNA, complete genome, confirmatory testing, coronavirus envelope, cytological, diabetes mellitus, diclofenac, dyspnoea, ectodomain, emergency airway management, envelope antibody, fomite, food safety, genome analysis, glycoprotein, gp surgery, hemoptysis, histological examination, host cell, Immunofluorescence, immunopathogenesis, interferon, Lancet, lopinavir, microbial flora, nucleic acid test, pan-coronavirus assay, passive antibody therapy, pathogenesis, pathophysiology, phylogenetic analysis, phylogenic tree, PPE, prophylaxis, protein sequence analysis, reverse transcription polymerase chain reaction, serological test, sodium hypochlorite, thermal disinfection, ultraviolet light exposure, vertical transmission, viral antigen, viral transmission, virology, virucidal.
342
+
343
+ The Google terminologies provide translations from English into 100 languages/locales of various amounts of terms. There are 27 more translation directions provided, for which we don't provide details for lack of space. The following subset of terms, though, should be common across all English-to-X terminologies: 14 days in isolation, 14 days quarantine, 2019 coronavirus, 2019 novel coronavirus, 2019-nCoV, 2020 coronavirus, 2020 novel coronavirus, about coronavirus, acute respiratory distress syndrome, advanced hand sanitizer, Affected by coronavirus, after exposure, After the epidemic, After the outbreak, airborne virus, alcohol based hand sanitizer, alcohol hand sanitizer, alcohol-based hand sanitizer, Anti-Corona virus spray, antibacterial hand sanitizer, ARDS, avoid exposure, be infected, Beware of coronavirus, bilateral interstitial pneumonia, CDC, Center for Disease Control, community spread, compulsory quarantine, contagious, coronavirus, coronavirus (COVID-19), coronavirus alert, coronavirus cases, coronavirus concerns, coronavirus crisis, coronavirus disease, Coronavirus disease (COVID-19) outbreak, coronavirus early symptoms, coronavirus epidemic, coronavirus exposure, coronavirus incubation, coronavirus incubation period, coronavirus infection, coronavirus map, coronavirus medicine, coronavirus medicines, coronavirus news, coronavirus outbreak, coronavirus pandemic, coronavirus pneumonia, coronavirus precautions, coronavirus prevention, coronavirus protection, coronavirus quarantine, coronavirus SOS Alert, coronavirus spread, coronavirus symptoms, coronavirus transmission, coronavirus travel ban, coronavirus travel restrictions, coronavirus treatment, coronavirus update, coronavirus vaccine, coronavirus vaccines, covid cases, covid early symptoms, covid incubation period, covid international spread, covid international travel, covid isolation, covid map, covid medicine, covid medicines, covid news, covid outbreak, covid pandemic, covid panic, covid SOS 
Alert, covid symptoms, covid transmission, covid travel ban, covid travel restrictions, covid treatment, covid vaccine, covid vaccines, covid-19, covid-19 alert, covid-19 cases, covid-19 CDC, covid-19 contagious, covid-19 cure, covid-19 dangerous, covid-19 deadly, covid-19 death, covid-19 deaths, covid-19 domestic travel, covid- 19 early symptoms, covid-19 effects, covid-19 epidemic, covid-19 exposure, covid-19 fatal, covid-19 fever, covid-19 illness, covid-19 incubation, covid-19 incubation period, covid-19 infection, covid-19 international spread, covid-19 international travel, covid-19 isolation, covid-19 lockdown, covid-19 map, covid-19 medicine, covid-19 medicines, covid-19 news, covid-19 outbreak, covid-19 pandemic, covid-19 panic, covid-19 precautions, covid-19 protection, covid-19 quarantine, covid-19 SOS Alert, covid-19 spread, covid-19 symptoms, covid-19 transmission, covid-19 travel ban, covid-19 travel restrictions, covid-19 treatment, covid-19 uncontrolled spread, covid-19 vaccine, covid-19 vaccines, COVID-19 virus, covid-19 virus outbreak, covid-19 virus transmission, covid-19 WHO, covid19, covid19 alert, covid19 cases, covid19 CDC, covid19 deaths, covid19 domestic travel, covid19 effects, covid19 epidemic, covid19 exposure, covid19 fatal, covid19 fever, covid19 illness, covid19 incubation, covid19 incubation period, covid19 infection, covid19 international spread, covid19 international travel, covid19 isolation, covid19 lockdown, covid19 map, covid19 medicine, covid19 medicines, covid19 news, covid19 outbreak, covid19 pandemic, covid19 precautions, covid19 protection, covid19 quarantine, covid19 SOS Alert, covid19 spread, covid19 symptoms, covid19 transmission, covid19 travel ban, covid19 travel restrictions, covid19 treatment, covid19 vaccine, covid19 vaccines, covid19 virus, covid19 virus outbreak, covid19 virus transmission, current outbreak, deadly outbreak, disease outbreak, Disposable hand sanitizer, domestic travel, droplets, Effects of 
coronavirus, epidemic, epidemic and pandemic, epidemic disease, epidemic outbreak, epidemic period, epidemic prevention, epidemic season, epidemic situation, exposure, exposure time, fever, fight the virus, Fighting the outbreak, flu epidemic, fomites, global health emergency, global outbreak, global pandemic, hand sanitizer, hand sanitizer dispenser, hand sanitizer gel, hand sanitizer spray, home isolation, home quarantine, illness, incubation period, infected, instant hand sanitizer, international spread, international travel, isolation, isolation period, isolation room, isolation valve, isolation ward, lockdown, major outbreak, mandatory quarantine, mass gathering, medicine, medicines, n95, n95 mask, n95 respirator, ncov, ncov-2019, new coronavirus, new coronavirus pneumonia, novel coronavirus, novel coronavirus infection, novel coronavirus outbreak, novel coronavirus pneumonia, ongoing outbreak, outbreak, outbreak of coronavirus, outbreak of disease, pandemic, pandemic influenza, pandemic outbreak, pandemic plan, pandemic potential, pneumonia, pneumonia epidemic, potential exposure, precautions, prevent virus, prolonged exposure, quarantine, quarantine area, quarantine facility, quarantine measures, quarantine period, quarantine room, quarantine zone, Recovered coronavirus patient, repatriate, repeated exposure, respiratory syncytial virus, respiratory virus, Sanitizing hand sanitizer, SARS-CoV-2, self isolation, self quarantine, Severe outbreak, social distancing, social distancing measures, social isolation, SOS Alert, spread, spread of coronavirus, spread of virus, strain of virus, the incubation period, the novel coronavirus, touching face, travel advisory, travel ban, travel restrictions, use hand sanitizer, vaccines, viral outbreak, virtual lockdown, virus, virus carrier, virus infection, virus mask, virus outbreak, virus prevention, virus protection, virus spread, virus spreads, virus strain, virus transmission, wash your hands, washing hands, WHO, WHO 
Confirmed, WHO Deaths, widespread outbreak, World Health Organization, zoonotic disease, zoonotic virus.
344
+
345
+ <table><tr><td rowspan="2">en→:</td><td rowspan="2">Overall</td><td colspan="5">Translation Accuracy by Domain (BLEU)</td></tr><tr><td>PubMed</td><td>Conv.</td><td>Wikisource</td><td>Wikinews</td><td>Wikipedia</td></tr><tr><td colspan="7">HelsinkiNLP OPUS-MT</td></tr><tr><td>es-LA</td><td>48.73</td><td>49.87</td><td>32.11</td><td>39.73</td><td>53.20</td><td>48.70</td></tr><tr><td>es-LA†</td><td>49.25</td><td>50.17</td><td>30.60</td><td>40.74</td><td>53.29</td><td>49.33</td></tr><tr><td>fr</td><td>37.59</td><td>27.11</td><td>30.86</td><td>39.72</td><td>28.44</td><td>42.69</td></tr><tr><td>id</td><td>41.27</td><td>37.75</td><td>28.68</td><td>40.52</td><td>42.85</td><td>43.2</td></tr><tr><td>pt-BR</td><td>47.26</td><td>46.64</td><td>30.85</td><td>36.52</td><td>48.21</td><td>48.32</td></tr><tr><td>pt-BR ${}^{ \dagger }$</td><td>47.27</td><td>47.63</td><td>24.90</td><td>36.11</td><td>49.26</td><td>47.94</td></tr><tr><td>ru</td><td>25.49</td><td>21.65</td><td>18.43</td><td>18.40</td><td>24.24</td><td>27.84</td></tr><tr><td>SW</td><td>22.62</td><td>19.94</td><td>19.59</td><td>27.52</td><td>26.79</td><td>23.61</td></tr><tr><td>$\lg$</td><td>2.96</td><td>2.54</td><td>1.71</td><td>5.17</td><td>3.37</td><td>3.01</td></tr><tr><td>ln</td><td>7.85</td><td>8.33</td><td>5.40</td><td>12.0</td><td>7.42</td><td>7.38</td></tr><tr><td>mr</td><td>0.21</td><td>0.18</td><td>0.96</td><td>0.30</td><td>0.62</td><td>0.19</td></tr><tr><td colspan="7">Fairseq</td></tr><tr><td>fr</td><td>36.96</td><td>26.83</td><td>27.71</td><td>41.70</td><td>27.55</td><td>42.35</td></tr><tr><td>ru</td><td>28.88</td><td>26.45</td><td>15.89</td><td>21.52</td><td>28.95</td><td>30.61</td></tr><tr><td colspan="7">WMT-18</td></tr><tr><td>zh</td><td>33.70</td><td>41.66</td><td>16.26</td><td>23.35</td><td>28.51</td><td>30.10</td></tr><tr><td colspan="7">Our OPUS 
models</td></tr><tr><td>ar</td><td>15.16</td><td>11.81</td><td>10.47</td><td>11.38</td><td>14.36</td><td>17.50</td></tr><tr><td>fa</td><td>8.48</td><td>7.85</td><td>3.88</td><td>14.54</td><td>5.26</td><td>8.72</td></tr><tr><td>prs*</td><td>9.49</td><td>7.73</td><td>2.39</td><td>10.40</td><td>9.08</td><td>10.51</td></tr><tr><td>mr</td><td>0.12</td><td>0.09</td><td>1.07</td><td>0.24</td><td>0.30</td><td>0.09</td></tr><tr><td>om</td><td>0.57</td><td>0.53</td><td>0.75</td><td>1.25</td><td>0.47</td><td>0.54</td></tr><tr><td>zu</td><td>11.73</td><td>13.98</td><td>16.42</td><td>16.74</td><td>14.12</td><td>10.18</td></tr><tr><td colspan="7">Our Multilingual TED</td></tr><tr><td>hi</td><td>6.43</td><td>5.45</td><td>10.37</td><td>6.34</td><td>3.08</td><td>6.80</td></tr><tr><td>ms</td><td>6.26</td><td>5.90</td><td>6.12</td><td>10.15</td><td>6.17</td><td>6.17</td></tr><tr><td>id</td><td>25.65</td><td>23.76</td><td>20.31</td><td>27.92</td><td>24.40</td><td>26.54</td></tr><tr><td>ur</td><td>2.79</td><td>2.79</td><td>6.48</td><td>5.02</td><td>4.0</td><td>2.40</td></tr><tr><td colspan="7">WMT-20</td></tr><tr><td>zh</td><td>57.83</td><td>68.88</td><td>41.49</td><td>33.57</td><td>55.97</td><td>53.45</td></tr><tr><td>ps</td><td>36.56</td><td>49.26</td><td>26.94</td><td>12.15</td><td>8.85</td><td>32.25</td></tr><tr><td>ru</td><td>40.20</td><td>29.71</td><td>26.37</td><td>22.90</td><td>40.44</td><td>46.38</td></tr></table>
346
+
347
+ Table 3: Baseline results on some English-to-X translation directions. †: the rows marked with † show the quality of the outputs obtained using the European Spanish and Portuguese systems, while the top lines use the es_MX and pt_BR tags. ♣: the results on Dari (prs) are with the English-Farsi (fa) model.
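The scores in these tables are corpus-level BLEU. As a rough, self-contained illustration of the metric, the sketch below implements a simplified BLEU (4-gram modified precision with add-one smoothing and a brevity penalty) in pure Python. The example strings are invented; real evaluations should use a standard implementation such as sacreBLEU, which also handles tokenization, smoothing variants, and multiple references.

```python
# Simplified BLEU for a single hypothesis/reference pair: geometric
# mean of modified n-gram precisions (n = 1..4, add-one smoothed)
# times a brevity penalty. For intuition only, not a drop-in
# replacement for sacreBLEU.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum((hyp_counts & ref_counts).values())  # clipped matches
        total = max(sum(hyp_counts.values()), 1)
        # add-one smoothing so one empty n-gram order does not zero the score
        log_prec += math.log((overlap + 1) / (total + 1))
    # brevity penalty: punish hypotheses shorter than the reference
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return 100.0 * bp * math.exp(log_prec / max_n)
```

On identical hypothesis and reference strings this returns 100.0; missing n-grams and a short hypothesis both lower the score.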
348
+
349
+ <table><tr><td rowspan="2">→en:</td><td rowspan="2">Overall</td><td colspan="5">Translation Accuracy by Domain (BLEU)</td></tr><tr><td>PubMed</td><td>Conv.</td><td>Wikisource</td><td>Wikinews</td><td>Wikipedia</td></tr><tr><td colspan="7">Naver Papago</td></tr><tr><td>es-LA</td><td>52.55</td><td>54.31</td><td>35.47</td><td>45.60</td><td>50.25</td><td>52.50</td></tr><tr><td>es-LA◇</td><td>52.78</td><td>54.16</td><td>34.82</td><td>44.90</td><td>51.24</td><td>52.97</td></tr><tr><td>fr</td><td>41.65</td><td>30.58</td><td>32.85</td><td>43.57</td><td>27.07</td><td>47.95</td></tr><tr><td>fr◇</td><td>42.12</td><td>30.84</td><td>32.17</td><td>43.82</td><td>28.67</td><td>48.61</td></tr><tr><td colspan="7">HelsinkiNLP OPUS-MT</td></tr><tr><td>es-LA</td><td>46.82</td><td>49.23</td><td>33.02</td><td>39.99</td><td>44.69</td><td>46.38</td></tr><tr><td>fr</td><td>39.40</td><td>28.78</td><td>26.10</td><td>39.55</td><td>28.10</td><td>45.09</td></tr><tr><td>id</td><td>34.86</td><td>33.40</td><td>26.52</td><td>38.64</td><td>37.79</td><td>35.35</td></tr><tr><td>pt-BR</td><td>48.56</td><td>49.62</td><td>31.52</td><td>40.05</td><td>43.11</td><td>40.19</td></tr><tr><td>ru</td><td>28.53</td><td>26.43</td><td>21.70</td><td>27.27</td><td>25.39</td><td>29.94</td></tr><tr><td>hi</td><td>18.91</td><td>16.99</td><td>23.60</td><td>22.22</td><td>14.68</td><td>19.71</td></tr><tr><td>sw</td><td>8.29</td><td>7.67</td><td>7.56</td><td>13.50</td><td>7.53</td><td>8.31</td></tr><tr><td>lg</td><td>5.62</td><td>4.66</td><td>5.54</td><td>7.44</td><td>5.31</td><td>6.04</td></tr><tr><td>ln</td><td>6.71</td><td>7.03</td><td>1.86</td><td>9.97</td><td>4.58</td><td>6.41</td></tr><tr><td colspan="7">WMT-18</td></tr><tr><td>zh</td><td>28.94</td><td>32.61</td><td>16.82</td><td>17.11</td><td>31.76</td><td>27.68</td></tr><tr><td colspan="7">Our OPUS models</td></tr><tr><td>ar</td><td>28.56</td><td>25.25</td><td>13.39</td><td>23.07</td><td>23.82</td><td>31.22</td></tr><tr><td>fa</td><td>15.07</td><td>12.00</td><td>23.44</td><td>23.68</td><td>14.78</td><td>16.42</td></tr><tr><td>prs♣</td><td>15.16</td><td>15.19</td><td>15.76</td><td>20.02</td><td>15.57</td><td>14.72</td></tr><tr><td>mr</td><td>1.16</td><td>1.02</td><td>1.46</td><td>1.74</td><td>1.80</td><td>1.56</td></tr><tr><td>om</td><td>2.11</td><td>1.72</td><td>1.90</td><td>4.41</td><td>2.87</td><td>2.14</td></tr><tr><td>zu</td><td>25.52</td><td>26.32</td><td>22.25</td><td>30.03</td><td>28.03</td><td>24.75</td></tr></table>
350
+
351
+ Table 4: Baseline results on some X-to-English translation directions. ♣: the results on Dari (prs) are with the Farsi (fa)-English model. ◇: results with the systems adapted to the medical domain.
352
+
353
+ <table><tr><td rowspan="2">fr→:</td><td rowspan="2">Overall</td><td colspan="5">Translation Accuracy by Domain (BLEU)</td></tr><tr><td>PubMed</td><td>Conv.</td><td>Wikisource</td><td>Wikinews</td><td>Wikipedia</td></tr><tr><td colspan="7">HelsinkiNLP OPUS-MT</td></tr><tr><td>en</td><td>39.40</td><td>28.78</td><td>26.10</td><td>39.55</td><td>28.10</td><td>45.09</td></tr><tr><td>es-LA</td><td>34.95</td><td>26.28</td><td>25.31</td><td>31.42</td><td>29.25</td><td>39.87</td></tr><tr><td>ru</td><td>15.11</td><td>10.49</td><td>13.66</td><td>16.73</td><td>10.38</td><td>17.50</td></tr><tr><td>sw</td><td>3.83</td><td>2.40</td><td>3.35</td><td>3.66</td><td>2.30</td><td>4.56</td></tr><tr><td>lg</td><td>1.48</td><td>1.05</td><td>1.34</td><td>1.73</td><td>0.77</td><td>1.69</td></tr><tr><td>ln</td><td>6.14</td><td>5.25</td><td>3.59</td><td>9.71</td><td>2.16</td><td>6.61</td></tr></table>
354
+
355
+ Table 5: Baseline results on some French-to-X translation directions.
356
+
357
+ <table><tr><td rowspan="2">→fr:</td><td rowspan="2">Overall</td><td colspan="5">Translation Accuracy by Domain (BLEU)</td></tr><tr><td>PubMed</td><td>Conv.</td><td>Wikisource</td><td>Wikinews</td><td>Wikipedia</td></tr><tr><td colspan="7">HelsinkiNLP OPUS-MT</td></tr><tr><td>es-LA</td><td>29.21</td><td>22.95</td><td>22.24</td><td>29.88</td><td>21.95</td><td>32.63</td></tr><tr><td>en</td><td>37.59</td><td>27.11</td><td>30.86</td><td>39.72</td><td>28.44</td><td>42.69</td></tr><tr><td>id</td><td>18.95</td><td>13.59</td><td>17.92</td><td>22.83</td><td>14.02</td><td>21.48</td></tr><tr><td>ru</td><td>17.62</td><td>12.94</td><td>18.34</td><td>19.82</td><td>13.52</td><td>19.96</td></tr><tr><td>lg</td><td>2.91</td><td>1.76</td><td>1.81</td><td>5.30</td><td>0.81</td><td>3.32</td></tr><tr><td>ln</td><td>4.77</td><td>3.62</td><td>2.88</td><td>7.74</td><td>2.96</td><td>5.22</td></tr><tr><td>sw</td><td>5.62</td><td>3.96</td><td>2.18</td><td>8.18</td><td>2.89</td><td>6.39</td></tr></table>
358
+
359
+ Table 6: Baseline results on some X-to-French translation directions.
360
+
361
+ <table><tr><td>Doc ID</td><td>Type</td><td>Source/URL</td></tr><tr><td colspan="3">Conversational</td></tr><tr><td>CMU_1</td><td>medical-domain phrases</td><td>http://www.speech.cs.cmu.edu/haitian/text/1600_medical_domain_sentences.en</td></tr><tr><td colspan="3">PubMed</td></tr><tr><td>PubMed_6</td><td>Scientific Article</td><td>https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7096777/</td></tr><tr><td>PubMed_7</td><td>Scientific Article</td><td>https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7098028/</td></tr><tr><td>PubMed_8</td><td>Scientific Article</td><td>https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7098031/</td></tr><tr><td>PubMed_9</td><td>Scientific Article</td><td>https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7119513/</td></tr><tr><td>PubMed_10</td><td>Scientific Article</td><td>https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7124955/</td></tr><tr><td>PubMed_11</td><td>Scientific Article</td><td>https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7125052/</td></tr><tr><td colspan="3">Wikipedia</td></tr><tr><td>Wikipedia_hand_1</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/2019%E2%80%9320_coronavirus_pandemic</td></tr><tr><td>Wikipedia_hand_3</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/COVID-19_testing</td></tr><tr><td>Wikipedia_hand_4</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/Hand_washing</td></tr><tr><td>Wikipedia_hand_5</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/Impact_of_the_2019%E2%80%9320_coronavirus_pandemic_on_education</td></tr><tr><td>Wikipedia_hand_7</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/Workplace_hazard_controls_for_COVID-19</td></tr><tr><td>Wiki_20</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/Angiotensin-converting_enzyme_2</td></tr><tr><td>Wiki_26</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/Bat_SARS-like_coronavirus_WIV1</td></tr><tr><td>Wiki_7</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/COVID-19_apps</td></tr><tr><td>Wiki_13</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/COVID-19_drug_development</td></tr><tr><td>Wiki_10</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/COVID-19_drug_repurposing_research</td></tr><tr><td>Wiki_29</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/COVID-19_in_pregnancy</td></tr><tr><td>Wiki_11</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/COVID-19_surveillance</td></tr><tr><td>Wiki_4</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/COVID-19_vaccine</td></tr><tr><td>Wiki_5</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/Coronavirus</td></tr><tr><td>Wiki_9</td><td>Wikipedia Article</td><td>https://en.wikipedia.org/wiki/Coronavirus_disease_2019</td></tr><tr><td colspan="3">Wikinews</td></tr><tr><td>Wikinews_1</td><td>News Segment</td><td>https://en.wikinews.org/wiki/Bangladesh_reports_five_new_deaths_due_to_COVID-19,_a_daily_highest</td></tr><tr><td>Wikinews_2</td><td>News Segment</td><td>https://en.wikinews.org/wiki/National_Basketball_Association_suspends_season_due_to_COVID-19_concerns</td></tr><tr><td>Wikinews_3</td><td>News Segment</td><td>https://en.wikinews.org/wiki/SARS-CoV-2_surpasses_one_million_infections_worldwide</td></tr><tr><td>Wikinews_4</td><td>News Segment</td><td>https://en.wikinews.org/wiki/Stores_in_Australia_lower_toilet_paper_limits_per_transaction</td></tr><tr><td>Wikinews_5</td><td>News Segment</td><td>https://en.wikinews.org/wiki/US_President_Trump_declares_COVID-19_national_emergency</td></tr><tr><td>Wikinews_6</td><td>News Segment</td><td>https://en.wikinews.org/wiki/World_Health_Organization_declares_COVID-19_pandemic</td></tr><tr><td colspan="3">Wikivoyage</td></tr><tr><td>Wikivoyage_1</td><td>Announcement</td><td>https://en.wikivoyage.org/wiki/2019%E2%80%932020_coronavirus_pandemic</td></tr><tr><td colspan="3">Wikisource</td></tr><tr><td>Wikisource_1</td><td>Executive Order</td><td>https://en.wikisource.org/wiki/California_Executive_Order_N-33-20</td></tr><tr><td>Wikisource_2</td><td>Communiqué</td><td>https://en.wikisource.org/wiki/Covid-19:_Lightening_the_load_and_preparing_for_the_future</td></tr></table>
362
+
363
+ Table 7: List of all source documents for our translation benchmark.
364
+
365
+ <table><tr><td>en→:</td><td>QA score (%) (initial →) final</td><td>re-worked</td><td>en→:</td><td>QA score (%) (initial →) final</td><td>re-worked</td></tr><tr><td>ar</td><td>99.28</td><td/><td>ha</td><td>94.29</td><td>✓</td></tr><tr><td>zh</td><td>98.89</td><td/><td>km</td><td>89.15 → 91.23</td><td>✓</td></tr><tr><td>fr</td><td>95.78</td><td>✓</td><td>am</td><td>84 → 93.85</td><td>✓</td></tr><tr><td>pt-BR</td><td>97.85</td><td>✓</td><td>zu</td><td>97.77</td><td>✓</td></tr><tr><td>es-419</td><td>99.60</td><td/><td>tl</td><td>98.08</td><td>✓</td></tr><tr><td>hi</td><td>98.39</td><td>✓</td><td>ms</td><td>98.32</td><td/></tr><tr><td>ru</td><td>98.36</td><td/><td>ta</td><td>79 → 94.74</td><td>✓</td></tr><tr><td>sw</td><td>95.43</td><td>✓</td><td>my</td><td>97.68</td><td/></tr><tr><td>id</td><td>97.98</td><td/><td>bn</td><td>94.78</td><td>✓</td></tr><tr><td>ku</td><td>99.96</td><td/><td>ur</td><td>94</td><td>✓</td></tr><tr><td>ln</td><td>96.13</td><td/><td>fa</td><td>95.37 → 96.35</td><td>✓</td></tr><tr><td>lg</td><td>99.63</td><td/><td>mr</td><td>92 → 96.45</td><td>✓</td></tr><tr><td>ne</td><td>95.32</td><td>✓</td><td>om</td><td>94.25</td><td>✓</td></tr><tr><td>pus</td><td>98.96</td><td/><td>ckb</td><td>99.94</td><td/></tr><tr><td>ti-ET</td><td>81.98 → 93.47</td><td>✓</td><td>din</td><td>99.40</td><td/></tr><tr><td>so</td><td>94.80</td><td>✓</td><td>kr</td><td>98.88</td><td/></tr><tr><td>prs</td><td>96.34</td><td>✓</td><td>ti-ER</td><td>90.98</td><td>✓</td></tr><tr><td>fuv-Latn-NG</td><td>99.41</td><td/><td>swc</td><td>N/A</td><td/></tr><tr><td>rw</td><td>99.73</td><td/><td>nus</td><td>N/A</td><td/></tr><tr><td colspan="6">Average QA score (final): 96.77</td></tr></table>
366
+
367
+ Table 8: QA score across languages. We report the initial QA score and whether re-work was required on the produced translations (beyond the QA sample, due to serious translation errors). In cases where the initial QA yielded very poor results, the translations were corrected in their entirety and a new QA process was performed, and we report both the initial and the final QA results. Note: an "N/A" final QA score in the table indicates that the final score is temporarily not available; we will report it in an updated version of the paper.
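A QA score of this kind is simply the percentage of sampled segments that pass review. The sketch below shows that aggregation; the boolean-judgment format is hypothetical, as the paper does not specify the QA tooling.

```python
# Toy QA pass-rate: percentage of sampled segments a reviewer marked
# acceptable. Returns None when no judgments are available yet
# (reported as "N/A" in Table 8).
def qa_score(judgments):
    """judgments: list of booleans, True = segment passed review."""
    if not judgments:
        return None
    return round(100.0 * sum(judgments) / len(judgments), 2)
```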
368
+
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/-0xPrt01VXD/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,287 @@
1
+ § TICO-19: THE TRANSLATION INITIATIVE FOR COVID-19
2
+
3
+ Antonios Anastasopoulos, Alessandro Cattelan, Zi-Yi Dou, Marcello Federico,
4
+
5
+ Christian Federmann, Dmitriy Genzel, Francisco Guzmán, Junjie Hu, Macduff Hughes,
6
+
7
+ Philipp Koehn, Rosie Lazar, Will Lewis, Graham Neubig, Mengmeng Niu,
8
+
9
+ Alp Öktem, Eric Paquin, Grace Tang, Sylwia Tur
10
+
11
+ Department of Computer Science, George Mason University
12
+
13
+ Language Technologies Institute, Carnegie Mellon University
14
+
15
+ Translated, Amazon AI, Microsoft, Facebook AI, Johns Hopkins University
16
+
17
+ Appen, Google, Translators without Borders
18
+
19
+ tico19.2020@gmail.com
20
+
21
+ § ABSTRACT
22
+
23
+ The COVID-19 pandemic is the worst pandemic to strike the world in over a century. Crucial to stemming the tide of the SARS-CoV-2 virus is communicating to vulnerable populations the means by which they can protect themselves. To this end, the collaborators forming the Translation Initiative for COvid-19 (TICO-19) ${}^{1}$ have made test and development data available to AI and MT researchers in 35 different languages in order to foster the development of tools and resources for improving access to information about COVID-19 in these languages. In addition to 9 high-resourced, "pivot" languages, the team is targeting 26 lesser resourced languages, in particular languages of Africa, South Asia and South-East Asia, whose populations may be the most vulnerable to the spread of the virus. The same data is translated into all of the languages represented, meaning that testing or development can be done for any pairing of languages in the set. Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages. ${}^{2}$
24
+
25
+ § 1 INTRODUCTION
26
+
27
+ The COVID-19 pandemic marks the worst pandemic to strike the world since 1918. At the time of this writing, ${}^{3}$ the SARS-CoV-2 coronavirus responsible for COVID-19 has infected over ten million people worldwide, with over half a million deaths. While these numbers are likely under-reported, they are growing at an alarming rate, and many millions of people could become infected or perish without proper prevention measures.
28
+
29
+ Effective communication from health authorities is essential to protect at-risk populations, slow down the spread of the disease, and decrease its morbidity and mortality (UNOCHA, 2020). Yet, preventive measures such as stay-at-home orders, social distancing, and requirements to wear personal protective equipment (e.g. masks, gloves, etc.) have proven difficult to relay. That is without accounting for the difficulty of disseminating correct technical information about the disease, such as symptoms (e.g., fever, chills, etc.), specifics about testing (e.g., viral vs. antibody testing), and treatments (e.g., intubation, plasma transfusion).
30
+
31
+ While official communications from the World Health Organization (WHO) are constantly published and revised, they are mostly limited to major languages. This has resulted in a vacuum in many languages that has been filled by an infodemic of misinformation, as described by the WHO. Nongovernmental organizations (NGOs) such as Translators without Borders (TWB) play an important role in delivering multilingual communication in emergencies, such as the COVID-19 pandemic, but their reach and capacity have been outstripped by the needs presented by the pandemic. To date, TWB has translated over 3.5 million words with over 80 non-profit organizations for more than 100 language pairs as part of their COVID-19 response.
32
+
33
+ Translation technologies such as automatic Machine Translation (MT) and Computer Assisted Translation (CAT) present unique opportunities to scale the throughput of human translators. However, given the sensitivity of the content, it is critical that the translations produced automatically are of the highest possible quality.
34
+
35
+ The Translation Initiative for COvid-19 (TICO-19) effort marks a unique collaboration between public and private entities that came together shortly after the beginning of the pandemic. ${}^{4}$ The focus of TICO-19 is to enable the translation of content related to COVID-19 into a wide range of languages. First, we make available a collection of translation memories and technical glossaries so that language service providers (LSPs), translators and volunteers can make use of them to expedite their work and ensure consistency and accuracy. Second, we provide an open-source, multi-lingual benchmark set (which includes data for very-low-resource languages) specialized in the medical domain, which is intended to track the quality of current machine translation systems, thus enabling future research in the area. Lastly, we provide monolingual and bilingual resources for MT practitioners to use in order to advance the state-of-the-art in medical and humanitarian Machine Translation, as well as other natural language processing (NLP) applications.
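As a concrete illustration of the translation-memory format mentioned above, the sketch below writes aligned segment pairs as a minimal TMX 1.4 file using only the Python standard library. The header metadata is reduced to the required attributes and the example pair is invented; real TMX files produced for localizers carry richer metadata.

```python
# Minimal sketch of serializing aligned segments as a TMX 1.4
# translation memory (header attributes kept to the required minimum).
import xml.etree.ElementTree as ET

def to_tmx(pairs, src_lang="en", tgt_lang="sw"):
    """pairs: list of (source, target) segment strings."""
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", {
        "srclang": src_lang, "adminlang": "en",
        "datatype": "plaintext", "segtype": "sentence",
        "o-tmf": "plain", "creationtool": "sketch",
        "creationtoolversion": "0.1",
    })
    body = ET.SubElement(tmx, "body")
    for src, tgt in pairs:
        tu = ET.SubElement(body, "tu")  # one translation unit per pair
        for lang, text in ((src_lang, src), (tgt_lang, tgt)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = text
    return ET.tostring(tmx, encoding="unicode")
```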
36
+
37
+ ${}^{1}$ Collaborators in the initiative include Translators without Borders, Carnegie Mellon University, Johns Hopkins University, George Mason University, Amazon Web Services, Appen, Facebook, Google, Microsoft, and Translated.
38
+
39
+ ${}^{2}$ The dataset, translation memories, and additional resources are freely available online: http://tico-19.github.io/. As the project continues and we create data for more languages, we will keep updating this paper as well as the project's website.
40
+
41
+ ${}^{3}$ July 1st, 2020
42
+
43
+ Our hope is that our work will in the short-term enable the translation of important communications into multiple languages, and that in the long-term, it will serve to foster the research on MT for specialized content into low-resource languages. Through these resources we hope that our society is better prepared to quickly respond to the needs of translation in the midst of crises (e.g., for future crises, à la Lewis et al. (2011)).
44
+
45
+ § 2 THE VALUE OF TRANSLATION TECHNOLOGIES IN CRISIS SCENARIOS
46
+
47
+ During a crisis, whether it is local to one region or is a worldwide pandemic, communicating effectively in the languages and formats people understand is central to effective programs on the ground. For example, as part of the effort to control the spread of COVID-19, the Global Humanitarian Response Plan recognizes community engagement in relevant languages as a key strategy (UNOCHA, 2020). ${}^{5}$ In some countries, this will be all the more vital because information will be the main defense against the disease, and particular effort will be needed to make it accessible and grounded in local culture and context. Among these are countries where large sections of the population do not speak the dominant language.
48
+
49
+ Historically, MT, NLP and translation technologies have played a crucial role in crisis scenarios. The response to the Haitian earthquake in 2010 was notable for the broad use of technology in the humanitarian response, relying more on crowd-sourced translations and geolocation, but notably, translation technology was also used. In the days following the earthquake, Haitian citizens were encouraged to text messages requesting assistance to "4636", and as many as 5,000 messages were texted to this number per hour. Unfortunately for the aid agencies, whose dominant languages were English and French (aid agencies included the US Navy, the Red Cross, and Doctors without Borders), most of the SMS messages were in Haitian Kreyòl. Quickly, the Haitian Kreyòl speaking diaspora around the world were activated by the Mission 4636 consortium to translate the SMS messages and geolocate (Munro, 2010), and the translated messages were handed off to aid agencies for triage and action. The Mission 4636 infrastructure included a high-precision rule-based MT (Lewis et al., 2011), and within days to weeks after the earthquake, statistical MT engines were brought online by Microsoft (Lewis, 2010) and Google. ${}^{6}$
50
+
51
+ Translation technology continues to be used in a variety of crisis and relief scenarios. Notable among these is Translators without Borders' (TWB) use of translation memories for translating to a number of under-resourced languages in relief settings. Likewise, the Standby Task Force, ${}^{7}$ who are activated in a variety of relief settings, note the use of MT in various deployments around the world, e.g., for Urdu in the Pakistan earthquake of 2011 and for Spanish in the Ecuador earthquake in 2016. The EU-funded INTERnAtional network on Crisis Translation (INTERACT) ${}^{8}$ project, started a couple of years before the COVID-19 pandemic, focused on crisis translation, specifically in health crises such as pandemics, with a focus on improving resilience in times of crises through communication, ultimately with the goal of reducing loss of life. ${}^{9}$ Likewise, during the current pandemic, several community-driven efforts have sprung up to fulfil the need for information communication. The Endangered Languages Project ${}^{10}$ , for example, has collected community-produced translations of public health information in hundreds of languages in various formats.
52
+
53
+ ${}^{4}$ The World Health Organization (WHO) declared COVID-19 a pandemic on March 11th, 2020. The TICO-19 collaborators came together in the days following and first met as a group (over Zoom) on March 20th. It cannot be overstated the rapidity with which this collaboration came together and how seamlessly the participants, many erstwhile competitors, have worked in harmony and without animosity. It is truly a testament to the needs of the greater good outweighing personal differences or potentially conflicting objectives.
54
+
55
+ ${}^{5}$ The plan does not call out Machine Translation per se, but does call out the need for content to be produced and disseminated in "accessible languages", and the need for communication in "local languages".
56
+
57
+ ${}^{6}$ Although these engines were not integrated into the relief pipeline developed by Munro and colleagues, Lewis et al. (2011) document how MT could be integrated into a crowd-centric relief pipeline like that used by Mission 4636, whereby MT, even if low-precision or noisy, could provide first-pass translations which could then be triaged before being handed off to translators for more accurate translations and geolocation.
58
+
59
+ ${}^{7}$ https://www.standbytaskforce.org/
60
+
61
+ ${}^{8}$ https://cordis.europa.eu/project/id/734211
62
+
63
+ What is not tracked is the degree to which publicly available MT tools and resources are used in crisis and relief settings, e.g., translation apps and tools from Amazon, Google and Microsoft, or the translation feature built into Facebook (e.g., automatically translating posts). The authors suspect use may be broad, but there are no published accounts documenting just how broadly and how much these tools are used in crises. Tantalizing evidence of the use of publicly available tools was noted by Lewis et al. (2011) who documented traffic in the Microsoft Translator apps in the weeks following the Haitian earthquake: they noted that at least 5 percent of the Haitian Kreyòl traffic was relief-related. It is likely that Google's and Microsoft's apps are used even when cell phone infrastructure is unavailable or destroyed, since the tools permit users to download models to their devices so they can perform offline translations. ${}^{11}$
64
+
65
+ In crises, it is clear that organizations need the capacity to communicate critical information and key messages into the languages people understand, at speed and at scale. Crisis-affected communities could access content in local languages through various channels such as SMS, online chatbots, or more traditional printed materials. Their questions and feedback can be used to refine content to better meet their needs. Likewise, relief agencies need access to SMS and other communiques in local languages in order to more effectively and equitably distribute aid.
66
+
67
+ MT can help the various actors to translate and disseminate essential communications in a timely manner without the need to wait on human translators. This is particularly important in low resourced languages where professional translators are not readily available. Domain-specific MT can also assist translators with the right terminology to convey the correct response and standardize concepts. Furthermore, people who are unable to understand major languages could get access to vital information (such as news sources, websites, etc.) firsthand via an MT-driven tool set. We emphatically note that communication is not just a translation problem. Especially in the case of under-represented indigenous communities, messaging needs to also remain respectful of cultural norms (e.g. communicate through appropriate channels, without undermining cultural authorities and practices) and to not minimize the agency of such communities through "deficit framing" (Wañambi et al., 2020). Nevertheless, translation is a crucial component of the information flow.
68
+
69
+ However, to be useful for translating specialized content such as medical texts, we require that automatic translations be of the highest possible accuracy. To advance the research in Machine Translation, we require both high-quality benchmark sets and access to basic training resources, both monolingual and parallel. Likewise, translation memories in a broader set of languages can help localizers around the world translate into these languages. In the remainder of this paper we describe the resources created by the TICO-19 initiative, and some evaluations against them.
70
+
71
+ § 3 THE TICO-19 TRANSLATION BENCHMARK
72
+
73
+ We created the TICO-19 benchmark with three criteria in mind: diversity, relevance and quality. First, we sampled from a variety of public sources containing COVID-19 related content, representing different domains. Second, to make our content relevant for relief organizations, we chose the languages to translate into based on the requests from relief organizations on-the-ground. Third, we established a stringent quality assurance process, to ensure that the content is translated according to the highest industry standard.
74
+
75
+ § 3.1 COVID-19 SOURCE DATA
76
+
77
+ The translation benchmark was created by combining English open-source data from various sources, listed in Table 1. We took special care to diversify the domains and sources of the data. We provide a concise summary here and detailed statistics for every source in Appendix B:
78
+
79
+ ${}^{9}$ The project has already released COVID-19-specific MT models for 4 languages (Way et al., 2020).
80
+
81
+ ${}^{10}$ https://endangeredlanguagesproject.github.io/COVID-19/.
82
+
83
+ ${}^{11}$ Crucially, it should be noted that these apps and tools are available for no more than 110 languages, leaving most of the world's languages in the dark.
84
+
85
+ <table><tr><td rowspan="2">Data Source</td><td rowspan="2">Domain</td><td colspan="4">Statistics</td></tr><tr><td>#docs</td><td>#sents</td><td>#words</td><td>avg. slen</td></tr><tr><td>CMU</td><td>medical, conversational</td><td>-</td><td>141</td><td>1.2k</td><td>8.5</td></tr><tr><td>PubMed</td><td>medical, scientific</td><td>6</td><td>939</td><td>21.2k</td><td>22.5</td></tr><tr><td>Wikinews</td><td>news</td><td>6</td><td>88</td><td>1.8k</td><td>20.4</td></tr><tr><td>Wikivoyage</td><td>travel</td><td>1</td><td>243</td><td>4.5k</td><td>18.5</td></tr><tr><td>Wikipedia</td><td>general</td><td>15</td><td>1,538</td><td>38.1k</td><td>24.7</td></tr><tr><td>Wikisource</td><td>announcements</td><td>2</td><td>122</td><td>2.4k</td><td>19.6</td></tr><tr><td colspan="2">Total</td><td>30</td><td>3,071</td><td>69.7k</td><td>22.7</td></tr><tr><td colspan="2">Dev Set</td><td>12</td><td>971</td><td>21.0k</td><td>21.6</td></tr><tr><td colspan="2">Test Set</td><td>18</td><td>2,100</td><td>49.3k</td><td>23.5</td></tr></table>
121
+ Table 1: Source-side (English) statistics of the TICO-19 benchmark.
122
+
123
+ * PubMed: we selected 6 COVID-19-related scientific articles from PubMed ${}^{12}$ for a total of 939 sentences.
124
+
125
+ * CMU English-Haitian Creole dataset (CMU): the data were originally collected at Carnegie Mellon University ${}^{13}$ and translated into Haitian Creole by Eriksen Translations, Inc. The dataset comprises medical-domain phrases and sentences, which along with other data were used to quickly build and deploy statistical MT systems in disaster-ridden Haiti (Lewis, 2010); the dataset was later part of the 2011 Workshop on (Statistical) Machine Translation shared tasks (Callison-Burch et al., 2011). For our purposes, we subsampled the English conversational phrases to only those including COVID-19-related keywords taken from our terminologies (see Section 4), ending up with 140 sentences.
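The keyword-based subsampling described above for the CMU data can be sketched as a simple terminology filter; the term list in the example is illustrative, not the actual TICO-19 terminology.

```python
# Sketch of subsampling sentences to those containing at least one
# term from a terminology list (whole-word, case-insensitive match).
import re

def filter_by_terms(sentences, terms):
    patterns = [re.compile(r"\b" + re.escape(t) + r"\b", re.IGNORECASE)
                for t in terms]
    return [s for s in sentences if any(p.search(s) for p in patterns)]
```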
126
+
127
+ * Wikipedia: we selected 15 COVID-19-related articles from the English Wikipedia ${}^{14}$ on topics ranging from responses to the pandemic, drug development, testing, and coronaviruses in general.
128
+
129
+ * Wikinews, Wikivoyage, Wikisource: 6 COVID-19-related entries from Wikinews, ${}^{15}$ one article from Wikivoyage ${}^{16}$ summarizing travel restrictions, and two entries from Wikisource ${}^{17}$ (an executive order and an internal Wikipedia communiqué). These data respectively cover the domains of news, travel advisories, and government/organization announcements.
132
+
133
+ § 3.2 LANGUAGES
134
+
135
+ We translated the above English data into 38 languages. ${}^{18}$ In some cases, this was achieved through pivot languages, i.e., the content was translated into the pivot language first (e.g., French, Farsi) and then translated into the target language (e.g., Congolese Swahili, Dari). The languages were selected according to various criteria, with the main consideration being the potential impact of our collected translations and the humanitarian priorities of TWB. The translation languages include:
136
+
137
+ * Pivots: 9 major languages which function as a lingua franca for large parts of the globe: Arabic (modern standard), Chinese (simplified), French, Brazilian Portuguese, Latin American Spanish, Hindi, Russian, Swahili, and Indonesian.
138
+
139
+ * Priority: 21 languages which TWB classified as high-priority, due to the large volume of requests they are receiving and the strategic location of their partners (e.g. the Red Cross). They include languages in Asia - Dari, Central Khmer, Kurdish Kurmanji (Latin script), Kurdish Sorani (Arabic script), Nepali, Pashto - and Africa - Amharic, Congolese Swahili,
140
+
141
+ ${}^{12}$ https://www.ncbi.nlm.nih.gov/pubmed/
142
+
143
+ ${}^{13}$ Under the NSF-funded (jointly with the EU) "NE-SPOLE!" project.
144
+
145
+ ${}^{14}$ https://en.wikipedia.org
146
+
147
+ ${}^{15}$ https://en.wikinews.org
148
+
149
+ ${}^{16}$ https://en.wikivoyage.org
150
+
151
+ ${}^{17}$ https://en.wikisource.org
152
+
153
+ ${}^{18}$ All translations are available under a CCO license.
154
+
155
+ <table><tr><td>Data Source</td><td>Example</td></tr><tr><td>CMU</td><td>are you having any shortness of breath?</td></tr><tr><td>PubMed</td><td>The basic reproductive number (R0) was 3.77 (95% CI: 3.51-4.05), and the adjusted R0 was 2.23-4.82.</td></tr><tr><td>Wikinews</td><td>By yesterday, the World Health Organization reported 1,051,635 confirmed cases, including 79,332 cases in the twenty four hours preceding 10 a.m. Central European Time (0800 UTC) on April 4.</td></tr><tr><td>Wikivoyage</td><td>Due to the spread of the disease, you are advised not to travel unless necessary, to avoid being infected, quarantined, or stranded by changing restrictions and cancelled flights.</td></tr><tr><td>Wikipedia</td><td>Drug development is the process of bringing a new infectious disease vaccine or therapeutic drug to the market once a lead compound has been identified through the process of drug discovery.</td></tr><tr><td>Wikisource</td><td>The federal government has identified 16 critical infrastructure sectors whose assets, systems, and networks, whether physical or virtual, are considered so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, economic security, public health or safety, or any combination thereof.</td></tr></table>
178
+
179
+ Table 2: Samples of the English source sentences for the TICO-19 benchmark.
180
+
181
+ Dinka, Nigerian Fulfulde, Hausa, Kanuri, Kinyarwanda, Lingala, Luganda, Nuer, Oromo, Somali, Eritrean Tigrinya, Ethiopian Tigrinya, and Zulu.
182
+
183
+ * Important: 8 additional languages spoken by millions in South and South-East Asia: Bengali, Burmese (Myanmar), Farsi, Malay, Marathi, Tagalog, Tamil, and Urdu.
184
+
185
+ The latter two sets are primarily languages of Africa, and South and South-East Asia, whose communities, according to on-the-ground organizations, may be most susceptible to the spread of the virus and its potentially disastrous ramifications, mostly due to lack of access to information and communication in the community languages. They are also overwhelmingly under-resourced languages; in fact, some of the languages have remained untouched by the AI and MT communities, and have no known tools or resources that have been developed for them.
186
+
187
+ All of the test and development documents are sentence aligned across all of the languages, which allows for any pairing of languages for testing or development purposes. This was done by design, in order to facilitate tool and resource development in and across any of the targeted languages. For example, an MT developer could develop translation systems for French to/from Congolese Swahili, Arabic to/from Kurdish, Urdu to/from Pashto, Hindi to/from Marathi, Amharic to/from Oromo, or Chinese to/from Malay, among the 1296 possible pairings. Note that as the project continues and as we create data for more languages we will keep updating this paper as well as the project's website.
188
+
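Since every document is sentence-aligned across all languages, constructing a parallel corpus for any pair reduces to zipping the two languages' sentence lists by their shared index. A minimal sketch (the dict-of-lists layout and sample sentences are illustrative assumptions, not the released file format):

```python
# Sketch: derive a parallel corpus for an arbitrary language pair from
# sentence-aligned data. The in-memory layout here is an assumed,
# simplified stand-in for the released files.

def make_parallel(sentences_by_lang, src, tgt):
    """Pair sentences of two languages via their shared alignment index."""
    return list(zip(sentences_by_lang[src], sentences_by_lang[tgt]))

corpus = {
    "fr":  ["Lavez-vous les mains.", "Restez chez vous."],
    "swc": ["Nawa mikono yako.", "Baki nyumbani."],
}
pairs = make_parallel(corpus, "fr", "swc")
```

Because alignment is by index, the same call works for any ordered pair of covered languages.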
189
+ § 3.3 QUALITY ASSURANCE
190
+
191
+ It has been observed that translation from and into low-resource languages requires additional automatic and manual quality checks (Guzmán et al., 2019). To obtain the highest possible quality, here we implemented a two-step human quality control process. First, each document is sent for translation to language service providers (LSP), where the translation is performed. After translation, the dataset goes through a process of editing, in which each sentence is thoroughly vetted by qualified professionals familiar with the medical domain, whenever available. ${}^{19}$ In case of discrepancies, a process of arbitration is followed to solve disagreements between translators and editors.
192
+
193
+ After editing, a selected fraction of the data (18%, 558 sentences) undergoes a second independent quality assurance process. To ensure quality in the hardest-to-translate data, the scientific medical content from PubMed was upsampled so that it comprises 329 of the 558 doubly-checked sentences (almost 59%). The exact documents that comprise our second quality assurance set are listed in Appendix C.
194
+
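The upsampling step above can be sketched as a stratified draw: fix the PubMed quota first, then fill the remainder from the other domains. Field names, the seed, and the helper itself are illustrative assumptions, not the project's actual sampling code:

```python
import random

# Sketch: draw the second QA sample with PubMed upsampled, assuming each
# sentence record is tagged with its source domain. The 558/329 split
# mirrors the counts reported above.

def sample_for_qa(sentences, total=558, pubmed_quota=329, seed=0):
    rng = random.Random(seed)
    pubmed = [s for s in sentences if s["domain"] == "pubmed"]
    other = [s for s in sentences if s["domain"] != "pubmed"]
    return (rng.sample(pubmed, pubmed_quota)
            + rng.sample(other, total - pubmed_quota))
```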
195
+ ${}^{19}$ For a handful of languages, such as Dinka or Nuer, where simply creating translations was a challenge, this process was, by necessity, skipped.
196
+
197
+ The quality of the translations was checked, and reworks were made until every translation set was rated above 95% across all languages, before any additional subsequent edits. Some low-resource languages like Somali, Dari, Khmer, Amharic, Tamil, Farsi, and Marathi required several rounds of translation to reach acceptable performance. The hardest part, unsurprisingly, proved to be the PubMed portion of the benchmark. Our QA process revealed that in most cases the problems arose when the translators did not have any medical expertise, which led them to misunderstand the English source sentence and often opt for sub-par literal or word-for-word translations. We provide additional details with the estimated quality per language in Appendix D. We note that all mistakes identified in this subset have been corrected in the final released dataset, and that all sentences that underwent the QA process are part of the test portion of our benchmark.
198
+
199
+ We additionally release the sampled dataset along with detailed error annotations and corrections. Whenever an error was noted in the validation sample, it was classified as one of the following categories: Addition/Omission, Grammar, Punctuation, Spelling, Capitalization, Mis-translation, Unnatural translation, and Untranslated text. The severity of the error was also classified as minor, major, or critical. Although small in size (at most 558 sentences in each translation direction), we hope that releasing these annotations will also invite automatic quality estimation and post-editing research for diverse under-resourced languages.
200
+
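A minimal sketch of how such error annotations can be represented and tallied; the exact field names and category spellings in the released files may differ from the assumptions below:

```python
from collections import Counter

# Sketch of the released error annotations: each record carries one of
# eight error categories and a severity level. Field names are assumed.
CATEGORIES = {"addition/omission", "grammar", "punctuation", "spelling",
              "capitalization", "mistranslation", "unnatural", "untranslated"}
SEVERITIES = {"minor", "major", "critical"}

def severity_profile(annotations):
    """Tally severity labels, skipping malformed records."""
    return Counter(a["severity"] for a in annotations
                   if a["category"] in CATEGORIES and a["severity"] in SEVERITIES)
```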
201
+ § 4 TRANSLATOR RESOURCES
202
+
203
+ Translation Memories Because of the breadth of languages covered by TICO-19, and the fact that so many are under-resourced, the translations themselves can be of significant value to localizers. As part of the effort, the TICO-19 collaborators have converted ${}^{20}$ the translated data to translation memories, cast as TMX files, for all English-X pairings, as well as some other pairings of languages focusing on potential local needs (e.g. French-Congolese Swahili, Farsi-Dari, and Kurdish Kurmanji-Sorani). These TMX files, in addition to the test and development data, have been made available to the public through the project's website.
204
+
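The actual conversion used Okapi's Tikal, but the shape of a TMX translation memory is easy to illustrate. The following hand-rolled serializer is only a sketch of the TMX 1.4 structure, not the released pipeline:

```python
import xml.etree.ElementTree as ET

# Sketch: serialize aligned sentence pairs into a minimal TMX document
# (header attributes and tool names below are illustrative).

def to_tmx(pairs, src_lang, tgt_lang):
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", {
        "srclang": src_lang, "datatype": "plaintext", "segtype": "sentence",
        "adminlang": "en", "o-tmf": "none",
        "creationtool": "sketch", "creationtoolversion": "0.1",
    })
    body = ET.SubElement(tmx, "body")
    for src, tgt in pairs:
        tu = ET.SubElement(body, "tu")          # one translation unit per pair
        for lang, text in ((src_lang, src), (tgt_lang, tgt)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = text
    return ET.tostring(tmx, encoding="unicode")
```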
205
+ Terminologies Two sets of translation terminologies were provided by Facebook and Google (the complete set of the English source terms and of the translated languages are listed in Appendix E):
206
+
207
+ * the Facebook one includes 364 COVID-19 related terms translated in 92 languages/locales.
208
+
209
+ * the Google one includes 300 COVID-19 related terms translated from English to 100 languages and a total of 1,300 terms from 27 languages translated into English (for a total of approximately 30k terms).
210
+
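A quick back-of-the-envelope check of the quoted Google terminology size, interpreting the counts as stated above:

```python
# 300 English terms, each translated into 100 languages, plus 1,300
# X-to-English terms: roughly the "approximately 30k" quoted above.
google_en_to_x = 300 * 100
google_x_to_en = 1300
total = google_en_to_x + google_x_to_en  # 31,300
```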
211
+ Additional Translations Translators without Borders (TWB) worked with its network of translators to provide translations in hard-to-source languages (e.g. Congolese Swahili and Kanuri). It also provided COVID-19 specific sources from its diverse humanitarian partners to augment the dataset. This augmented dataset will be available under license on TWB’s Gamayun portal. ${}^{21}$
212
+
213
+ § 5 MT DEVELOPER RESOURCES
214
+
215
+ As part of our project the CMU team also collected some COVID-19-related monolingual data in multiple languages. They are available online, ${}^{22}$ but we note that some of these data might not be available under the same license as our datasets (and hence might not be appropriate for commercial system development). These are detailed in the next sections.
216
+
217
+ § 5.1 MONOLINGUAL
218
+
219
+ Wikipedia Data COVID-19-related data from Wikipedia were scraped in 37 languages. COVID-19 terms were used as queries (language-specific, in most cases), retrieving the textual data from the returned articles (i.e. stripping out any Wikipedia markup, metadata, images, etc.). The data range from less than 1K sentences (around 10K tokens) for languages like Hindi, Bengali, or Afrikaans, to as much as 8K sentences for Spanish (160K tokens) or Hungarian (120K tokens).
220
+
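The markup-stripping step can be approximated with a few regular expressions. Real wikitext has many more constructs (tables, refs, file links), so the following is only a sketch, not the scraper actually used:

```python
import re

# Sketch: reduce a fragment of wikitext to plain text. The patterns
# below cover only a few common constructs.

def strip_wikitext(text):
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)                      # templates
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)   # wiki links
    text = re.sub(r"'{2,}", "", text)                               # bold/italics
    text = re.sub(r"==+[^=]*==+", "", text)                         # headings
    return re.sub(r"\s+", " ", text).strip()
```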
221
+ ${}^{20}$ Conversion was carried out with the Tikal command included in the Okapi framework: https://okapiframework.org/.
222
+
223
+ ${}^{21}$ https://gamayun.translatorswb.org/
224
+
225
+ ${}^{22}$ https://bit.ly/2ZLOkpo
226
+
227
+ News Data COVID-19-related news articles (as identified by keyword search) were scraped from three news organizations that publish multilingually through their world services. Specifically, the collected data include articles from the BBC World Service ${}^{23}$ (22 languages), the Voice of America ${}^{24}$ (31 languages), and Deutsche Welle ${}^{25}$ (29 languages).
228
+
229
+ § 5.2 PARALLEL
230
+
231
+ We have also scraped a very small amount of available parallel data, mostly from public service announcements from NGOs and national/state government sources. Specifically, we scraped Public Service Announcements by the Canadian government ${}^{26}$ in 21 languages (English, French, and First Nations languages), a fact sheet provided by King County (Washington, USA) ${}^{27}$ in 12 languages, a COVID-19 advice sheet from the Doctors of the World ${}^{28}$ in 47 languages, data from the COVID-19 Myth Busters in World Languages project ${}^{29}$ in 28 languages, and a medical prevention and treatment handbook from Zhejiang University School of Medicine ${}^{30}$ in 10 languages. Unfortunately, the total amount of data from these sources does not exceed a few hundred sentences in each direction, so they are not enough for system development; they could, though, be potentially useful as an additional smaller evaluation set or for terminology extraction.
232
+
233
+ § 6 BASELINE RESULTS AND DISCUSSION
234
+
235
+ We present baseline results in some language directions, using the following systems:
236
+
237
+ 1. For English to es, fr, pt, ru, sw, id, ln, lg, mr and most opposite directions: we use the OPUS-MT systems (Tiedemann and Thottingal, 2020) which are trained on the OPUS parallel data (Tiedemann, 2012a) using the Marian toolkit (Junczys-Dowmunt et al., 2018). We also use the pretrained systems between French and es, id, ln, lg, ru, rw.
238
+
239
+ 2. For English to Russian we compare against the pre-trained Fairseq models that won the WMT Shared Task in this direction last year (Ng et al., 2019), as well as the English to French system of Ott et al. (2018). For translation between English and Chinese we also use a system trained on WMT'18 data (Bojar et al., 2018).
240
+
241
+ 3. We train systems between English and ar, fa, mr, om, zu on publicly available corpora from OPUS (referred to as "our OPUS models").
242
+
243
+ 4. We train multilingual systems between English and hi, ms, and ur on a TED talks dataset (Qi et al., 2018) ("our Multilingual TED models").
244
+
245
+ We note that none of the systems have been specifically trained or fine-tuned on any data listed above. We leave domain adaptation studies for future work.
246
+
247
+ Results Table 3 in Appendix A presents results in translation from English to all languages whose MT systems we were able to train or use, while Table 4 includes results in the opposite directions. Similarly, Tables 5 and 6 in Appendix A show the quality of MT systems, as measured on our test set, from and to French for a few languages. All tables also include a breakdown of the quality for each test domain. ${}^{31}$
248
+
249
+ Discussion First and foremost, the main takeaway from these baseline results lie not in the above-mentioned Tables, but in the languages that are not present in them. We were unable to find either pre-trained MT systems or publicly available parallel data in order to train our own baselines for Dari, Pashto, Tigrinya, Nigerian Fulfulde, Kurdish Sorani, Myanmar, Oromo, Dinka, Nuer, and isiZulu. ${}^{32}$ This highlights the need for serious data collection efforts to expand the availability of data for large swathes of under-represented communities and languages.
250
+
251
+ Beyond this obvious limitation, the existing systems' results highlight the divide between high-resource language pairs and low-resource ones. For all European languages (Spanish, French, Portuguese, Russian) as well as for Chinese and Indonesian, MT produces very competitive results with BLEU scores between 25 and 49. ${}^{33}$ In contrast, the output translations for languages like Lingala, Luganda, Marathi, or Urdu are quite disappointing, with extremely low BLEU scores under 10. The existence of pre-trained systems or of parallel data, hence, is not enough; this level of quality is basically unusable for any real-world deployment either for translators or for end-users.
252
+
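For reference, the BLEU scores discussed here can be reproduced in spirit with a simplified, stdlib-only corpus BLEU (uniform 1-4-gram weights plus brevity penalty). Actual evaluation pipelines would use sacreBLEU with standardized tokenization; this version is a teaching sketch with naive whitespace tokenization:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hyps, refs, max_n=4):
    """Simplified corpus-level BLEU (single reference per hypothesis)."""
    matches, totals = [0] * max_n, [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hyps, refs):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            hc, rc = ngrams(h, n), ngrams(r, n)
            matches[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if 0 in matches:               # some n-gram order had no matches at all
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)
```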
253
+ ${}^{23}$ https://www.bbc.com/
254
+
255
+ ${}^{24}$ https://www.voanews.com/
256
+
257
+ ${}^{25}$ https://www.dw.com/
258
+
259
+ ${}^{26}$ https://bit.ly/3iCeqnl
260
+
261
+ ${}^{27}$ https://welcoming.seattle.gov/covid-19/
262
+
263
+ ${}^{28}$ https://bit.ly/3e7Mq7M
264
+
265
+ ${}^{29}$ https://covid-no-mb.org/
266
+
267
+ ${}^{30}$ https://bit.ly/3gvxaTL
268
+
269
+ ${}^{31}$ Note, however, that each sub-domain constitutes a smaller test set than the complete set, and hence any result should properly take into account statistical significance measures.
270
+
271
+ ${}^{32}$ Note that although a small amount of parallel data exists for English-isiZulu, Abbott and Martinus (2019) report very low results on general benchmarks as the parallel data requires cleaning.
272
+
273
+ A comparison of the results across different domains is also revealing. BLEU scores are generally higher on Wikipedia and news articles; this is unsurprising, as most MT systems rely on such domains for training, since they naturally produce parallel or quasi-parallel data. Our PubMed data pose a more challenging setting, but perhaps not as challenging as we initially expected, although the results vary across languages. In translating from English to French, for instance, the difference between Wikipedia and PubMed is more than 14 BLEU points, ${}^{34}$ while the differences are smaller for e.g. Indonesian-English (6 BLEU points) or Russian-English (4 BLEU points).
274
+
275
+ Future Work Several concrete steps have the potential to improve MT for all languages in our benchmark. All results we report are with MT systems trained on general-domain data or on particularly out-of-domain data (such as TED talks); domain adaptation techniques using small in-domain parallel resources or monolingual source- or target-side data should be able to increase performance. Incorporating the terminologies as part of the training and inference schemes of the models could also ensure faithful and consistent translations of the COVID-19-specific scientific terms that might not naturally appear in other training data or might appear in different contexts.
276
+
277
+ Another direction for improvement involves multilingual NMT models trained on massive web-based corpora (Aharoni et al., 2019), which have improved translation accuracy particularly for languages in the lower end of data availability. Also viable are methods relying on multilingual model transfer, which can target languages with extremely small amounts of data, as in Chen et al. (2018).
278
+
279
+ Lastly, we need to improve the representation of low-resource languages in public-domain corpora. While there are open data collections such as OPUS (Tiedemann, 2012b) and mined corpora like Paracrawl (Esplà et al., 2019) and WikiMatrix (Schwenk et al., 2019), they do not cover enough low-resource languages. We hope that the availability of multilingual representations such as Multilingual BERT (Devlin et al., 2018) and XLM-R (Conneau et al., 2019) will empower the creation of parallel corpora for low-resource languages through low-resource corpus filtering (Koehn et al., 2019) or other approaches.
280
+
281
+ § 7 CONCLUSION
282
+
283
+ Enabling efficient and accurate communication through translation still has a long way to go for the majority of the world's languages, particularly the most vulnerable ones. With this effort we only address a fraction of the needs for a fraction of the world's languages. Nevertheless, we hope that the MT resources that we release will have an immediate impact for the languages we cover. More importantly, the benchmark we release will allow the MT research community, both academic and industrial, to be more prepared for the next crisis where translation technologies will be needed.
284
+
285
+ § ACKNOWLEDGEMENTS
286
+
287
+ We would like to thank the people who made this effort possible: Tanya Badeka, Jen Wang, William Wong, Rebekkah Hogan, Cynthia Gao, Rachael Brunckhorst, Ian Hill, Bob Jung, Jason Smith, Susan Kim Chan, Romina Stella, Keith Stevens. We also extend our gratitude to the many translators and the quality reviewers whose hard work is represented in our benchmarks and in our translation memories. Some of the languages were very difficult to source, and the burden in these cases often fell to a very small number of translators. We thank you for the many hours you spent translating and, in many cases, re-translating content.
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/0X9O6VcYe_/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,212 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Collecting Verified COVID-19 Question Answer Pairs
2
+
3
+ Adam Poliak ${}^{1}$, Max Fleming, Cash Costello, Kenton Murray, Shivani Pandya, Darius Irani, Milind Agarwal, Udit Sharma, Shuo Sun, Nicola Ivanov, Mahsa Yarmohammadi, Lingxi Shang, Kaushik Srinivasan, Seolhwa Lee, Xu Han, Smisha Agarwal, João Sedoc ${}^{2}$
4
+
5
+ ${}^{1}$ Barnard College, Johns Hopkins University, ${}^{2}$ NYU Stern School of Business https://covid-19-infobot.org/
6
+
7
+ ## Abstract
8
+
9
+ We release a dataset of over 2,100 COVID-19 related frequently asked question-answer pairs scraped from over 40 trusted websites. We include an additional 24,000 questions pulled from online sources that have been aligned by experts with existing answered questions from our dataset. This paper describes our efforts in collecting the dataset and summarizes the resulting data. Our dataset is automatically updated daily and available at https://github.com/JHU-COVID-QA/scraping-qas. So far, this data has been used to develop a chatbot providing users information about COVID-19. We encourage others to build analytics and tools upon this dataset as well.
10
+
11
+ ## 1 Introduction
12
+
13
+ With the quick spread of COVID-19, misinformation has rapidly spread as well. ${}^{1}$ Misinformation around the use of certain drugs for the prevention of COVID-19 has had fatal outcomes, and stigmatization guided by misinformation about certain communities as vectors of the virus undermines the long-term welfare of our society. We are developing a natural language processing (NLP)-backed informational chatbot targeted at comprehensive COVID-19 information and misinformation. Users can interact with our chatbot on different platforms to access information about COVID-19, available care, and other topics of interest. ${}^{2}$
14
+
15
+ To aid in this effort, we aggregate factual information in the form of verified questions and answers to help answer frequently asked questions about the pandemic. We employ three main aggregation efforts in tandem: 1) generating high quality and accurate information from domain experts, i.e. Public Health researchers at Johns Hopkins University; 2) automatically scraping frequently asked questions and answers from online trusted sources, e.g. newspapers and government agencies; and 3) automatically ranking and manually aligning additional questions from social media with the scraped questions and answers in our dataset. This paper primarily describes our efforts to extract high quality content from trustworthy websites and domain experts. Our effort has resulted in a publicly available dataset that currently contains over 2,100 questions and answers from more than 40 webpages. The dataset is available at https://covid-19-infobot.org/data/. Since we are actively scraping more websites and re-scrape all sites at least once a day, these numbers are updated daily. ${}^{3}$
16
+
17
+ ## 2 Creating our FAQ Dataset
18
+
19
+ We create our publicly available dataset of over 2,100 question-answer pairs by aggregating FAQs from trusted news sources. ${}^{4}$ We choose websites to scrape based on three broad criteria: 1) the informativeness and trustworthiness of the website; 2) the ease of scraping frequently asked question-answer pairs from the website; and 3) the number of questions and answers on the website.
20
+
21
+ We use a straightforward scraping process that enables undergraduate students to contribute to our efforts. We developed a python library for students to easily add scrapers to our project. As demonstrated in the example in Figure 1, our library requires each question-answer pair (and its metadata) to be stored as a simple dictionary. The library automatically adds this information to our set of question-answer pairs. Additionally, the library accordingly handles updating answers to questions in our dataset if a previously scraped website updates its information.
22
+
23
+ ---
24
+
25
+ ${}^{3}$ The dataset’s statistics described in this paper are based on a snapshot of the data as of June 25th, 2020, corresponding with https://github.com/JHU-COVID-QA/scraping-qas/tree/ a446c00c318e02cad5188cec359b9d649d8c4933
26
+
27
+ ${}^{4}$ We additionally have over 300 question-answer pairs manually created by Public Health experts. We plan on including these in our publicly available dataset at a later date.
28
+
29
+ ${}^{1}$ https://www.newsguardtech.com/misinformation-tracking-center/
30
+
31
+ ${}^{2}$ https://covid-19-infobot.org/
32
+
33
+ ---
34
+
35
+ ---
+
+ converter.addExample({
+     'sourceUrl': 'example.com',
+     'sourceName': "example",
+     "needUpdate": True,
+     "typeOfInfo": "QA",
+     "isAnnotated": False,
+     "responseAuthority": "",
+     "question": '<a href="example.com/dir1">What is COVID-19?</a>',
+     "answer": '<p><a href="example.com/dir2">Coronaviruses</a> are a large family of viruses.</p>',
+     "hasAnswer": True,
+     "targetEducationLevel": "NA",
+     "topic": ['topic1', 'topic2'],
+     "extraData": {'example extra field': 'example value'},
+     "targetLocation": "US",
+     "language": 'en',
+ })
+
+ ---
70
+
71
+ Figure 1: Screenshot of our documentation describing the data and metadata stored for each scraped question-answer pair.
72
+
73
+ This has enabled students to efficiently join the project and contribute immediately. Further documentation is available at https://github.com/ JHU-COVID-QA/scraping-qas and we encourage others to join our efforts.
74
+
75
+ ### 2.1 Metadata
76
+
77
+ For each scraped question-answer pair, we extract relevant metadata for our chatbot and other NLP analytics. The metadata includes information about the source of each question-answer pair (we include both the source name and the URL) and the date when the question-answer was last scraped from or updated on the website. Additionally, if the information on the website is targeted for a specific geographic area, we include that in our metadata as well.
78
+
79
+ ### 2.2 Leveraging existing scrapers
80
+
81
+ We leverage existing scrapers for collecting questions-answer pairs for COVID-19. 874 of our examples come from scrapers released by deepset. ${}^{5}$ Following deepset's lead, we open-source our scrapers as well.
82
+
83
+ ### 2.3 Continuous scraping
84
+
85
+ As our understanding of COVID-19 rapidly evolves, trustworthy sources update the information they release. Therefore, each day, we automatically re-run the web scrapers to find new information. This enables us to add new question-answers or update answers to existing questions in our dataset.
86
+
87
+ If a previously scraped question-answer is removed from a website, we remove that example from our dataset. ${}^{6}$ Questions and answers that we removed from our dataset are still available in our history since we archive each day's dataset. In turn, the quality of our dataset is constantly evolving and improving.
88
+
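The daily refresh described above amounts to a keyed merge between the stored dataset and the fresh scrape: new pairs are added, changed answers are updated, and pairs missing from the scrape are treated as retracted. Keying by (source URL, question) is our illustrative assumption about the pipeline, not its actual implementation:

```python
# Sketch: merge a fresh scrape into the stored dataset, logging changes.
# Both arguments map (sourceUrl, question) -> answer.

def refresh(stored, scraped):
    log = {
        "added":   [k for k in scraped if k not in stored],
        "updated": [k for k in scraped if k in stored and scraped[k] != stored[k]],
        "removed": [k for k in stored if k not in scraped],  # assumed retracted
    }
    return dict(scraped), log
```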
89
+ ## 3 Data
90
+
91
+ The described effort resulted in a dataset that is evolving daily. The June 15th version contains over 2,100 questions and answers scraped from 40 websites. We list the number of question-answer pairs extracted from each source in Table 1. Our dataset contains some examples in languages besides English, owing to deepset scraping websites in multiple languages. Figure 2 plots the number of question-answer pairs in each of the five languages: English, German, Polish, Italian, and Swedish. Roughly 70% of our examples are in English. As we release more data, we will include further analysis of the growing dataset.
92
+
93
+ Websites might update or change how they store information. This is why the current version of our dataset contains just 1 example from the Delaware State Government webpage. The May 20th version of our dataset contains 22 examples from this website.
94
+
95
+ ---
96
+
97
+ ${}^{6}$ We assume that a website will remove information about COVID-19 that is no longer accurate.
98
+
99
+ ${}^{5}$ https://github.com/deepset-ai/COVID-QA/
100
+
101
+ ---
102
+
103
+ ![01963e00-7331-732a-8a99-7147c8fd9180_2_196_175_631_411_0.jpg](images/01963e00-7331-732a-8a99-7147c8fd9180_2_196_175_631_411_0.jpg)
104
+
105
+ Figure 2: Number of question/answers for each language in our dataset.
106
+
107
+ ## 4 Manually Aligning Additional Questions and Answers
108
+
109
+ Since the internet contains many more questions that are not answered, we additionally collected questions and align them with the question-answer pairs in our dataset. We leverage information retrieval techniques to match these unanswered questions with questions in our dataset and then rely on domain experts to verify each aligned question-question-answer (QQA) pair. In this section, we provide details for each of these steps.
110
+
111
+ ### 4.1 Online Question Extraction
112
+
113
+ We downloaded 28 million tweets from the COVID-19 Twitter Dataset (Chen et al., 2020), Qorona, ${}^{7}$ and CovidFaq ${}^{8}$ , extracted the questions from those resources, ${}^{9}$ sorted them by frequency, and discarded the questions that occurred fewer than four times. Then, we grouped semantically similar questions into 9,200 clusters. Next, we extracted the centers of the clusters and, using a state-of-the-art sentence re-writer (Hu et al., 2019), we generated three high-quality paraphrases of each question. This resulted in a collection of over 27,000 unanswered questions about COVID-19.
114
+
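The extraction-and-filtering heuristic can be sketched as follows; the question-word list is illustrative, and the clustering and paraphrasing steps are omitted:

```python
from collections import Counter

# Sketch: keep a sentence if it ends with "?" or starts with a question
# word, then keep only questions seen at least four times.
QUESTION_WORDS = ("who", "what", "when", "where", "why", "how",
                  "can", "does", "is", "are", "should")

def is_question(sentence):
    s = sentence.strip().lower()
    return s.endswith("?") or s.startswith(QUESTION_WORDS)

def frequent_questions(sentences, min_count=4):
    counts = Counter(s.strip().lower() for s in sentences if is_question(s))
    return {q for q, c in counts.items() if c >= min_count}
```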
115
+ ### 4.2 Aligning Extracted Questions with Existing Questions and Answers
116
+
117
+ We worked with public health experts to align these unanswered questions with our verified question-answer pairs (section 3). For each of these 27,000 questions, we used a BM25 model (Robertson and Walker, 1994; Robertson et al., 1996) to determine the most similar answered questions in our dataset. ${}^{10}$ Following the EASL annotation protocol (Sakaguchi and Van Durme, 2018), for each unanswered Twitter question, we presented public health experts with the five most similar QAs from our dataset. Based on a formal protocol developed by a senior Public Health researcher on our team (Figure 4), we asked the experts to determine, on a scale from 0 to 100, how relevant or similar the QA from our dataset is to the unanswered question.
118
+
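A minimal BM25 ranker, standing in for the Elasticsearch default scorer used to shortlist the five most similar answered questions. The k1 and b values below are common defaults, and the whitespace tokenization is a deliberate simplification:

```python
import math
from collections import Counter

class BM25:
    """Tiny BM25 Okapi scorer over whitespace-tokenized documents."""

    def __init__(self, docs, k1=1.2, b=0.75):
        self.docs = [d.lower().split() for d in docs]
        self.k1, self.b = k1, b
        self.N = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.N
        self.df = Counter(t for d in self.docs for t in set(d))

    def score(self, query, i):
        doc, tf = self.docs[i], Counter(self.docs[i])
        s = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log(1 + (self.N - self.df[t] + 0.5) / (self.df[t] + 0.5))
            denom = tf[t] + self.k1 * (1 - self.b + self.b * len(doc) / self.avgdl)
            s += idf * tf[t] * (self.k1 + 1) / denom
        return s

    def top_k(self, query, k=5):
        ranked = sorted(range(self.N), key=lambda i: self.score(query, i),
                        reverse=True)
        return ranked[:k]
```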
119
+ ![01963e00-7331-732a-8a99-7147c8fd9180_2_862_205_568_428_0.jpg](images/01963e00-7331-732a-8a99-7147c8fd9180_2_862_205_568_428_0.jpg)
120
+
121
+ Figure 3: Histogram of the number of QQAs (y-axis) annotated with a score at most the corresponding x-axis value. Over 17.5K examples are labeled between 0 and 1.
122
+
123
+ For this annotation effort, we leveraged Turkle, an open-source, locally hosted clone of Amazon Mechanical Turk developed by the JHU Human Language Technology Center of Excellence. ${}^{11}$ Figure 5 and Figure 6 illustrate our annotation interface.
124
+
125
+ As part of this protocol, expert annotators could indicate whether a question was not relevant to COVID-19 or whether an existing answer was no longer correct. We removed such labeled examples from our set. This effort results in 24,240 annotated QQAs. Figure 3 plots the distribution of labels annotated for QQAs. Over 18,000 examples were judged to be less than 1% relevant, indicating that the majority of the questions extracted from twitter are irrelevant to the answered questions in our dataset. These additional examples can be used to further train a chatbot to answer questions about COVID-19.
126
+
127
+ ---
128
+
129
+ ${}^{7}$ https://github.com/allenai/Qorona
130
+
131
+ ${}^{8}$ https://github.com/dialoguemd/covidfaq
132
+
133
+ ${}^{9}$ Qorona and CovidFaq specifically contain questions. We extract questions from the Twitter dataset by determining whether a sentence from a tweet either ends with a question mark, or starts with a word from a provided list (e.g., "who", "when", "where", etc.).
134
+
135
+ ${}^{10}$ We trained the BM25 model by using the answers that previously were manually aligned by experts with the candidate questions. We then calculated scores between the terms in the input question and terms in the candidate answers. We used the implementation in Elasticsearch and relied on the default parameters.
136
+
137
+ "https://github.com/hltcoe/turkle
138
+
139
+ ---
140
+
141
+ ## 5 Conclusion
142
+
143
+ We have presented our growing dataset of over 2,100 question-answers that has been created by scraping over 40 websites. We also discussed other data we collected and annotated that may be beneficial to others in the community as well. Our evolving dataset is complementary to other recent COVID-19 QA datasets, e.g. Tang et al. (2020)'s 124 question-article pairs, Wei et al. (2020)'s 1,690 questions and 403 answers, and Möller et al. (2020)'s dataset.
144
+
145
+ ## Acknowledgments
146
+
147
+ We thank the reviewers for their insightful comments. This work was supported in part by DARPA KAIROS (FA8750-19-2-0034). The views and conclusions contained in this work are those of the authors and should not be interpreted as representing official policies or endorsements by DARPA or the U.S. Government.
148
+
149
+ ## References
150
+
151
+ Emily Chen, Kristina Lerman, and Emilio Ferrara. 2020. Covid-19: The first public coronavirus twitter dataset.
152
+
153
+ J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019. Improved lexically constrained decoding for translation and monolingual rewriting. In Proceedings of the Annual Meeting of the North American Association of Computational Linguistics (NAACL).
154
+
155
+ Timo Möller, Anthony Reina, Raghavan Jayakumar, and Malte Pietsch. 2020. Covid-qa: A question answer dataset for covid-19.
156
+
157
+ SE Robertson, S Walker, MM Beaulieu, M Gatford, and A Payne. 1996. Okapi at trec-4. NIST special publication, (500236):73-96.
158
+
159
+ Stephen E. Robertson and Steve Walker. 1994. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In SIGIR'94, pages 232-241. Springer.
160
+
161
+ Keisuke Sakaguchi and Benjamin Van Durme. 2018. Efficient online scalar annotation with bounded support. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 208-218, Melbourne, Australia. Association for Computational Linguistics.
164
+
165
+ Raphael Tang, Rodrigo Nogueira, Edwin Zhang, Nikhil Gupta, Phuong Cam, Kyunghyun Cho, and Jimmy Lin. 2020. Rapidly bootstrapping a question answering dataset for COVID-19.
166
+
167
+ Jerry Wei, Chengyu Huang, Soroush Vosoughi, and Jason Wei. 2020. What are people asking about COVID-19? A question classification dataset.
168
+
169
+ <table><tr><td>Source Name</td><td># of Question-Answer Pairs</td></tr><tr><td>North Dakota State Government</td><td>305</td></tr><tr><td>Vermont Department of Health</td><td>151</td></tr><tr><td>NYTimes</td><td>118</td></tr><tr><td>CNN</td><td>106</td></tr><tr><td>Kansas Department of Health and Environment</td><td>92</td></tr><tr><td>FDA</td><td>76</td></tr><tr><td>Oregon Public Health Division</td><td>75</td></tr><tr><td>Johns Hopkins Bloomberg School of Public Health</td><td>57</td></tr><tr><td>FloridaGov</td><td>45</td></tr><tr><td>Texas Human Resources</td><td>40</td></tr><tr><td>National Foundation for Infectious Diseases</td><td>33</td></tr><tr><td>AVMA</td><td>32</td></tr><tr><td>WHOMyth</td><td>29</td></tr><tr><td>Cleveland Clinic</td><td>29</td></tr><tr><td>Public Health Agency of Canada</td><td>28</td></tr><tr><td>Ministero della Salute, IT</td><td>16</td></tr><tr><td>JHU Medicine</td><td>13</td></tr><tr><td>JHU HUB</td><td>7</td></tr><tr><td>Hawaii State Government</td><td>4</td></tr><tr><td>Delaware State Government</td><td>1</td></tr><tr><td>GOV Polska</td><td>154</td></tr><tr><td>Bundesministerium für Gesundheit (BMG)</td><td>201</td></tr><tr><td>FHM, Folkhälsomyndigheten</td><td>142</td></tr><tr><td>Ministero della Salute, IT</td><td>16</td></tr><tr><td>World Health Organization (WHO)</td><td>121</td></tr><tr><td>Bundesministerium für Wirtschaft und Energie</td><td>34</td></tr><tr><td>Berliner Senat</td><td>48</td></tr><tr><td>European Centre for Disease Prevention and Control</td><td>47</td></tr><tr><td>Bundesanstalt für Arbeitsschutz und Arbeitsmedi...</td><td>35</td></tr><tr><td>Bundesministerium für Arbeit und Soziales (BMAS)</td><td>32</td></tr><tr><td>Bundesagentur für Arbeit</td><td>11</td></tr><tr><td>Presse- und Informationsamt der Bundesregierung</td><td>16</td></tr><tr><td>Centers for Disease Control and Prevention (CDC)</td><td>13</td></tr><tr><td>Robert Koch Institute (RKI)</td><td>4</td></tr><tr><td>Total</td><td>2115</td></tr></table>
170
+
171
+ Table 1: Number of question-answer pairs for each source in the dataset scraped on June 25th. Some of these sources contain more than one website. The bottom half lists the web sources in our dataset that we extract using deepset's scrapers.
172
+
173
+ ## Protocol
174
+
175
+ Assigning relevancy on a 0-100% scale introduces a lot of subjectivity. To help standardize our thought process and our understanding of how to rank relevancy, we systematically parse out what each question is discussing.
176
+
177
+ Please note that these examples show only one relevant question at a time. As you can see in the description of the task and the picture above, the actual task presents one user question and five relevant questions to review.
178
+
179
+ ## Example 1:
180
+
181
+ User Question: How many people have it in my town
182
+
183
+ Relevant Question: How long do people have to isolate for?
184
+
185
+ Thought Process: The relevant question is focused on COVID-19, isolation, and duration. The new question is focused on COVID-19 and prevalence.
186
+
187
+ Relevancy Scale: 0% relevant
188
+
189
+ - They are both talking about COVID-19, but don't overlap significantly beyond that.
190
+
191
+ ## Example 2:
192
+
193
+ Relevant Question: It would be great to hear more about the symptoms. Cough and difficulty breathing isn't too specific (especially during allergy season!).
194
+
195
+ New Question: Is cough but no fever a symptom?
196
+
197
+ Thought Process: Both of these questions are asking about COVID-19 symptomatology - however the first is discussing how it is different from other illnesses (like the flu, cold, and allergies). The second question is specifically focused on COVID-19 symptoms solely, and what those are.
198
+
199
+ Relevancy Scale: 80-90% relevant
200
+
201
+ - This is because they are both talking about COVID-19 symptoms, but the answers to the questions differ slightly.
202
+
203
+ Figure 4: Protocol and examples for the expert annotators to align unanswered questions from Twitter with the question-answer pairs in our dataset.
204
+
205
+ ![01963e00-7331-732a-8a99-7147c8fd9180_6_186_519_1280_1184_0.jpg](images/01963e00-7331-732a-8a99-7147c8fd9180_6_186_519_1280_1184_0.jpg)
206
+
207
+ Figure 5: Screenshot of our expert annotation interface.
208
+
209
+ ![01963e00-7331-732a-8a99-7147c8fd9180_7_184_454_1279_1317_0.jpg](images/01963e00-7331-732a-8a99-7147c8fd9180_7_184_454_1279_1317_0.jpg)
210
+
211
+ Figure 6: Screenshot of our expert annotation interface.
212
+
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/0X9O6VcYe_/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,131 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § COLLECTING VERIFIED COVID-19 QUESTION ANSWER PAIRS
2
+
3
+ Adam Poliak ${}^{1}$ , Max Fleming, Cash Costello, Kenton Murray, Shivani Pandya, Darius Irani, Milind Agarwal, Udit Sharma, Shuo Sun, Nicola Ivanov, Mahsa Yarmohammadi, Lingxi Shang, Kaushik Srinivasan, Seolhwa Lee, Xu Han, Smisha Agarwal, João Sedoc ${}^{2}$
4
+
5
+ ${}^{1}$ Barnard College, Johns Hopkins University, ${}^{2}$ NYU Stern School of Business https://covid-19-infobot.org/
6
+
7
+ § ABSTRACT
8
+
9
+ We release a dataset of over 2,100 COVID-19 related frequently asked question-answer pairs scraped from over 40 trusted websites. We include an additional 24,000 questions pulled from online sources that have been aligned by experts with existing answered questions from our dataset. This paper describes our efforts in collecting the dataset and summarizes the resulting data. Our dataset is automatically updated daily and available at https://github.com/JHU-COVID-QA/scraping-qas. So far, this data has been used to develop a chatbot providing users information about COVID-19. We encourage others to build analytics and tools upon this dataset as well.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ With the quick spread of COVID-19, misinformation has rapidly spread. ${}^{1}$ Misinformation around the use of certain drugs for the prevention of COVID-19 has had fatal outcomes, and stigmatization, guided by misinformation, of certain communities as vectors of the virus undermines the long-term welfare of our society. We are developing a natural language processing (NLP)-backed informational chatbot targeted at comprehensive COVID-19 information and misinformation. Users can interact with our chatbot on different platforms to access information about COVID-19, available care, and other topics of interest. ${}^{2}$
14
+
15
+ To aid in this effort, we aggregate factual information in the form of verified questions and answers to help answer frequently asked questions about the pandemic. We employ three main aggregation efforts in tandem: 1) generating high quality and accurate information from domain experts, i.e. Public Health researchers at Johns Hopkins University; 2) automatically scraping frequently asked questions and answers from online trusted sources, e.g. newspapers and government agencies; and 3) automatically ranking and manually aligning additional questions from social media with the scraped questions and answers in our dataset. This paper primarily describes our efforts to extract high quality content from trustworthy websites and domain experts. Our effort has resulted in a publicly available dataset that currently contains over 2,100 questions and answers from more than 40 webpages. The dataset is available at https://covid-19-infobot.org/data/. Since we are actively scraping more websites and re-scrape all sites at least once a day, these numbers are updated daily. ${}^{3}$
16
+
17
+ § 2 CREATING OUR FAQ DATASET
18
+
19
+ We create our publicly available dataset of over 2,100 question-answer pairs by aggregating FAQs from trusted news sources. ${}^{4}$ We choose websites to scrape based on three broad criteria: 1) the informativeness and trustworthiness of the website; 2) the ease of scraping frequently asked question-answer pairs from the website; and 3) the number of questions and answers on the website.
20
+
21
+ We use a straightforward scraping process that enables undergraduate students to contribute to our efforts. We developed a Python library for students to easily add scrapers to our project. As demonstrated in the example in Figure 1, our library requires each question-answer pair (and its metadata) to be stored as a simple dictionary. The library automatically adds this information to our set of question-answer pairs. Additionally, the library handles updating answers to questions in our dataset if a previously scraped website updates its information.
22
+
23
+ ${}^{3}$ The dataset's statistics described in this paper are based on a snapshot of the data as of June 25th, 2020, corresponding with https://github.com/JHU-COVID-QA/scraping-qas/tree/a446c00c318e02cad5188cec359b9d649d8c4933
24
+
25
+ ${}^{4}$ We additionally have over 300 question-answer pairs manually created by Public Health experts. We plan on including these in our publicly available dataset at a later date.
26
+
27
+ ${}^{1}$ https://www.newsguardtech.com/coronavirus-misinformation-tracking-center/
28
+
29
+ ${}^{2}$ https://covid-19-infobot.org/
30
+
31
+ converter.addExample({
+     'sourceUrl': 'example.com',
+     'sourceName': "example",
+     "needUpdate": True,
+     "typeOfInfo": "QA",
+     "isAnnotated": False,
+     "responseAuthority": "",
+     "question": '<a href="example.com/dir1">What is COVID-19?</a>',
+     "answer": '<p><a href="example.com/dir2">Coronaviruses</a> are a large family of viruses.</p>',
+     "hasAnswer": True,
+     "targetEducationLevel": "NA",
+     "topic": ['topic1', 'topic2'],
+     "extraData": {'example extra field': 'example value'},
+     "targetLocation": "US",
+     "language": 'en',
+ })
62
+
63
+ Figure 1: Screenshot of our documentation describing the data and metadata stored for each scraped question-answer pair.
64
+
65
+ This has enabled students to efficiently join the project and contribute immediately. Further documentation is available at https://github.com/JHU-COVID-QA/scraping-qas and we encourage others to join our efforts.
66
+
67
+ § 2.1 METADATA
68
+
69
+ For each scraped question-answer pair, we extract relevant metadata for our chatbot and other NLP analytics. The metadata includes information about the source of each question-answer pair (we include both the source name and the URL) and the date when the question-answer was last scraped from or updated on the website. Additionally, if the information on the website is targeted for a specific geographic area, we include that in our metadata as well.
70
+
71
+ § 2.2 LEVERAGING EXISTING SCRAPERS
72
+
73
+ We leverage existing scrapers for collecting question-answer pairs for COVID-19. 874 of our examples come from scrapers released by deepset. ${}^{5}$ Following deepset's lead, we open-source our scrapers as well.
74
+
75
+ § 2.3 CONTINUOUS SCRAPING
76
+
77
+ As our understanding of COVID-19 rapidly evolves, trustworthy sources update the information they release. Therefore, each day, we automatically re-run the web scrapers to find new information. This enables us to add new question-answers or update answers to existing questions in our dataset.
78
+
79
+ If a previously scraped question-answer is removed from a website, we remove that example from our dataset. ${}^{6}$ Questions and answers that were removed from our dataset are still available in our history, since we archive each day's dataset. In turn, the quality of our dataset is constantly evolving and improving.
80
+
81
+ § 3 DATA
82
+
83
+ The described effort resulted in a dataset that is evolving daily. The June 25th version contains over 2,100 questions and answers scraped from 40 websites. We list the number of question-answer pairs extracted from each source in Table 1. Our dataset contains some examples in languages besides English, owing to deepset scraping websites in multiple languages. Figure 2 plots the number of question-answer pairs in each of the five languages: English, German, Polish, Italian, and Swedish. Roughly 70% of our examples are in English. As we release more data, we will include further analysis of the growing dataset.
84
+
85
+ Websites might update or change how they store information. This is why the current version of our dataset contains just 1 example from the Delaware State Government webpage. The May 20th version of our dataset contains 22 examples from this website.
86
+
87
+ ${}^{6}$ We assume that a website will remove information about COVID-19 that is no longer accurate.
88
+
89
+ ${}^{5}$ https://github.com/deepset-ai/COVID-QA/
90
+
91
+
92
+
93
+ Figure 2: Number of question/answers for each language in our dataset.
94
+
95
+ § 4 MANUALLY ALIGNING ADDITIONAL QUESTIONS AND ANSWERS
96
+
97
+ Since the internet contains many more questions that are not answered, we additionally collected questions and align them with the question-answer pairs in our dataset. We leverage information retrieval techniques to match these unanswered questions with questions in our dataset and then rely on domain experts to verify each aligned question-question-answer (QQA) pair. In this section, we provide details for each of these steps.
98
+
99
+ § 4.1 ONLINE QUESTION EXTRACTION
100
+
101
+ We downloaded 28 million tweets from the COVID- 19 Twitter Dataset (Chen et al.,2020), Qorona, ${}^{7}$ and CovidFaq ${}^{8}$ , extracted the questions from those resources, ${}^{9}$ sorted them by frequency, and discarded the questions that occurred less than four times. Then, we grouped semantically similar questions into 9,200 clusters. Next, we extracted the centers of the clusters and, using a state-of-the-art sentence re-writer (Hu et al., 2019), we generated three high quality paraphrases of each question. This resulted in a collection of over 27,000 unanswered questions about COVID-19.
102
+
103
+ § 4.2 ALIGNING EXTRACTED QUESTIONS WITH EXISTING QUESTIONS AND ANSWERS
104
+
105
+ We worked with public health experts to align these unanswered questions with our verified question-answer pairs (section 3). For each of these 27,000 questions, we used a BM25 model (Robertson and Walker, 1994; Robertson et al., 1996) to determine the most similar answered questions in our dataset. ${}^{10}$ Following the EASL annotation protocol (Sakaguchi and Van Durme, 2018), for each unanswered Twitter question, we presented public health experts with the five most similar QAs from our dataset. Based on a formal protocol developed by a senior Public Health researcher on our team (Figure 4), we asked the experts to determine, on a scale from 0 to 100, how relevant or similar the QA from our dataset is to the unanswered question.
106
+
107
+
108
+
109
+ Figure 3: Histogram of the number of QQAs (y-axis) annotated with a score of at most the corresponding x-axis value. Over 17.5K examples are labeled between 0 and 1.
110
+
111
+ For this annotation effort, we leveraged Turkle, an open-source, locally hosted clone of Amazon Mechanical Turk developed by the JHU Human Language Technology Center of Excellence. ${}^{11}$ Figure 5 and Figure 6 illustrate our annotation interface.
112
+
113
+ As part of this protocol, expert annotators could indicate whether a question was not relevant to COVID-19 or whether an existing answer was no longer correct. We removed such labeled examples from our set. This effort resulted in 24,240 annotated QQAs. Figure 3 plots the distribution of labels annotated for QQAs. Over 18,000 examples were judged to be less than 1% relevant, indicating that the majority of the questions extracted from Twitter are irrelevant to the answered questions in our dataset. These additional examples can be used to further train a chatbot to answer questions about COVID-19.
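As a hypothetical sketch of how the resulting 0-100 relevance labels might be post-processed (the bucket width, the 80% alignment threshold, and the toy QQA pairs below are assumptions for illustration, not choices stated in the paper):

```python
from collections import Counter

def bucket(scores, width=10):
    """Count relevance scores per [k*width, (k+1)*width) bucket,
    as in a Figure 3 style histogram. Bucket width is an assumption."""
    top = 100 // width - 1
    return Counter(min(int(s // width), top) for s in scores)

def strongly_aligned(qqa_pairs, threshold=80):
    """Keep only question-question pairs whose expert relevance score
    meets a (hypothetical) alignment threshold."""
    return [(q1, q2, s) for (q1, q2, s) in qqa_pairs if s >= threshold]

# Toy QQA pairs (unanswered question, dataset question, expert score 0-100).
pairs = [
    ("How many people have it in my town", "How long do people have to isolate for?", 0),
    ("Is cough but no fever a symptom?", "Tell me more about the symptoms", 85),
]
kept = strongly_aligned(pairs)
```

Such a filter would retain only the well-aligned pairs for downstream chatbot training.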
114
+
115
+ ${}^{7}$ https://github.com/allenai/Qorona
116
+
117
+ ${}^{8}$ https://github.com/dialoguemd/covidfaq
118
+
119
+ ${}^{9}$ Corona and CovidFaq specifically contain questions. We extract questions from the Twitter dataset by determining whether a sentence from a tweet either ends with a question mark or starts with a word from a provided list (e.g., "who", "when", "where", etc.).
120
+
121
+ ${}^{10}$ We trained the BM25 model by using the answers that previously were manually aligned by experts with the candidate questions. We then calculated scores between the terms in the input question and terms in the candidate answers. We used the implementation in Elasticsearch and relied on the default parameters.
122
+
123
+ "https://github.com/hltcoe/turkle
124
+
125
+ § 5 CONCLUSION
126
+
127
+ We have presented our growing dataset of over 2,100 question-answer pairs, created by scraping over 40 websites. We also discussed other data we collected and annotated that may be beneficial to others in the community. Our evolving dataset is complementary to other recent COVID-19 QA datasets, e.g. Tang et al. (2020)'s 124 question-article pairs, Wei et al. (2020)'s 1,690 questions and 403 answers, and Möller et al. (2020)'s dataset.
128
+
129
+ § ACKNOWLEDGMENTS
130
+
131
+ We thank the reviewers for their insightful comments. This work was supported in part by DARPA KAIROS (FA8750-19-2-0034). The views and conclusions contained in this work are those of the authors and should not be interpreted as representing official policies or endorsements by DARPA or the U.S. Government.
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/37zyB5yuPXi/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,169 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Public Sentiment on Governmental COVID-19 Measures in Dutch Social Media
2
+
3
+ Shihan Wang ${}^{1}$ , Marijn Schraagen ${}^{1}$ , Erik Tjong Kim Sang ${}^{2}$ , and Mehdi Dastani ${}^{1}$
4
+
5
+ ${}^{1}$ Department of Information and Computing Sciences, Utrecht University
6
+
7
+ ${}^{2}$ Netherlands eScience Center
8
+
9
+ The Netherlands
10
+
11
+ s.wang2@uu.nl, m.p.schraagen@uu.nl,
12
+
13
+ e.tjongkimsang@esciencecenter.nl, m.m.dastani@uu.nl
14
+
15
+ ## Abstract
16
+
17
+ Public sentiment (the opinion, attitude or feeling that the public expresses) is a factor of interest for government, as it directly influences the implementation of policies. Given the unprecedented nature of the COVID-19 crisis, having an up-to-date representation of public sentiment on governmental measures and announcements is crucial. In this paper, we analyse Dutch public sentiment on governmental COVID-19 measures from text data collected across three online media sources (Twitter, Reddit and Nu.nl) from February to September 2020. We apply sentiment analysis methods to analyse polarity over time, as well as to identify stance towards two specific pandemic policies regarding social distancing and wearing face masks. The presented preliminary results provide valuable insights into the narratives shown in vast social media text data, which help understand the influence of COVID-19 measures on the general public.
18
+
19
+ ## 1 Introduction
20
+
21
+ Public sentiment (the opinion, attitude or feeling that the public expresses) can directly influence the implementation of policies (Burstein, 2003), therefore it is crucial for policy makers to know the public sentiment on chosen policies and to take this sentiment into account when deciding on new policies. Given the unprecedented nature of the COVID-19 crisis, having an up-to-date representation of public sentiment on governmental measures and announcements becomes even more important. However, the 'staying-at-home' policy makes analysing public sentiment by means of face-to-face research methods like interviews and questionnaires challenging, while classical online surveys can only be administered at a limited frequency, delaying analysis results.
22
+
23
+ With the rapid growth of online social media, monitoring public sentiment on platforms like Twitter and Reddit allows for much more and frequent measurements and a better indication of changes over time (Tan et al., 2013; Wang and Terano, 2015). Thus, we apply natural language processing (NLP) approaches on Dutch social media to understand the temporal variation of Dutch public sentiment during the COVID-19 outbreak period.
24
+
25
+ Our analysis covers two perspectives of sentiment analysis: polarity analysis (whether a message is positive or negative) and stance analysis (whether a message is supportive of or against a given target (Li and Caragea, 2019)). In our study, the given targets are policy measures taken by the Dutch government. We particularly focus on the public attitude towards two specific measures, i.e., social distancing and wearing face masks. To validate our work for the broader Dutch public, we collected data from three different online media platforms (Twitter, Reddit and Nu.nl) to perform a comparison study. The preliminary results of analysis are presented in this paper. As a summary of our contributions, we provide a first sentiment-oriented overview of Dutch public discussion around COVID-19 across multiple social media sources and explore the practical usage of NLP approaches for understanding the influence of COVID-19 measures on the general public.
26
+
27
+ ## 2 Related work
28
+
29
+ Social media sentiment has previously been analyzed during pandemics such as H1N1 (Chew and Eysenbach, 2010). Also the COVID-19 pandemic, despite being a relatively new topic, has attracted many researchers from different areas, including social media analysis. Abd-Alrazaq et al. (2020) identified four main COVID-19 related themes on Twitter: virus origin, contamination sources, preventive measures and impact on societies. Zhao et al. (2020) identified topics and sentiment related to COVID-19 in China. Samuel et al. (2020) conducted textual analyses of Twitter COVID-19 data to identify public fear sentiment. Several studies focused on bots (Ferrara, 2020) and misinformation (Kouzy et al., 2020; Singh et al., 2020) which influence opinions on social media. Chen et al. (2020) showed that positive polarity government messages on social media result in higher public engagement. Cinelli et al. (2020) identified COVID-19-related topics in various social media, including Reddit and Twitter. They found that the ratio of misinformation to reliable information is stable over time; however, Twitter has a larger percentage of misinformation than Reddit.
30
+
31
+ From a technical point of view, stance detection has recently received considerable attention in the NLP community (Küçük and Can, 2020), for which neural networks with word embeddings have proven to be effective (Yi-Chin Chen, 2017; Li and Caragea, 2019). An annotation and training approach for classifying hate speech in COVID-19 related tweets using embeddings was described by Cotik et al. (2020). Alternative approaches include unsupervised clustering for stance analysis (Darwish et al., 2020).
32
+
33
+ ## 3 Methodology
34
+
35
+ ### 3.1 Data collection
36
+
37
+ Data for polarity and stance analysis is collected from Twitter, Reddit and Nu.nl. For Twitter, all Dutch language tweets are collected via the Twitter streaming API (using the provided lang attribute) and are subsequently filtered using a set of keywords related to COVID-19 (as presented in Table 1). Reddit and Nu.nl are organized by topic, so for these two data sources all messages from Corona-related threads are used without keyword filtering. Nu.nl is a news website that allows people to comment on news articles and blogs. From this data source, our analysis concentrates on the comments instead of the contents of articles. The timespan of the datasets is February 27th 2020 (when the first COVID-19 patient was discovered in the Netherlands) until September 2020. The amount of collected messages is presented in Table 2.
38
+
39
+ ### 3.2 Data annotation and analysis
40
+
41
+ The goal of the analysis is to provide both general polarity analysis for COVID-19 related messages, and stance analysis towards particular subtopics. For polarity analysis we use the library pattern.nl (Smedt and Daelemans, 2012), for which no training is required. This library contains a lexicon of 3,918 Dutch polarity words, mostly adjectives, and 120 language-independent emoji.
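A minimal sketch of lexicon-based polarity scoring in the style of pattern.nl: average the polarity of known sentiment words in a message. The tiny lexicon and its polarity values below are illustrative assumptions; the actual library ships 3,918 Dutch entries.

```python
# Hypothetical miniature lexicon: word -> polarity in [-1, 1].
LEXICON = {
    "goed": 0.7,       # good
    "geweldig": 0.9,   # great
    "slecht": -0.7,    # bad
    "vreselijk": -0.9, # terrible
}

def polarity(message: str) -> float:
    """Average polarity over the lexicon words found in the message;
    0.0 when no lexicon word occurs."""
    hits = [LEXICON[w] for w in message.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0
```

The daily average polarity curves discussed in section 4.1 would then be averages of such per-message scores.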
42
+
43
+ <table><tr><td>Category</td><td>Keyword</td><td>Translation</td></tr><tr><td rowspan="2">Disease</td><td>corona</td><td/></tr><tr><td>covid</td><td/></tr><tr><td rowspan="2">Health care</td><td>huisarts</td><td>doctor</td></tr><tr><td>mondkapje</td><td>face mask</td></tr><tr><td>Government</td><td>rivm</td><td>national health organization</td></tr><tr><td rowspan="3">Social</td><td>flattenthecurve</td><td/></tr><tr><td>blijfthuis</td><td>stay home</td></tr><tr><td>houvol</td><td>hang in there</td></tr></table>
44
+
45
+ Table 1: COVID-19 keywords for filtering topic tweets.
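The keyword filter of Table 1 can be sketched as follows; case-insensitive substring matching is an assumption, as the paper does not specify the matching details.

```python
# Keywords from Table 1 (matching strategy is an assumption).
KEYWORDS = ["corona", "covid", "huisarts", "mondkapje", "rivm",
            "flattenthecurve", "blijfthuis", "houvol"]

def is_covid_related(tweet: str) -> bool:
    """Keep a Dutch tweet if it contains any COVID-19 keyword."""
    text = tweet.lower()
    return any(kw in text for kw in KEYWORDS)
```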
46
+
47
+ <table><tr><td>Month</td><td>Tweets</td><td>$\mathbf{{Nu}.{nl}}$</td><td>Reddit</td></tr><tr><td>February</td><td>278,082</td><td>25,721</td><td>5</td></tr><tr><td>March</td><td>3,152,638</td><td>207,957</td><td>28,038</td></tr><tr><td>April</td><td>2,115,728</td><td>193,530</td><td>11,943</td></tr><tr><td>May</td><td>1,264,650</td><td>146,832</td><td>6,452</td></tr><tr><td>June</td><td>921,481</td><td>90,698</td><td>3,218</td></tr><tr><td>July</td><td>922,992</td><td>85,085</td><td>2,985</td></tr><tr><td>August</td><td>1,078,644</td><td>105,047</td><td>4,738</td></tr><tr><td>September</td><td>1,115,057</td><td>114,480</td><td>6,486</td></tr></table>
48
+
49
+ Table 2: Number of messages per dataset over time.
50
+
51
+ For stance analysis, we trained classifiers based on manually annotated data. Two topics related to governmental measures are selected, the social distancing measure (i.e. all people keep 1.5 metres distance from each other except when they are living in the same house) and whether or not the government should enforce face mask use by the general public (the Dutch government opposed face mask wearing until recently). We define three possible labels (support, reject, other) for both topics.
52
+
53
+ For the social distancing measure, messages are selected using the following pattern: anderhalve (one and a half) followed by meter, or 1.5 or 1,5 followed by $m$ , or afstand (distance) and hou (keep) anywhere in the message in any order. ${}^{1}$ This resulted in 994,052 tweets, 2,930 Reddit comments and 40,429 Nu.nl comments. In total, 5,732 messages (randomly selected from the tweets data) were manually annotated by a single annotator (a second annotator was involved to validate the annotation results), answering the question: Does the message support or reject the social distancing measure announced by the Dutch government on 15 March 2020?
54
+
55
+ ---
56
+
57
+ ${}^{1}$ The following regular expression was implemented: 1[.,]5[ -]*m|afstand.*hou|hou.*afstand|anderhalve[ -]*meter
58
+
59
+ ---
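The selection pattern from the footnote can be applied directly with Python's re module; case-insensitive matching is an assumption, since the paper does not state the flags used.

```python
import re

# The social-distancing selection pattern from the footnote.
DISTANCING_RE = re.compile(
    r"1[.,]5[ -]*m|afstand.*hou|hou.*afstand|anderhalve[ -]*meter",
    re.IGNORECASE,  # assumption: the paper does not specify matching flags
)

def mentions_distancing(message: str) -> bool:
    """True if the message matches the social distancing selection pattern."""
    return DISTANCING_RE.search(message) is not None
```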
60
+
61
+ For the face mask discussion, messages containing the word mondkapje (face mask) were selected. 578 tweets and 744 Nu.nl comments have been annotated manually by the same annotator, answering the question Does this message support or reject the policy on advising against the use of face masks by the general public? Note that a rejection of the face mask policy actually entails that the person is in favor of (mandatory) face masks.
62
+
63
+ Using the annotated data, for each topic a classifier is trained with the fastText library for Python (Bojanowski et al., 2017; Joulin et al., 2017). The fastText classifier is a linear feed-forward network trained using stochastic gradient descent. The network contains an embedding layer for subword features. In our current experiments the subword embeddings are trained on the fly, as preliminary experiments showed that using pre-trained embeddings did not increase the performance. After training and evaluating these models, the classifiers are used to predict the stance of all other messages in the dataset in order to perform a comprehensive stance analysis for the target topics.
64
+
65
+ ## 4 Results
66
+
67
+ ### 4.1 Polarity analysis
68
+
69
+ We computed daily average polarity scores for the three data sources and found that trends of public polarity show stable fluctuations over time (the results for the Twitter data are shown in Figure 1). Some interesting links can be found between press conferences and trends of public sentiment. For instance, general polarity scores reached a minimum at the March 12th press conference of the Dutch government (A in Figure 1), when the first lockdown measures were announced. More recently, the COVID-19 related polarity reached a peak on May 19th (B in Figure 1), when the government announced the first release measures. In addition, the COVID-19 related tweets are generally more negative than general Dutch tweets, while the polarity score regarding social distancing was decreasing in the first months. Similar results were found in the other two data sources.
70
+
71
+ ### 4.2 Stance analysis training
72
+
73
+ For training the fastText model for stance analysis, we used ten-fold cross-validation on the human-labeled tweets in combination with a grid search to determine the optimal word vector dimension (10-300), number of training epochs (10-500) and learning rate (0.05-1.0). For the setup of the neural network itself we used the default settings of the fastText library. The dataset was divided into 80% training, 10% validation (for parameter optimization), and 10% test examples. Table 3 shows the parameter settings and classification accuracy results. The baseline method labels the given message as the majority class (support for distancing and reject for face masks).
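The grid search described above can be sketched as follows. The candidate values and the stand-in evaluate() function are illustrative assumptions; the real search trained a fastText model per configuration and scored it with ten-fold cross-validation.

```python
import itertools

# Hypothetical candidate values within the ranges stated in the text.
GRID = {
    "dim": [10, 100, 300],
    "epoch": [10, 100, 500],
    "lr": [0.05, 0.2, 1.0],
}

def evaluate(params):
    # Stand-in for training/validating a fastText model with these
    # parameters; here it simply peaks at Table 3's distancing setup.
    best = {"dim": 10, "epoch": 10, "lr": 0.2}
    return 0.65 - 0.01 * sum(params[k] != best[k] for k in best)

def grid_search(grid, score_fn):
    """Exhaustively score every parameter combination, keep the best."""
    keys = sorted(grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = grid_search(GRID, evaluate)
```

With a real training backend, score_fn would return the mean validation accuracy across the ten folds.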
74
+
75
+ ![01963df0-6074-75cc-8ad0-fc06261a8dc7_2_848_171_608_326_0.jpg](images/01963df0-6074-75cc-8ad0-fc06261a8dc7_2_848_171_608_326_0.jpg)
76
+
77
+ Figure 1: Daily average polarity scores of all Dutch tweets, the COVID-19 topic related tweets and two measures-related tweets.
78
+
79
+ <table><tr><td/><td>distancing</td><td>face masks</td></tr><tr><td>vector length</td><td>10</td><td>300</td></tr><tr><td>learning rate</td><td>0.2</td><td>0.2</td></tr><tr><td>epochs</td><td>10</td><td>10</td></tr><tr><td>baseline accuracy</td><td>0.56</td><td>0.42</td></tr><tr><td>validation accuracy</td><td>0.65</td><td>0.56</td></tr><tr><td>test accuracy</td><td>0.65</td><td>0.55</td></tr></table>
80
+
81
+ Table 3: Stance classification parameters and results
82
+
83
+ An additional experiment was performed to investigate the effect of training set size on classification accuracy. As shown in Table 4, classification accuracy improved almost linearly with training set size; therefore, adding more annotated data is expected to increase the accuracy further.
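This experiment can be sketched as a learning-curve loop over subsamples of the annotated data. Here `toy_train` is a hypothetical stand-in for retraining the fastText classifier on each subset and measuring accuracy; its log-linear shape is illustrative only, not the reported numbers.

```python
import math
import random

SIZES = [100, 200, 500, 1000, 5000]

def learning_curve(train_fn, train, sizes=SIZES, seed=0):
    """Accuracy as a function of the number of annotated training examples."""
    rng = random.Random(seed)
    curve = {}
    for size in sizes:
        if size > len(train):
            continue  # e.g. too few annotated face-mask messages for 5,000
        subset = rng.sample(train, size)
        curve[size] = train_fn(subset)
    return curve

# Hypothetical accuracy model: grows roughly log-linearly with size.
def toy_train(subset):
    return round(0.40 + 0.06 * math.log10(len(subset)), 2)

curve = learning_curve(toy_train, list(range(6000)))
```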
84
+
85
+ <table><tr><td>training set size</td><td>distancing</td><td>face masks</td></tr><tr><td>100</td><td>0.56</td><td>0.46</td></tr><tr><td>200</td><td>0.56</td><td>0.49</td></tr><tr><td>500</td><td>0.58</td><td>0.52</td></tr><tr><td>1,000</td><td>0.60</td><td>0.55</td></tr><tr><td>5,000</td><td>0.65</td><td>-</td></tr></table>
86
+
87
+ Table 4: The relation between training set size and classification performance, measured by accuracy
88
+
89
+ ![01963df0-6074-75cc-8ad0-fc06261a8dc7_3_209_166_576_314_0.jpg](images/01963df0-6074-75cc-8ad0-fc06261a8dc7_3_209_166_576_314_0.jpg)
90
+
91
+ Figure 2: Development of public support for the March policy on social distancing.
92
+
93
+ ### 4.3 Stance analysis application
94
+
95
+ We applied the trained classifier for the social distancing topic to all data and present the results in Figure 2. A similar trend can be observed across the three platforms, which validates our stance analysis results (due to the small size of the Reddit data, its results may contain considerable noise). Public support is initially high in March, but gradually decreases in the following months (until June 15th). This is consistent with reports of the Dutch health authority RIVM (RIVM, 2020a). Figure 2 also shows increasing support for the social distancing measure after June 15th, which has been confirmed by national questionnaire results in September (RIVM, 2020b). This finding supports the validity and timeliness of our analysis.
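The aggregation behind the per-day support trend can be sketched as follows: predicted labels are grouped by day and the share of supportive messages is computed. Excluding the 'other' class from the denominator is our assumption about one reasonable way to measure support; the dates and labels are illustrative.

```python
from collections import Counter, defaultdict
from datetime import date

def daily_support_share(predictions):
    """predictions: iterable of (day, label) with label in
    {'support', 'reject', 'other'}; returns per-day support share
    among stance-bearing (support/reject) messages."""
    per_day = defaultdict(Counter)
    for day, label in predictions:
        per_day[day][label] += 1
    shares = {}
    for day, counts in per_day.items():
        stance_total = counts["support"] + counts["reject"]
        if stance_total:  # skip days with only 'other' messages
            shares[day] = counts["support"] / stance_total
    return shares

preds = [
    (date(2020, 3, 20), "support"),
    (date(2020, 3, 20), "support"),
    (date(2020, 3, 20), "reject"),
    (date(2020, 6, 10), "reject"),
    (date(2020, 6, 10), "other"),
]
shares = daily_support_share(preds)
```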
96
+
97
+ The classifier trained on annotated face mask-related messages was also applied to the remaining face-mask messages from the three data sources. As shown in Figure 3, most tweets (95%) and most Nu.nl comments (90%) are against the policy. There were too few Reddit posts on this topic to obtain accurate measurements, but the stance was also less than 30% supportive. A possible reason for the stance being more supportive on Nu.nl than on Twitter is that comments on Nu.nl are actively moderated while tweets are not, leaving less room for trolls to attack government policy. Another reason could be the general prevalence of polarized opinions on Twitter (DiResta, 2018). This finding also indicates the importance of capturing public reactions across different social media sources.
98
+
99
+ ## 5 Discussion and future work
100
+
101
+ We present the preliminary results of sentiment analysis of Dutch social media data related to the COVID-19 measures across three sources. We concentrate on public polarity and stance towards two measures taken by the Dutch government against the spread of the corona virus. We found that a large number of messages on Dutch online media were related to the COVID-19 pandemic and the polarity of those messages tended to be more negative than general messages.
102
+
103
+ ![01963df0-6074-75cc-8ad0-fc06261a8dc7_3_860_166_579_318_0.jpg](images/01963df0-6074-75cc-8ad0-fc06261a8dc7_3_860_166_579_318_0.jpg)
104
+
105
+ Figure 3: Development of public support for the March policy on face mask wearing.
106
+
107
+ We assessed the stance of Dutch social media messages on the national advice on social distancing and wearing face masks. The analysis showed that people widely supported social distancing when the measure was announced in March; support then declined until June and increased again recently. We think this phenomenon may be related to the pandemic situation in the Netherlands (the number of reported COVID-19 patients has decreased and increased during this time as well). With respect to face masks, the analysis showed that the public thinks face masks are useful (contrary to the governmental measure). The rejection rates remained relatively stable over the past months.
108
+
109
+ In future work we would like to improve the performance of our stance classification approach. As shown in Table 4, training set size restricts the performance of the classifier. Therefore we will start annotating more data using crowd-sourcing techniques. Furthermore, we aim to train a general cross-topic stance classifier to assess new topics. Transfer learning (Zarrella and Marsh, 2016) could be an interesting approach for this task.
110
+
111
+ From a practical perspective, we are interested in the explainability of our stance analysis results. We are collaborating with social scientists, who use quantitative methods (e.g., digital questionnaires and interviews) to investigate the influence of policy measures on the general public. A comparison study is planned to validate and explain the temporal trends of Dutch public sentiment.
112
+
113
+ ## Acknowledgements
114
+
115
+ This work is funded by the Netherlands eScience Center under the project PuReGoMe (27020S04).
116
+
117
+ ## References
118
+
119
+ Alaa Abd-Alrazaq, Dari Alhuwail, Mowafa Househ, Mounir Hamdi, and Zubair Shah. 2020. Top concerns of tweeters during the covid-19 pandemic: Infoveillance study. Journal of Medical Internet Research, 22(4). E19016.
120
+
121
+ Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
122
+
123
+ Paul Burstein. 2003. The impact of public opinion on public policy: A review and an agenda. Political research quarterly, 56(1):29-40.
124
+
125
+ Qiang Chen, Min Chen, Wei Zhang, Ge Wang, Xiaoyue Ma, and Richard Evans. 2020. Unpacking the black box: how to promote citizen engagement through government social media during the COVID-19 crisis. Computers in Human Behavior, 110(106380).
126
+
127
+ Cynthia Chew and Gunther Eysenback. 2010. Pandemics in the age of Twitter: Content analysis of tweets during the 2009 H1N1 outbreak. PLoS ONE, 5(11):e14118.
128
+
129
+ Matteo Cinelli, Walter Quattrociocchi, Alessandro Galeazzi, Carlo Michele Valensise, Emanuele Brugnoli, Ana Lucia Schmidt, Paola Zola, Fabiana Zollo, and Antonio Scala. 2020. The COVID-19 social media infodemic. ArXiv preprint 2003.05004.
130
+
131
+ Viviana Cotik, Natalia Debandi, Franco Luque, Paula Miguel, Agustín Moro, Juan Manuel Pérez, Pablo Serrati, Joaquin Zajac, and Demián Zayat. 2020. A study of hate speech in social media during the COVID-19 outbreak. In Proceedings of the NLP COVID-19 Workshop Part 1.
132
+
133
+ Kareem Darwish, Peter Stefanov, Michaël Aupetit, and Preslav Nakov. 2020. Unsupervised user stance detection on Twitter. In Proceedings of the Fourteenth International AAAI Conference on Web and Social Media (ICWSM 2020), pages 141-152. AAAI.
134
+
135
+ René DiResta. 2018. Of Virality and Viruses: The Anti-Vaccine Movement and Social Media. In Social Media Storms and Nuclear Early Warning Systems Workshop. Hewlett Foundation, Menlo Park, CA.
136
+
137
+ Emilio Ferrara. 2020. What Types of COVID-19 Conspiracies are Populated by Twitter Bots? ArXiv:2004.09531.
138
+
139
+ Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427-431. Association for Computational Linguistics.
140
+
141
+ Ramez Kouzy, Joseph Abi Jaoude, Afif Kraitem, Molly B El Alam, Basil Karam, Elio Adib, Jabra Zarka, Cindy Traboulsi, Elie W Akl, and Khalil Baddour. 2020. Coronavirus goes viral: Quantifying the covid-19 misinformation epidemic on twitter. Cureus, 12(3). E7255.
146
+
147
+ Dilek Küçük and Fazli Can. 2020. Stance detection: A survey. ACM Computing Surveys (CSUR), 53(1):1-37.
148
+
149
+ Yingjie Li and Cornelia Caragea. 2019. Multi-task stance detection with sentiment and stance lexicons. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6300-6306.
150
+
151
+ RIVM. 2020a. Does the general public manage to follow the behavioral measures? Visited on 11 August 2020. (in Dutch).
152
+
153
+ RIVM. 2020b. Results study behavioral rules and wellbeing: Support. Visited on 17 September 2020. (in Dutch).
154
+
155
+ Jim Samuel, GG Ali, Md Rahman, Ek Esawi, Yana Samuel, et al. 2020. Covid-19 public sentiment insights and machine learning for tweets classification. Information, 11(6):314.
156
+
157
+ Lisa Singh, Shweta Bansal, Leticia Bode, Ceren Budak, Guangqing Chi, Kornraphop Kawintiranon, Colton Padden, Rebecca Vanarsdall, Emily Vraga, and Yanchen Wang. 2020. A first look at covid-19 information and misinformation sharing on twitter. ArXiv:2003.13907.
158
+
159
+ Tom De Smedt and Walter Daelemans. 2012. Pattern for Python. Journal of Machine Learning Research, pages 2063-2067.
160
+
161
+ Shulong Tan, Yang Li, Huan Sun, Ziyu Guan, Xifeng Yan, Jiajun Bu, Chun Chen, and Xiaofei He. 2013. Interpreting the public sentiment variations on Twitter. IEEE transactions on knowledge and data engineering, 26(5):1158-1170.
162
+
163
+ Shihan Wang and Takao Terano. 2015. Detecting rumor patterns in streaming social media. In 2015 IEEE International Conference on Big Data (Big Data), pages 2709-2715. IEEE.
164
+
165
+ Yi-Chin Chen, Zhao-Yang Liu, and Hung-Yu Kao. 2017. IKM at SemEval-2017 task 8: Convolutional neural networks for stance detection and rumor verification. In Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 465-469. ACL.
166
+
167
+ Guido Zarrella and Amy Marsh. 2016. Mitre at semeval-2016 task 6: Transfer learning for stance detection. arXiv preprint arXiv:1606.03784.
168
+
169
+ Yuxin Zhao, Sixiang Cheng, Xiaoyan Yu, and Huilan Xu. 2020. Chinese public's attention to the COVID- 19 epidemic on social media: Observational descriptive study. Journal of Medical Internet Research, 22(5):e18825.
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/37zyB5yuPXi/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,208 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § PUBLIC SENTIMENT ON GOVERNMENTAL COVID-19 MEASURES IN DUTCH SOCIAL MEDIA
2
+
3
+ Shihan Wang ${}^{1}$ , Marijn Schraagen ${}^{1}$ , Erik Tjong Kim Sang ${}^{2}$ , and Mehdi Dastani ${}^{1}$
4
+
5
+ ${}^{1}$ Department of Information and Computing Sciences, Utrecht University
6
+
7
+ ${}^{2}$ Netherlands eScience Center
8
+
9
+ The Netherlands
10
+
11
+ s.wang2@uu.nl, m.p.schraagen@uu.nl,
12
+
13
+ e.tjongkimsang@esciencecenter.nl, m.m.dastani@uu.nl
14
+
15
+ § ABSTRACT
16
+
17
+ Public sentiment (the opinion, attitude or feeling that the public expresses) is a factor of interest for government, as it directly influences the implementation of policies. Given the unprecedented nature of the COVID-19 crisis, having an up-to-date representation of public sentiment on governmental measures and announcements is crucial. In this paper, we analyse Dutch public sentiment on governmental COVID-19 measures from text data collected across three online media sources (Twitter, Reddit and Nu.nl) from February to September 2020. We apply sentiment analysis methods to analyse polarity over time, as well as to identify stance towards two specific pandemic policies regarding social distancing and wearing face masks. The presented preliminary results provide valuable insights into the narratives shown in vast social media text data, which help understand the influence of COVID-19 measures on the general public.
18
+
19
+ § 1 INTRODUCTION
20
+
21
+ Public sentiment (the opinion, attitude or feeling that the public expresses) can directly influence the implementation of policies (Burstein, 2003); therefore, it is crucial for policy makers to know the public sentiment on chosen policies and to take this sentiment into account when deciding on new policies. Given the unprecedented nature of the COVID-19 crisis, having an up-to-date representation of public sentiment on governmental measures and announcements becomes even more important. However, the 'staying-at-home' policy makes analysing public sentiment by means of face-to-face research methods like interviews and questionnaires challenging, while classical online surveys can only be conducted at a limited frequency, which delays the analysis results.
22
+
23
+ With the rapid growth of online social media, monitoring public sentiment on platforms like Twitter and Reddit allows for much more and frequent measurements and a better indication of changes over time (Tan et al., 2013; Wang and Terano, 2015). Thus, we apply natural language processing (NLP) approaches on Dutch social media to understand the temporal variation of Dutch public sentiment during the COVID-19 outbreak period.
24
+
25
+ Our analysis covers two perspectives of sentiment analysis: polarity analysis (whether a message is positive or negative) and stance analysis (whether a message is supportive of or against a given target (Li and Caragea, 2019)). In our study, the given targets are policy measures taken by the Dutch government. We particularly focus on the public attitude towards two specific measures, i.e., social distancing and wearing face masks. To validate our work for the broader Dutch public, we collected data from three different online media platforms (Twitter, Reddit and Nu.nl) to perform a comparison study. The preliminary results of analysis are presented in this paper. As a summary of our contributions, we provide a first sentiment-oriented overview of Dutch public discussion around COVID-19 across multiple social media sources and explore the practical usage of NLP approaches for understanding the influence of COVID-19 measures on the general public.
26
+
27
+ § 2 RELATED WORK
28
+
29
+ Social media sentiment has previously been analyzed during pandemics such as H1N1 (Chew and Eysenback, 2010). The COVID-19 pandemic, despite being a relatively new topic, has also attracted many researchers from different areas, including social media analysis. Abd-Alrazaq et al. (2020) identified four main COVID-19 related themes on Twitter: virus origin, contamination sources, preventive measures and impact on societies. Zhao et al. (2020) identified topics and sentiment related to COVID-19 in China. Samuel et al. (2020) conducted textual analyses of Twitter COVID-19 data to identify public fear sentiment. Several studies focused on bots (Ferrara, 2020) and misinformation (Kouzy et al., 2020; Singh et al., 2020) which influence opinions on social media. Chen et al. (2020) showed that positive polarity government messages on social media result in higher public engagement. Cinelli et al. (2020) identified COVID-19-related topics in various social media, including Reddit and Twitter. They found that the ratio of misinformation to reliable information was stable over time; however, Twitter has a larger percentage of misinformation compared to Reddit.
30
+
31
+ From a technical point of view, stance detection has recently received considerable attention in the NLP community (Küçük and Can, 2020), for which neural networks with word embeddings have proven to be effective (Yi-Chin Chen, 2017; Li and Caragea, 2019). An annotation and training approach for classifying hate speech in COVID-19 related tweets using embeddings was described by Cotik et al. (2020). Alternative approaches include unsupervised clustering for stance analysis (Darwish et al., 2020).
32
+
33
+ § 3 METHODOLOGY
34
+
35
+ § 3.1 DATA COLLECTION
36
+
37
+ Data for polarity and stance analysis is collected from Twitter, Reddit and Nu.nl. For Twitter, all Dutch language tweets are collected via the Twitter streaming API (using the provided lang attribute) and are subsequently filtered using a set of keywords related to COVID-19 (as presented in Table 1). Reddit and Nu.nl are organized by topic, so for these two data sources all messages from Corona-related threads are used without keyword filtering. Nu.nl is a news website that allows people to comment on news articles and blogs. From this data source, our analysis concentrates on the comments instead of the contents of articles. The timespan of the datasets is February 27th 2020 (when the first COVID-19 patient was discovered in the Netherlands) until September 2020. The number of collected messages is presented in Table 2.
38
+
39
+ § 3.2 DATA ANNOTATION AND ANALYSIS
40
+
41
+ The goal of the analysis is to provide both general polarity analysis for COVID-19 related messages, and stance analysis towards particular subtopics. For polarity analysis we use the library pattern.nl (Smedt and Daelemans, 2012), for which no training is required. This library contains a lexicon of 3918 Dutch polarity words, mostly adjectives, and 120 language-independent emoji.
42
+
43
+ Category     Keyword          Translation
+ Disease      corona           X
+              covid            X
+ Health care  huisarts         doctor
+              mondkapje        face mask
+ Government   rivm             national health organization
+ Social       flattenthecurve  X
+              blijfthuis       stay home
+              houvol           hang in there
72
+
73
+ Table 1: COVID-19 keywords for filtering topic tweets.
74
+
75
+ Month      Tweets     Nu.nl    Reddit
+ February   278,082    25,721   5
+ March      3,152,638  207,957  28,038
+ April      2,115,728  193,530  11,943
+ May        1,264,650  146,832  6,452
+ June       921,481    90,698   3,218
+ July       922,992    85,085   2,985
+ August     1,078,644  105,047  4,738
+ September  1,115,057  114,480  6,486
104
+
105
+ Table 2: Number of messages per dataset over time.
106
+
107
+ For stance analysis, we trained classifiers based on manually annotated data. Two topics related to governmental measures are selected, the social distancing measure (i.e. all people keep 1.5 metres distance from each other except when they are living in the same house) and whether or not the government should enforce face mask use by the general public (the Dutch government opposed face mask wearing until recently). We define three possible labels (support, reject, other) for both topics.
108
+
109
+ For the social distancing measure messages are selected using the following pattern: anderhalve (one and a half) followed by meter, or 1.5 or 1,5 followed by $m$ , or afstand (distance) and hou (keep) anywhere in the message in any order. ${}^{1}$ This resulted in 994,052 tweets, 2,930 Reddit comments and 40,429 Nu.nl comments. In total, 5,732 messages (randomly selected from tweets data) were manually annotated by a single annotator (a second annotator was involved to validate the annotation results), answering the question Does the message support or reject the social distancing measure announced by the Dutch government on 15 March 2020?
110
+
111
+ ${}^{1}$ The following regular expression was implemented: 1[.,]5[ -]*m|afstand.*hou|hou.*afstand|anderhalve[ -]*meter
112
+
113
+ For the face mask discussion, messages containing the word mondkapje (face mask) were selected. 578 tweets and 744 Nu.nl comments have been annotated manually by the same annotator, answering the question Does this message support or reject the policy on advising against the use of face masks by the general public? Note that a rejection of the face mask policy actually entails that the person is in favor of (mandatory) face masks.
114
+
115
+ Using the annotated data, for each topic a classifier is trained with the fastText library for Python (Bojanowski et al., 2017; Joulin et al., 2017). The fastText classifier is a linear feed-forward network trained using stochastic gradient descent. The network contains an embedding layer for subword features. In our current experiments the subword embeddings are trained on the fly, as preliminary experiments showed that using pre-trained embeddings did not increase the performance. After training and evaluating these models, the classifiers are used to predict the stance of all other messages in the dataset in order to perform a comprehensive stance analysis for the target topics.
116
+
117
+ § 4 RESULTS
118
+
119
+ § 4.1 POLARITY ANALYSIS
120
+
121
+ We computed daily average polarity scores for the three data sources and found that public polarity shows stable fluctuations over time (for instance, the results for the Twitter data are shown in Figure 1). Some interesting links can be found between press conferences and trends in public sentiment. For instance, general polarity scores reached a minimum at the March 12th press conference of the Dutch government (A in Figure 1), when the first lock-down measures were announced. More recently, the COVID-19 related polarity reached a peak on May 19th (B in Figure 1), when the government announced the first release measures. In addition, COVID-19 related tweets are generally more negative than general Dutch tweets, while the polarity score regarding social distancing was decreasing in the first months. Similar results were found in the other two data sources.
122
+
123
+ § 4.2 STANCE ANALYSIS TRAINING
124
+
125
+ For training the fastText model for stance analysis, we used ten-fold cross-validation on the human-labeled tweets in combination with a grid search to determine the optimal word vector dimension (10-300), number of training epochs (10-500) and learning rate (0.05-1.0). For the setup of the neural network itself we used the default settings of the fastText library. The dataset was divided into 80% training, 10% validation (for parameter optimization), and 10% test examples. Table 3 shows the parameter settings and classification accuracy results. The baseline method labels the given message as the majority class (support for distancing and reject for face masks).
126
+
127
+ [figure]
128
+
129
+ Figure 1: Daily average polarity scores of all Dutch tweets, the COVID-19 topic related tweets and two measures-related tweets.
130
+
131
+                      distancing  face masks
+ vector length        10          300
+ learning rate        0.2         0.2
+ epochs               10          10
+ baseline accuracy    0.56        0.42
+ validation accuracy  0.65        0.56
+ test accuracy        0.65        0.55
154
+
155
+ Table 3: Stance classification parameters and results
156
+
157
+ An additional experiment was performed to investigate the effect of training set size on classification accuracy. As shown in Table 4, classification accuracy improved almost linearly with training set size; therefore, adding more annotated data is expected to increase the accuracy further.
158
+
159
+ training set size  distancing  face masks
+ 100                0.56        0.46
+ 200                0.56        0.49
+ 500                0.58        0.52
+ 1,000              0.60        0.55
+ 5,000              0.65        -
179
+
180
+ Table 4: The relation between training set size and classification performance, measured by accuracy
181
+
182
+ [figure]
183
+
184
+ Figure 2: Development of public support for the March policy on social distancing.
185
+
186
+ § 4.3 STANCE ANALYSIS APPLICATION
187
+
188
+ We applied the trained classifier for the social distancing topic to all data and present the results in Figure 2. A similar trend can be observed across the three platforms, which validates our stance analysis results (due to the small size of the Reddit data, its results may contain considerable noise). Public support is initially high in March, but gradually decreases in the following months (until June 15th). This is consistent with reports of the Dutch health authority RIVM (RIVM, 2020a). Figure 2 also shows increasing support for the social distancing measure after June 15th, which has been confirmed by national questionnaire results in September (RIVM, 2020b). This finding supports the validity and timeliness of our analysis.
189
+
190
+ The classifier trained on annotated face mask-related messages was also applied to the remaining face-mask messages from the three data sources. As shown in Figure 3, most tweets (95%) and most Nu.nl comments (90%) are against the policy. There were too few Reddit posts on this topic to obtain accurate measurements, but the stance was also less than 30% supportive. A possible reason for the stance being more supportive on Nu.nl than on Twitter is that comments on Nu.nl are actively moderated while tweets are not, leaving less room for trolls to attack government policy. Another reason could be the general prevalence of polarized opinions on Twitter (DiResta, 2018). This finding also indicates the importance of capturing public reactions across different social media sources.
191
+
192
+ § 5 DISCUSSION AND FUTURE WORK
193
+
194
+ We present the preliminary results of sentiment analysis of Dutch social media data related to the COVID-19 measures across three sources. We concentrate on public polarity and stance towards two measures taken by the Dutch government against the spread of the corona virus. We found that a large number of messages on Dutch online media were related to the COVID-19 pandemic and the polarity of those messages tended to be more negative than general messages.
195
+
196
+ [figure]
197
+
198
+ Figure 3: Development of public support for the March policy on face mask wearing.
199
+
200
+ We assessed the stance of Dutch social media messages on the national advice on social distancing and wearing face masks. The analysis showed that people widely supported social distancing when the measure was announced in March; support then declined until June and increased again recently. We think this phenomenon may be related to the pandemic situation in the Netherlands (the number of reported COVID-19 patients has decreased and increased during this time as well). With respect to face masks, the analysis showed that the public thinks face masks are useful (contrary to the governmental measure). The rejection rates remained relatively stable over the past months.
201
+
202
+ In future work we would like to improve the performance of our stance classification approach. As shown in Table 4, training set size restricts the performance of the classifier. Therefore we will start annotating more data using crowd-sourcing techniques. Furthermore, we aim to train a general cross-topic stance classifier to assess new topics. Transfer learning (Zarrella and Marsh, 2016) could be an interesting approach for this task.
203
+
204
+ From a practical perspective, we are interested in the explainability of our stance analysis results. We are collaborating with social scientists, who use quantitative methods (e.g., digital questionnaires and interviews) to investigate the influence of policy measures on the general public. A comparison study is planned to validate and explain the temporal trends of Dutch public sentiment.
205
+
206
+ § ACKNOWLEDGEMENTS
207
+
208
+ This work is funded by the Netherlands eScience Center under the project PuReGoMe (27020S04).
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/4LIJshtHlnk/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,323 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Quantifying the Effects of COVID-19 on Mental Health Support Forums
2
+
3
+ Laura Biester*, Katie Matton*, Janarthanan Rajendran,
4
+
5
+ Emily Mower Provost, Rada Mihalcea
6
+
7
+ Computer Science & Engineering, University of Michigan, USA
8
+
9
+ {lbiester, katiemat, rjana, emilykmp, mihalcea}@umich.edu
10
+
11
+ ## Abstract
12
+
13
+ The COVID-19 pandemic, like many of the disease outbreaks that have preceded it, is likely to have a profound effect on mental health. Understanding its impact can inform strategies for mitigating negative consequences. In this work, we seek to better understand the effects of COVID-19 on mental health by examining discussions within mental health support communities on Reddit. First, we quantify the rate at which COVID-19 is discussed in each community, or subreddit, in order to understand levels of pandemic-related discussion. Next, we examine the volume of activity in order to determine whether the number of people discussing mental health has risen. Finally, we analyze how COVID-19 has influenced language use and topics of discussion within each subreddit.
14
+
15
+ ## 1 Introduction
16
+
17
The implications of COVID-19 extend far beyond its immediate physical health effects. Uncertainty and fear surrounding the disease and its effects, in addition to a lack of consistent and reliable information, contribute to rising levels of anxiety and stress (Torales et al., 2020). Policies designed to help contain the disease also have significant consequences. Social distancing policies and lockdowns lead to increased feelings of isolation and uncertainty (Huremović, 2019). They have also triggered an economic downturn (Şahin et al., 2020), resulting in soaring unemployment rates and causing many to experience financial stress. Therefore, in addition to the profound effects on physical health around the world, psychiatrists have warned that we should also brace for a mental health crisis as a result of the pandemic (Qiu et al., 2020; Greenberg et al., 2020; Yao et al., 2020; Torales et al., 2020).

Indeed, the literature on the impact of past epidemics indicates that they are associated with a myriad of adverse mental health effects. In a review of studies on the 2002-2003 SARS outbreak, the 2009 H1N1 influenza outbreak, and the 2018 Ebola outbreak, Chew et al. (2020) found that anxiety, fear, depression, anger, guilt, grief, and post-traumatic stress were all commonly observed psychological responses. Furthermore, many of the factors commonly cited for inducing these responses are applicable to the COVID-19 setting. These include: fear of contracting the disease, a disruption in daily routines, isolation related to being quarantined, and uncertainty regarding the disease treatment process and outcomes, the well-being of loved ones, and one's economic situation.

While disease outbreaks pose a risk to the mental health of the general population, research suggests that this risk is heightened for those with preexisting mental health concerns. People with mental health disorders are particularly susceptible to experiencing negative mental health consequences during times of social isolation (Usher et al., 2020). Further, as Yao et al. (2020) warn, they are likely to have a stronger emotional response to the feelings of fear, anxiety, and depression that come along with COVID-19 than the general population.

Given the potential for the COVID-19 outbreak to have devastating consequences for mental health, it is critical that we work to understand its psychological effects. In this work, we use Reddit, a popular social media platform, to study how COVID-19 has impacted the behavior of groups of users who express mental health concerns. We analyze the content of discussions (COVID-related discussions, psycholinguistic categories, and topics) as well as the volume of communication (daily user count) and find notable changes in each category. Some of these changes appear in multiple mental health subreddits, but some are more specific to individual communities that relate to specific diagnoses. We believe that our findings can help us better understand and potentially alleviate the negative mental health effects of the pandemic; for instance, this type of analysis could help moderators to more effectively support users through future crises. To the best of our knowledge, the method that we propose has not been used previously to study changes in mental health subreddits, and could be applied to understand the effects of other major events like political elections and natural disasters.

---

*Denotes equal contribution.

---

## 2 Related Work

### 2.1 Linguistic Analysis and Mental Health

There is a considerable body of research that examines the relationship between language use and mental health, including work dating back several decades. For example, Bucci and Freedman (1981) and Weintraub (1981) observed an increased usage of first person singular pronouns in individuals with depression. Oxman et al. (1982) showed that they could distinguish between paranoia and depression by applying linguistic analysis to speech.

Since then, advances in tools for text analytics have led to increased research in this area. Notably, the Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) is a widely used computerized text analysis tool that has been validated for psycholinguistic analysis. Some of the earliest studies using LIWC analyzed written text. For instance, researchers have used LIWC to study linguistic patterns in essays written by college students with and without depression (Rude et al., 2004) or in poems written by suicidal vs non-suicidal poets (Stirman and Pennebaker, 2001). More recently, there has been a proliferation of studies applying LIWC to online text, including social media data. LIWC has been used to study language patterns on social media for a variety of mental health disorders, including depression, anxiety, suicidality, and bipolar disorder (De Choudhury et al., 2013; Shen and Rudzicz, 2017; Coppersmith et al., 2014, 2016). In addition to LIWC, other methods used to study the linguistic patterns of mental illness include character and word models (Coppersmith et al., 2014; Tsugawa et al., 2013) and topic modeling (Resnik et al., 2015; Preotiuc-Pietro et al., 2015).

### 2.2 Studying Mental Health via Social Media

In the past decade, social media has emerged as a powerful tool for understanding human behavior, and correspondingly mental health. A growing number of studies have applied computational methods to data collected from social media platforms in order to characterize behavior associated with mental health illnesses and to detect and forecast mental health outcomes (see Chancellor and De Choudhury (2020) for a comprehensive review).

Reddit is a particularly well-suited platform for studying mental health due to its semi-anonymous nature, which encourages user honesty and reduces inhibitions associated with self-disclosure (De Choudhury and De, 2014). Additionally, Reddit contains subreddits that act as mental health support forums (e.g., r/Anxiety, r/depression, r/SuicideWatch), which enable a more targeted analysis of users experiencing different mental health conditions. A number of existing works have focused on characterizing patterns of discourse within these mental health communities on Reddit. These include studies that have analyzed longitudinal trends in topic usage and word choice (Chakravorti et al., 2018), the relationship between user participation styles and topic usage (Feldhege et al., 2020), and the discourse patterns specific to self-disclosure, social support, and anonymous posting (Pavalanathan and De Choudhury, 2015; De Choudhury and De, 2014).

Other studies of Reddit mental health communities have aimed to quantify and forecast changes in user behavior. De Choudhury et al. (2016) presented a model for predicting the likelihood that users transition from discussing mental health generally to engaging in suicidal ideation. Li et al. (2018) analyzed linguistic style measures associated with increasing vs decreasing participation in mental health subreddits over the course of a year. Kumar et al. (2015) examined how posting activity in r/SuicideWatch changes following a celebrity suicide. Our work similarly focuses on analyzing temporal patterns in user activity, but we aim to characterize changes associated with COVID-19.

### 2.3 Mental Health and COVID-19

Since the first cases of COVID-19 were reported in December 2019, there have been a number of preliminary studies of its impact on mental health. In a survey of the general public of China, a majority of respondents perceived the psychological impact of the outbreak to be moderate-to-severe and about one-third reported experiencing moderate-to-severe anxiety (Wang et al., 2020). Studies of the impact of COVID-19 among residents of Liaoning Province, China (Zhang and Ma, 2020) and the adult Indian population (Roy et al., 2020) also found notable rates of mental distress.

There is a set of studies that have examined the mental health consequences of COVID-19 by analyzing online behaviors. Jacobson et al. (2020) explored the short-term impact of stay-at-home orders in the United States by analyzing changes in the rates of mental health-related Google search queries immediately after orders were issued. Their results showed that rates of mental health queries increased leading up to the issuance of stay-at-home orders, but then plateaued after they went into effect; however, they did not consider the longer-term implications of the stay-at-home orders on mental health. Li et al. (2020) measured psycholinguistic attributes of posts on Weibo, a Chinese social media platform, before and after the Chinese National Health Commission declared COVID-19 to be an epidemic. Their findings showed that expressions of negative emotions and sensitivity to social risks increased following the declaration. Wolohan (2020) used a Long Short-Term Memory model to classify depression among Reddit users in April 2020, finding a higher than normal depression rate.

Our work similarly aims to measure changes in online behavior as a means of understanding the relationship between COVID-19 and mental health. However, two notable differences are: (1) instead of analyzing the short-term impact of a specific COVID-related event, we examine more general changes that have occurred during a three-month period of the outbreak; and (2) we focus our analysis on activity within mental health forums, which allows us to examine the impact of COVID-19 specifically on individuals who have expressed mental health concerns.

## 3 Data

We collect Reddit posts from three mental health subreddits using the Pushshift API$^{1}$ (Baumgartner et al., 2020): r/Anxiety, r/depression, and r/SuicideWatch, from January 2017 to May 2020. The reasons for analyzing these three subreddits are twofold: first, over the three and a half years represented in our data, these subreddits have a significant amount of activity ($\geq$ 40 posts every day), making it feasible to treat daily values as a time series. Second, because the subreddits provide support for different mental health disorders, their users may have been affected differently by COVID-19. We separate the data into two time periods: pre-COVID (January 1, 2017 - February 29, 2020) and post-COVID (March 1, 2020 - May 31, 2020), roughly delineating when COVID-19 began to have a serious impact on those in the United States, where the majority of Reddit users are concentrated.$^{2}$ This choice of dates was informed by our analysis of the rates at which COVID-19 related words were discussed in each subreddit (see Section 5.1), which we found hovered around 0-5% before rising sharply near the beginning of March.

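A minimal sketch of this collection step, written against the historical public Pushshift search endpoint. The URL and the `subreddit`/`after`/`before`/`size` parameters follow Pushshift's documented API, but paging by `created_utc` is just one common pattern, and error handling is omitted:

```python
import json
import urllib.parse
import urllib.request

# Historical public Pushshift submission-search endpoint.
PUSHSHIFT_URL = "https://api.pushshift.io/reddit/search/submission/"

def build_query(subreddit, after_utc, before_utc, size=100):
    """Query parameters for one page of submissions in [after, before]."""
    return {
        "subreddit": subreddit,
        "after": after_utc,        # Unix timestamps bounding the window
        "before": before_utc,
        "size": size,              # submissions per request
        "sort": "asc",
        "sort_type": "created_utc",
    }

def fetch_page(params):
    """Request one page and return the list of submission dicts."""
    url = PUSHSHIFT_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())["data"]

def fetch_all(subreddit, after_utc, before_utc):
    """Page through a window by advancing `after` to the last timestamp seen."""
    posts = []
    while True:
        batch = fetch_page(build_query(subreddit, after_utc, before_utc))
        if not batch:
            return posts
        posts.extend(batch)
        after_utc = batch[-1]["created_utc"]
```

Each returned submission carries fields such as `author`, `title`, `selftext`, and `created_utc`, which feed the metrics described in Section 4.1.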
<table><tr><td></td><td>r/Anxiety</td><td>r/depression</td><td>r/SuicideWatch</td></tr><tr><td>2017</td><td>95</td><td>279</td><td>91</td></tr><tr><td>2018</td><td>164</td><td>449</td><td>188</td></tr><tr><td>2019</td><td>211</td><td>622</td><td>285</td></tr><tr><td>2020</td><td>243</td><td>618</td><td>370</td></tr></table>

Table 1: Average number of posts per day across the three subreddits in our dataset.

We exclude posts where the author or text is marked as '[removed]' or '[deleted]', because posts with deleted authors offer no value for user count metrics, and deleted content means that we are unable to capture linguistic signals (see Section 4.1 for more details on these metrics). Table 1 shows the average number of daily posts for r/Anxiety, r/depression, and r/SuicideWatch.

## 4 Methodology

Our goal is to identify how mental health subreddit activity has changed during the pandemic. We first create time series for a number of metrics that could be affected by the pandemic, encompassing activity levels and text content (Section 4.1). We then use a time series intervention analysis technique to determine whether there are significant changes in our metrics during the pandemic (Section 4.2).

### 4.1 Reddit Activity Metrics

We begin by creating a lexicon of words that are commonly used to refer to COVID-19. This allows us to determine the extent to which users in each subreddit are discussing COVID-19, and also gives us a clearer idea of when COVID-19 began to directly affect discussion in the mental health subreddits. We based the lexicon on a set of Twitter search keywords from Huang et al. (2020), and added six additional words that we believed would be indicative of discussion about COVID-19 (see the full lexicon in Appendix A).

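As a sketch, the daily rate of COVID-related posts can be computed with word-boundary matching against such a lexicon. The five keywords below are only an illustrative subset; the full lexicon appears in Appendix A:

```python
import re
from collections import defaultdict

# Hypothetical mini-lexicon for illustration; the paper's full lexicon
# extends the Twitter keywords of Huang et al. (2020).
COVID_LEXICON = {"covid", "coronavirus", "pandemic", "quarantine", "lockdown"}

# One word-boundary pattern, so e.g. "covid-19" matches as a whole word.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, sorted(COVID_LEXICON))) + r")\b", re.I
)

def daily_covid_rate(posts):
    """posts: iterable of (date, text) pairs.
    Returns {date: % of that day's posts mentioning any lexicon word}."""
    total = defaultdict(int)
    hits = defaultdict(int)
    for date, text in posts:
        total[date] += 1
        if PATTERN.search(text):
            hits[date] += 1
    return {d: 100.0 * hits[d] / total[d] for d in total}
```

Reporting a percentage rather than a raw count keeps the metric comparable across subreddits of very different sizes.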
---

$^{1}$ As with other social media datasets, there may be noise in the form of API changes and data removed after collection. For the dates involved in our study, static Pushshift dump files were not yet available.

$^{2}$ https://www.alexa.com/siteinfo/reddit.com

---

To study changes in the number of users seeking mental health support in subreddits, we record the author usernames for each post in our dataset. Since individuals can create multiple accounts under different usernames, the number of unique usernames associated with posts is likely not equal to the true number of unique users; however, it is a reasonable proxy.

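This user-count metric reduces to counting distinct usernames per day; a minimal sketch, assuming removed/deleted posts have already been filtered out as described in Section 3:

```python
from collections import defaultdict

def daily_unique_users(posts):
    """posts: iterable of (date, author) pairs for non-deleted posts.
    Counts distinct usernames per day; as noted above, usernames are a
    proxy for users, since one person may hold several accounts."""
    users = defaultdict(set)
    for date, author in posts:
        users[date].add(author)
    return {d: len(s) for d, s in users.items()}
```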
To study changes in content that occur during the pandemic, we use the LIWC lexicon (Pennebaker et al., 2015) and Latent Dirichlet Allocation (LDA) topic modeling (Blei et al., 2003). The LIWC lexicon consists of seventy-three hierarchical psycholinguistic word categories, encapsulating properties including linguistic categories (e.g., 1st person plural pronouns, verbs), emotions (e.g., anxiety, sadness), time (e.g., present, future), and personal concerns (e.g., work, money, death). To capture the discussion topics that are common in the r/Anxiety, r/depression, and r/SuicideWatch subreddits specifically, we train a topic model on posts from these subreddits. We ensure that discussions from each of the subreddits are equally represented in our training dataset by randomly downsampling the posts from the subreddits with more data. We use the implementation of LDA topic modeling provided in the MALLET toolkit (McCallum, 2002) and train models with $k = 5, 10, \ldots, 40$ topics. We select a single model to use in our analysis by examining their coherence scores, a measure of the semantic similarity of high probability words within each topic (Mimno et al., 2011). As coherence scores tend to increase with increasing $k$, we select $k$ as the first local maximum of coherence scores, which we found to be $k = 25$.

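The selection rule ("the first local maximum of coherence scores") can be made concrete as follows; the coherence values themselves would come from evaluating each trained model (e.g., with the measure of Mimno et al. (2011)):

```python
def select_num_topics(ks, coherences):
    """Return the k at the first local maximum of the coherence curve.

    ks: candidate topic counts in increasing order, e.g. [5, 10, ..., 40].
    coherences: one coherence score per candidate k.
    """
    for i, c in enumerate(coherences):
        rises_into = i == 0 or coherences[i - 1] < c
        falls_after = i == len(coherences) - 1 or c > coherences[i + 1]
        if rises_into and falls_after:
            return ks[i]
    return ks[-1]  # e.g. a flat curve: fall back to the largest k
```

For example, with the hypothetical curve `select_num_topics([5, 10, 15, 20, 25, 30], [0.30, 0.35, 0.34, 0.38, 0.40, 0.39])`, the first peak is at k = 10, so 10 is returned.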
In Appendix B, we show the 25 topics obtained from our topic model, along with the highest probability words associated with each topic. We also provide labels that summarize the essence of each topic, which we created by examining their representative words. Common themes of discussion include: daily life concerns (e.g., school, work, sleep and routine), personal relationships (e.g., friends, family, relationships), and mental health struggles (e.g., anxiety, suicide, medical treatment).

When using text from posts, we remove special characters and sequences, such as newlines, quotes, emails, and tables. To represent the text of a post, we concatenate the title with the text content, as was done in prior work (Chakravorti et al., 2018). We apply additional pre-processing steps for our topic modeling analysis: (1) we remove a set of common stopwords that do not appear in the LIWC lexicon (we kept those in LIWC as they have been found to have psychological meaning), (2) we form bigrams from pairs of words that commonly appear together, and (3) we lemmatize each word.

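A sketch of the cleaning and stopword steps. The stopword list here is an illustrative stand-in; the paper removes common stopwords except those in LIWC, then forms bigrams and lemmatizes, e.g., with gensim's `Phrases` and an off-the-shelf lemmatizer:

```python
import re

# Illustrative stopword list; in the paper, stopwords that appear in LIWC
# (e.g. pronouns) are deliberately retained for their psychological signal.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "was"}

def clean_post(title, text):
    """Concatenate title and body, strip special characters/sequences,
    lowercase, and drop stopwords, returning a token list."""
    doc = f"{title} {text}"
    doc = re.sub(r"\s+", " ", doc)                    # newlines, tabs -> space
    doc = re.sub(r"[^a-z0-9\s']", " ", doc.lower())   # drop special characters
    return [tok for tok in doc.split() if tok not in STOPWORDS]
```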
### 4.2 Time Series Analysis

We treat the task of identifying changes in subreddit activity patterns as a time series intervention analysis problem. Our basic approach involves: (1) fitting a time series model to the pre-COVID observations for each of the metrics described above and then (2) examining how the values forecasted by the model compare to the observed values during the post-COVID time period. It is worth noting that the one study we found examining the impact of an event on activity within mental health subreddits employs a different approach: they use a t-test to compare the observations from "before" vs "after" the event (Kumar et al., 2015). However, their problem setup differs from ours in that they consider a much shorter period of time (four weeks total), so the effects of seasonality (regular changes that recur each year) and longer-term trends are likely reduced. In contrast, we find that there is often a strong trend over time and seasonal component in our data, making a direct comparison of two time periods with a t-test unreliable.

We smooth each time series and remove day-of-week related fluctuations by computing a seven-day rolling mean over the time series. We use the Prophet model (Taylor and Letham, 2018) to create a model of the period before COVID-19. This model was initially created by Facebook to forecast time series on their platform, such as the number of events created per day or the number of active users; we find that our time series, also compiled from social media, have many similar properties. The Prophet model is an additive regression model with three components:

$$
y(t) = g(t) + s(t) + h(t) + \epsilon_t \tag{1}
$$

The trend is encapsulated by $g(t)$, a piecewise linear model. The seasonality of the data is captured by $s(t)$, which is approximated using a Fourier series. As we smooth our data on a weekly basis, we utilize only yearly seasonality, excluding the optional weekly and daily seasonality components. The third term, $h(t)$, represents holidays; we find that adding the default list of US holidays provided by Prophet reduces error for most of our time series in the pre-COVID period, likely because the Reddit population is centered in the United States. Finally, $\epsilon_t$ represents the error, in this case fluctuations in the time series that are not captured by the model.

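Concretely, the smoothing step looks like the function below; the Prophet fit itself is sketched in the trailing comment, assuming the `prophet` package (the constructor flags, `add_country_holidays`, and the `ds`/`y`/`yhat` column conventions are Prophet's real API):

```python
def rolling_mean(values, window=7):
    """Seven-day rolling mean used to smooth each daily series and wash out
    day-of-week effects; only full windows are returned."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Fitting the pre-COVID model (sketch):
#
#   from prophet import Prophet
#   m = Prophet(interval_width=0.95,       # 95% prediction interval
#               yearly_seasonality=True,   # s(t)
#               weekly_seasonality=False,  # removed by the weekly smoothing
#               daily_seasonality=False)
#   m.add_country_holidays(country_name="US")   # h(t)
#   m.fit(pre_covid_df)     # DataFrame with columns ds (date) and y (metric)
#   forecast = m.predict(post_covid_dates)  # yhat, yhat_lower, yhat_upper
```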
After training the model on the pre-COVID data, we predict values for the post-COVID period. If we assume that there is no change during this time period, we would expect the predicted values to be near the true values, given that the model does a good job fitting the trend and seasonal components. The model computes uncertainty intervals over the predicted values by simulating ways in which the trend may change during the period of the forecast. We use this method to compute the 95% prediction interval. Our null hypothesis is that there has been no change in trend. In this case, we would expect 5% of the data in the post-COVID period to fall outside of the prediction interval. Our alternative hypothesis is that there was a change in the trend of the time series (which may be attributable to COVID-19). In this case, more than 5% of the data in the post-COVID period will fall outside of the prediction interval. We apply a one-sample proportion test to assess whether the proportion of observations outside of the prediction interval in the post-COVID period is significantly greater than 5%. The details of this test are in Appendix C.

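One concrete form of this test is a one-sided z-test for a single proportion under the normal approximation; the exact formulation the paper uses is spelled out in Appendix C, so treat this as an illustrative stand-in:

```python
import math

def outlier_proportion_test(n_outside, n_total, p0=0.05):
    """One-sided test of H0: p = p0 against H1: p > p0.

    n_outside: post-COVID days falling outside the prediction interval.
    n_total: total post-COVID days. Returns (z, p_value).
    """
    p_hat = n_outside / n_total
    se = math.sqrt(p0 * (1 - p0) / n_total)      # std. error under H0
    z = (p_hat - p0) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # upper tail of N(0, 1)
    return z, p_value
```

For a roughly 92-day post-COVID window, even ~15 outlier days already puts the observed proportion far above the 5% expected under the null.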
## 5 Results and Discussion

### 5.1 How often do people in different mental health subreddits discuss COVID-19?

Using our COVID-19 lexicon (Section 4.1), we compute the percentage of posts per day that mention any words related to COVID-19, as shown in Figure 1. We see that COVID-19 began to have a serious impact on discussions in all three subreddits around the beginning of March 2020, as is clear from the spikes in Figure 1. Although COVID-19 is discussed on all subreddits, we see a stark difference in the volume of discussion across each of them; in r/Anxiety, discussion of COVID-19 is more frequent than it is in r/depression or r/SuicideWatch, and begins earlier.

![01963dfe-b410-7184-a1e9-090be76a5eca_4_851_175_608_230_0.jpg](images/01963dfe-b410-7184-a1e9-090be76a5eca_4_851_175_608_230_0.jpg)

Figure 1: Percent of posts mentioning COVID-19 related words across mental health subreddits.

**Discussion** When choosing the date to consider as the beginning of the post-COVID period in our time series analysis, we considered March 1st, 2020 as a sensible date, as it aligns with the time at which the United States (where the majority of Reddit users reside) began to take COVID-19 seriously. March 1st closely followed the first announced COVID-19 death in the United States on February 28th, 2020, and preceded state lockdowns and school closures. The spikes at the beginning of March suggest that this date also reflects the time at which COVID-19 began to have a notable impact on mental health subreddit discussions.

Although most COVID-19 related discussion started in March, we also see that a small spike in discussion rates occurred earlier in r/Anxiety. This suggests that users in this subreddit began to notice some impact from COVID-19 in late January, when reports of lockdowns in China first appeared in the news. Based on the early start and elevated rate of COVID-19 discussion within r/Anxiety, we conclude that all of our metrics are likely to be more strongly affected by COVID-19 in r/Anxiety.

### 5.2 Has COVID-19 changed the number of users seeking support in mental health subreddits?

We report the daily number of unique users who posted in each subreddit in Figure 2. We observe an increase in the number of users who posted in the r/Anxiety subreddit in the post-COVID period. Meanwhile, in both r/depression and r/SuicideWatch, we find significant decreases in the number of users who posted. In r/depression, we observe a substantial drop in posting rates around mid-March. Activity in this subreddit remains abnormally low into late-April, when it starts to revert back towards the forecasted values. In r/SuicideWatch, the drop in user activity is less extreme, and we see that the activity levels eventually return to their predicted values.

![01963dfe-b410-7184-a1e9-090be76a5eca_5_195_174_607_476_0.jpg](images/01963dfe-b410-7184-a1e9-090be76a5eca_5_195_174_607_476_0.jpg)

Figure 2: Daily active users over time. The grey line is the Prophet forecast, the shaded area is the 95% prediction interval, and the black line is the true value. Subreddits marked with * have a statistically significant percentage of outliers ($\alpha = 0.05$).

**Discussion** The increase in users posting on r/Anxiety is consistent with prior work that has found that epidemics often lead to increased rates of anxiety (Torales et al., 2020). One explanation for the reduction of activity within r/depression could be that fewer users are depressed and don't feel the need to post on the support forum. If this is the case, our findings contrast with prior work that found that depressive symptoms are commonly observed during pandemics (Chew et al., 2020). However, there are multiple possible alternatives; for example, depression can also cause people to socially withdraw (Mayo Clinic, 2018), so an increase in depression rates could lead to a reduction in posting activity. Another finding from prior work is that delayed depression is common following disaster events (Pennebaker and Harber, 1993; Nandi et al., 2009). Our analysis covers only the beginning of the pandemic, so it likely wouldn't capture this phenomenon. Additional analysis focused on the causes driving the reduction in activity and how this pattern changes in the long-term is needed to make a more conclusive statement about the effects of COVID-19 on depression.

### 5.3 Has COVID-19 led to changes in the discussions users have surrounding mental health?

To determine what changes have occurred in conversations surrounding mental health, we use two types of features: LIWC categories and topics obtained from an LDA model. The LIWC features give us a better idea of how common language dimensions have changed, while the LDA-derived topics allow us to explore areas of discussion that are typically of concern in these subreddits. LIWC has been used extensively in mental health analysis, and there are some LIWC categories and LDA-derived topics that overlap, such as ANXIETY, DEATH, and FAMILY, but there are also unique categories covered by each method, such as WE and MOTIVATION. For each of the metrics, we examine changes that have occurred since COVID-19 by computing the proportion of outliers produced by our forecasting model (see Section 4.2) in the post-COVID period. We acknowledge that this analysis may occasionally capture misleading changes. For example, the DEATH keyword may yield changes in the SUICIDE topic (see Appendix B) that are actually related to infectious disease, and observing increased mentions of family does not indicate the polarity of their sentiment. We leave it to future work to do a more in-depth analysis of the context surrounding specific outliers that are detected.

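The figures reported per metric (percent of outlier days, plus an arrow for whether the outliers' mean sits above or below the forecast) can be reconstructed as below; the paper does not spell this computation out beyond its table captions, so this is a plausible reading rather than the exact code:

```python
def outlier_summary(observed, lower, upper, predicted):
    """Percent of days outside the prediction interval, and the direction
    ("up"/"down") in which the outliers' mean shifted from the forecast."""
    flagged = [
        (y, yhat)
        for y, lo, hi, yhat in zip(observed, lower, upper, predicted)
        if y < lo or y > hi
    ]
    pct = 100.0 * len(flagged) / len(observed)
    if not flagged:
        return pct, None
    mean_outlier = sum(y for y, _ in flagged) / len(flagged)
    mean_forecast = sum(yhat for _, yhat in flagged) / len(flagged)
    return pct, "up" if mean_outlier > mean_forecast else "down"
```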
<table><tr><td colspan="3">r/Anxiety</td><td colspan="3">r/depression</td><td colspan="3">r/SuicideWatch</td></tr><tr><td>Category</td><td colspan="2">% Outliers</td><td>Category</td><td colspan="2">% Outliers</td><td>Category</td><td colspan="2">% Outliers</td></tr><tr><td>Motion*</td><td>79</td><td>↓</td><td>You*</td><td>55</td><td>↓</td><td>Prep*</td><td>33</td><td>↑</td></tr><tr><td>Work*</td><td>73</td><td>↓</td><td>Conj*</td><td>51</td><td>↓</td><td>Space*</td><td>33</td><td>↑</td></tr><tr><td>I*</td><td>68</td><td>↓</td><td>Motion*</td><td>45</td><td>↓</td><td>Netspeak*</td><td>23</td><td>↑</td></tr><tr><td>Body*</td><td>61</td><td>↑</td><td>Quant*</td><td>43</td><td>↑</td><td>Assent*</td><td>23</td><td>↓</td></tr><tr><td>PPron*</td><td>54</td><td>↓</td><td>Family*</td><td>40</td><td>↑</td><td>Informal*</td><td>22</td><td>↑</td></tr><tr><td>Relativ*</td><td>54</td><td>↓</td><td>Article*</td><td>39</td><td>↓</td><td>Cause*</td><td>20</td><td>↑</td></tr><tr><td>We*</td><td>50</td><td>↑</td><td>Pronoun</td><td>38</td><td>↑</td><td>Affiliation</td><td>17</td><td>↓</td></tr><tr><td>Bio*</td><td>49</td><td>↑</td><td>Reward*</td><td>36</td><td>↓</td><td>FocusFuture</td><td>16</td><td>↓</td></tr><tr><td>Percept*</td><td>42</td><td>↑</td><td>Feel*</td><td>35</td><td>↓</td><td>NegEmo</td><td>15</td><td>↑</td></tr><tr><td>Certain*</td><td>41</td><td>↑</td><td>FocusPast</td><td>33</td><td>↑</td><td>Conj</td><td>15</td><td>↓</td></tr></table>

Table 2: Ten LIWC categories with the highest proportion of outliers in each subreddit. Arrows mark the direction in which the mean of the outliers shifted from the predicted mean. Categories marked with * have a statistically significant percentage of outliers (with Bonferroni correction; $\alpha = 0.05$ before correction).

#### 5.3.1 LIWC Analysis

In Table 2, we show the ten LIWC categories with the most outliers (outside of the 95% prediction interval) from March to May of 2020. We observe a lack of consistency between the subreddits, both in the number and direction of the outliers.

We see decreases in r/Anxiety and r/depression in the MOTION category. Categories such as BIO and BODY tend to increase in r/Anxiety; however, this pattern is not present in other subreddits. We see consistent changes in time orientation (e.g., FOCUSPAST, FOCUSFUTURE) across subreddits; a higher focus on the past in r/depression, and a lower focus on the future in r/SuicideWatch. While it is not among the categories with the most outliers, there is a statistically significant drop in FOCUSFUTURE on r/Anxiety and r/depression. We also see changes in pronoun usage; the most notable and consistent change across the subreddits is that the usage of WE increases significantly, especially in the early period of COVID-19 (Figure 3a). While there is a significant decrease in I words in r/Anxiety, there is in fact an increase in r/depression (Figure 3b). Finally, we see a notable drop in the WORK category (Figure 3c).

![01963dfe-b410-7184-a1e9-090be76a5eca_6_196_169_1264_438_0.jpg](images/01963dfe-b410-7184-a1e9-090be76a5eca_6_196_169_1264_438_0.jpg)

Figure 3: Average daily percent of words across posts from a selection of LIWC categories over time. The grey line is the Prophet forecast, the shaded area is the 95% prediction interval, and the black line is the true value. Subreddits marked with * have a statistically significant percentage of outliers (with Bonferroni correction; $\alpha = 0.05$ before correction).

**Discussion** We see changes in some categories that appear to be directly related to the new experience of living during a global pandemic under social distancing rules; this includes the decrease in MOTION, which makes sense as people are traveling and moving around far less. The increase in categories such as BIO and BODY within r/Anxiety may reflect concerns regarding the physical health implications of COVID-19. Moreover, it appears that physical health concerns are especially salient for people who experience anxiety, as the rise in these categories is not present in the other subreddits. The statistically significant drop in FOCUSFUTURE within r/Anxiety and r/depression indicates that users are less inclined to speak about their concerns for the future in light of the more pressing current concerns related to the pandemic.

The sharp increase in WE (Figure 3a) indicates a general feeling of community and togetherness, which speaks positively to the support that those in these mental health communities are getting during the pandemic. This finding aligns with a study by Zhang and Ma (2020) on the effects of COVID-19 on mental well-being in China, which found that participants received increased support from friends and family during the pandemic. In addition, seeking social support was listed as a common coping strategy during infectious disease outbreaks by Chew et al. (2020). An increase in "we" is not specific to mental health communities; researchers have found increases in usage of the pronoun during the early stages of COVID-19 on other subreddits (Ashokkumar and Pennebaker, 2020). The decrease in I words in r/Anxiety is accompanied by an increase in r/depression (Figure 3b). The increased usage of the I pronoun is concerning because it has been shown to correlate with depression, indicating that an increase in its use could be related to worsening symptoms (Rude et al., 2004).

The drop in discussion of WORK (Figure 3c) is unexpected, as the economic downturn could be a significant motivator of posts. The drop indicates that up to this point, the stress and change associated with adapting to working from home, or worse, losing one's job has not been a frequent topic of discussion in these forums. This drop could be due to a decrease in work-related stressors, which have been shown to cause anxiety and depression (Melchior et al., 2007; Cherry, 1978), or it could simply indicate that the stressors became secondary to other concerns. It is also possible that compared to the general population, Reddit users are more likely to have jobs that can be done remotely during the pandemic, as they are more likely to have college degrees than the general population.$^{3}$

#### 5.3.2 Topic Analysis

We report the ten topics with the highest proportion of outliers for each subreddit during the COVID-19 period (March to May 2020) in Table 3. One notable trend is an increase in the amount of discussion related to family; we find that the FAMILY AND HOME topic increased significantly in all three subreddits and the FAMILY AND CHILDREN topic increased significantly in r/depression. Figure 4b shows how the usage of the FAMILY AND HOME topic has changed since January 2019 within each subreddit. While there are noticeable increases in all three subreddits, we see a particularly large spike in r/Anxiety starting around mid-March. Within all subreddits, we see a significant decrease in the TRANSPORT AND DAILY LIFE topic (see Figure 4c), which is associated with words such as "drive," "car," "time," and "day". Mirroring the reduction of WORK-related language we observed in Section 5.3.1, we also find that there has been a significant decrease in discussion of the SCHOOL and WORK topics within the r/Anxiety and r/depression subreddits.

We observe significant changes in topics that are explicitly related to mental health. One of the most prominent trends is a significant increase in discussions of ANXIETY and its symptoms (keywords include: "panic," "heart," and "chest"). As seen in Figure 4a, we see a spike in ANXIETY in mid-March in all three subreddits; however, whereas we see a return to a typical level in both r/depression and r/SuicideWatch, within r/Anxiety, ANXIETY discussion rates have remained abnormally high all the way through the end of May. We find that both INFORMATION SHARING (keywords include: "post," "read," "share," "find," and "hope") and COMMUNICATION (keywords include: "talk," "call," and "message") have become more frequent topics of discussion.

**Discussion** Several of the results in Table 3 seem to reflect the disruption to normal daily life caused by COVID-19 and the resulting quarantine measures. This includes the increase in the FAMILY AND CHILDREN topic (Figure 4b), which is largely expected, as quarantine policies implemented to help contain COVID-19 have resulted in many people spending more time at home and with family than they previously had. Prior studies on disease outbreaks have found that uncertainty regarding the wellbeing of loved ones is a common source of anxiety during epidemics, which may help to explain this finding (Chew et al., 2020). Another contributing factor may be the emergence of new family responsibilities, such as childcare and home-schooling, that many people have had to take on in the face of closures caused by the pandemic. The decrease in the TRANSPORT AND DAILY LIFE topic (Figure 4c) is intuitive; quarantine practices following COVID-19 have led to a large reduction in driving and other forms of transportation (Domonoske and Adeline, 2020) and, more generally, to a disruption in daily lifestyles. To the extent that these results indicate an abandonment of routine, they are somewhat concerning, as evidence from prior outbreaks suggests that getting back into normal routines helps to reduce loneliness and anxiety during quarantines (Huremović, 2019). The decreases in discussion of the SCHOOL and WORK topics may indicate that these previously common sources of stress have now become secondary concerns compared to the more immediate concerns associated with COVID-19.

+ The increases in the ANXIETY topic, especially on r/Anxiety, are aligned with existing research that has found that anxiety and the somatic symptoms associated with it are common psychological responses to epidemics (Chew et al., 2020). Further, studies of prior epidemics have found that feelings of anxiety and fear can persist even after the disease itself has been contained (Usher et al., 2020).
160
+
161
+ The increase in the INFORMATION SHARING and COMMUNICATION topics may be tied to the effects of social distancing measures, which have limited in-person interactions and led people to increasingly turn to digital methods of communication. These observations may also reflect a desire to seek out information related to COVID-19; individuals who experience health anxiety are more likely to exhibit online health information seeking behavior (McMullan et al., 2019). The increase in mentions of words related to social media (e.g., "post," "share") is somewhat worrisome; studies of disaster events have found that both more frequent social media use and exposure to conflicting information online (a widely acknowledged issue with COVID-19 (Kouzy et al., 2020)) lead to higher stress levels (Torales et al., 2020). However, the rise of the INFORMATION SHARING topic, especially in its relation to words like "share," "hope," and "story," could also be indicative of a collective coping process, in which individuals come together for social support. As noted in Section 5.3.1, this type of coping strategy has frequently been observed during past disease outbreaks (Chew et al., 2020) and may also be reflected by the increase in the usage of WE we saw for discussions in r/Anxiety.
162
+
163
+ ---
164
+
165
+ ${}^{3}$ https://www.statista.com/statistics/517222/reddit-user-distribution-usa-education/
166
+
167
+ ---
168
+
169
+ <table><tr><td colspan="2">r/Anxiety</td><td colspan="2">r/depression</td><td colspan="2">r/SuicideWatch</td></tr><tr><td>Topic</td><td>% Outliers</td><td>Topic</td><td>% Outliers</td><td>Topic</td><td>% Outliers</td></tr><tr><td>Transport and Daily Life*</td><td>88↓</td><td>Family and Home*</td><td>83↑</td><td>Transport and Daily Life*</td><td>70↓</td></tr><tr><td>Anxiety*</td><td>75↑</td><td>Transport and Daily Life*</td><td>82↓</td><td>Family and Home*</td><td>50↑</td></tr><tr><td>Information Sharing*</td><td>68↑</td><td>Information Sharing*</td><td>62↑</td><td>Friends*</td><td>32↓</td></tr><tr><td>School*</td><td>62↓</td><td>Work*</td><td>55↓</td><td>Anxiety*</td><td>28↑</td></tr><tr><td>Family and Home*</td><td>48↑</td><td>Suicide*</td><td>50↓</td><td>Family and Children</td><td>22↑</td></tr><tr><td>Life and Philosophy*</td><td>34↑</td><td>Sleep and Routine*</td><td>50↓</td><td>"Game-over" Mentality and Swearing</td><td>15↑</td></tr><tr><td>Work*</td><td>33↓</td><td>Communication</td><td>48↑</td><td>Suicide</td><td>14↓</td></tr><tr><td>"Game-over" Mentality and Swearing*</td><td>29↓</td><td>"Game-over" Mentality and Swearing*</td><td>39↑</td><td>Worry</td><td>14↑</td></tr><tr><td>Experience and Mental State*</td><td>27↑</td><td>Medical Treatment</td><td>36↓</td><td>Communication</td><td>13↑</td></tr><tr><td>Motivation*</td><td>22↓</td><td>Family and Children*</td><td>34↑</td><td>People and Behavior</td><td>13↑</td></tr></table>
170
+
171
+ Table 3: Ten topics with the most outliers for r/Anxiety, r/depression, and r/SuicideWatch. Arrows mark the direction in which the mean of the outliers shifted from the predicted mean. Topics marked with * have a statistically significant percentage of outliers (with Bonferroni correction; $\alpha = {0.05}$ before correction).
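The outlier percentages and shift directions reported in Table 3 can be recomputed from any forecast that provides a prediction interval. The sketch below is our own illustration (not the authors' released code); it assumes arrays of observed daily topic probabilities for the post-COVID period, together with the forecast mean and the interval bounds that Prophet exposes as `yhat`, `yhat_lower`, and `yhat_upper`.

```python
import numpy as np

def outlier_summary(y_true, yhat, lower, upper):
    """Percentage of post-COVID days outside the prediction interval,
    plus the direction in which outliers shift from the forecast mean."""
    y_true = np.asarray(y_true, dtype=float)
    yhat = np.asarray(yhat, dtype=float)
    outside = (y_true < np.asarray(lower)) | (y_true > np.asarray(upper))
    pct = 100.0 * outside.mean()
    if not outside.any():
        return pct, None
    # Compare the mean of the outlying observations to the predicted mean.
    direction = "up" if (y_true[outside] - yhat[outside]).mean() > 0 else "down"
    return pct, direction
```

A topic with, say, 75% of days above `yhat_upper` would appear in Table 3 as "75↑".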
172
+
173
+ ![01963dfe-b410-7184-a1e9-090be76a5eca_8_195_627_1265_429_0.jpg](images/01963dfe-b410-7184-a1e9-090be76a5eca_8_195_627_1265_429_0.jpg)
174
+
175
+ Figure 4: Average daily posterior probability of selected topics in posts over time. The grey line is the Prophet forecast, the shaded area is the 95% prediction interval, and the black line is the true value. Subreddits marked with * have a statistically significant percentage of outliers (with Bonferroni correction; $\alpha = {0.05}$ before correction).
176
+
177
+ ## 6 Conclusions
178
+
179
+ In this study, we examined how COVID-19 has influenced the online behavior of individuals who discuss mental health concerns by analyzing activity within the r/Anxiety, r/depression, and r/SuicideWatch communities on Reddit. We found substantial evidence of increases in anxiety; we observed an increase in user activity in r/Anxiety, as well as significant increases in discussions of anxiety and the symptoms associated with it. Interestingly, we observed a decrease in activity within the r/depression and r/SuicideWatch subreddits. The literature on the impact of disease outbreaks on depression rates contains somewhat contradictory findings; we therefore believe that this is an interesting area for future work.
180
+
181
+ We also observed interesting changes in the content of discussions within each subreddit. Our results suggest that concerns related to COVID-19, such as health and family, have become more prominent discussion topics compared to other common concerns, such as work and school, which have generated relatively less discussion since the outbreak. While our findings largely confirm the warnings offered by psychiatrists regarding the potential for COVID-19 to have an adverse effect on mental health, we also found some reason for optimism; increases in the usage of WE as well as the INFORMATION SHARING topic (associated with words such as "story" and "hope"), suggest a heightened sense of community and shared experience, which may help individuals cope with these stressful times.
182
+
183
+ ## Acknowledgment
184
+
185
+ We are grateful to the Michigan AI lab for discussions that led to this project, and to the statistical consultants at CSCAR who helped with developing our process for hypothesis testing, especially Kerby Shedden and Thomas Fiore. This material is based in part on work supported by the Precision Health initiative at the University of Michigan, the NSF (grant #1815291), and the John Templeton Foundation (grant #61156). Any opinions, findings, conclusions, or recommendations in this material are those of the authors and do not necessarily reflect the views of the Precision Health initiative, the NSF, or the John Templeton Foundation.
186
+
187
+ ## References
188
+
189
+ Ashwini Ashokkumar and James W Pennebaker. 2020. Turning inward during crises: How COVID is changing our social ties. Accessed: 2020-05-05.
190
+
191
+ Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift Reddit dataset. Proceedings of the International AAAI Conference on Web and Social Media, 14(1):830-839.
192
+
193
+ David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993-1022.
194
+
195
+ Wilma Bucci and Norbert Freedman. 1981. The language of depression. Bulletin of the Menninger Clinic, 45(4):334.
196
+
197
+ Dante Chakravorti, Kathleen Law, Jonathan Gemmell, and Daniela Raicu. 2018. Detecting and characterizing trends in online mental health discussions. In 2018 IEEE International Conference on Data Mining Workshops (ICDMW), pages 697-706.
198
+
199
+ Stevie Chancellor and Munmun De Choudhury. 2020. Methods in predictive techniques for mental health status on social media: a critical review. NPJ digital medicine, 3(1):1-11.
200
+
201
+ Nicola Cherry. 1978. Stress, anxiety and work: A longitudinal study. Journal of Occupational Psychology, 51(3):259-270.
202
+
203
+ Qian Hui Chew, Ker Chiah Wei, Shawn Vasoo, Hong Choon Chua, and Kang Sim. 2020. Narrative synthesis of psychological and coping responses towards emerging infectious disease outbreaks in the general population: practical considerations for the COVID-19 pandemic. Singapore Medical Journal, (April): 1-31.
204
+
205
+ Glen Coppersmith, Mark Dredze, and Craig Harman. 2014. Quantifying mental health signals in Twitter. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 51-60, Baltimore, Maryland, USA. Association for Computational Linguistics.
208
+
209
+ Glen Coppersmith, Kim Ngo, Ryan Leary, and Anthony Wood. 2016. Exploratory analysis of social media prior to a suicide attempt. In Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology, pages 106-117, San Diego, CA, USA. Association for Computational Linguistics.
210
+
211
+ Munmun De Choudhury and Sushovan De. 2014. Mental health discourse on Reddit: Self-disclosure, social support, and anonymity. In Eighth international AAAI conference on weblogs and social media.
212
+
213
+ Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depression via social media. Seventh international AAAI conference on weblogs and social media, 13:1-10.
214
+
215
+ Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Discovering shifts to suicidal ideation from mental health content in social media. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, page 2098-2110, New York, NY, USA. Association for Computing Machinery.
216
+
217
+ Camila Domonoske and Stephanie Adeline. 2020. The pandemic emptied American roads. But driving is picking back up. NPR.
218
+
219
+ Johannes Feldhege, Markus Moessner, and Stephanie Bauer. 2020. Who says what? Content and participation characteristics in an online depression community. Journal of Affective Disorders, 263:521- 527.
220
+
221
+ Neil Greenberg, Mary Docherty, Sam Gnanapragasam, and Simon Wessely. 2020. Managing mental health challenges faced by healthcare workers during covid- 19 pandemic. BMJ, 368.
222
+
223
+ Xiaolei Huang, Amelia Jamison, David Broniatowski, Sandra Quinn, and Mark Dredze. 2020. Coronavirus Twitter Data: A collection of COVID-19 tweets with automated annotations. http://twitterdata.covid19dataresources.org/index.
224
+
225
+ Damir Huremović. 2019. Psychiatry of Pandemics: A Mental Health Response to Infection Outbreak. Springer.
226
+
227
+ Nicholas C Jacobson, Damien Lekkas, George Price, Michael V Heinz, Minkeun Song, A James O'Malley, and Paul J Barr. 2020. Flattening the mental health curve: COVID-19 stay-at-home orders are associated with alterations in mental health search behavior in the United States. JMIR mental health, 7(6):e19347.
228
+
229
+ Ramez Kouzy, Joseph Abi Jaoude, Afif Kraitem, Molly B El Alam, Basil Karam, Elio Adib, Jabra Zarka, Cindy Traboulsi, Elie W Akl, and Khalil Baddour. 2020. Coronavirus goes viral: quantifying the COVID-19 misinformation epidemic on Twitter. Cureus, 12(3).
234
+
235
+ Mrinal Kumar, Mark Dredze, Glen Coppersmith, and Munmun De Choudhury. 2015. Detecting Changes in Suicide Content Manifested in Social Media Following Celebrity Suicides. In Proceedings of the 26th ACM Conference on Hypertext & Social Media - HT '15, volume 176, pages 85-94, New York, New York, USA. ACM Press.
236
+
237
+ Sijia Li, Yilin Wang, Jia Xue, Nan Zhao, and Tingshao Zhu. 2020. The impact of COVID-19 epidemic declaration on psychological consequences: a study on active weibo users. International journal of environmental research and public health, 17(6):2032.
238
+
239
+ Yaoyiran Li, Rada Mihalcea, and Steven R Wilson. 2018. Text-based detection and understanding of changes in mental health. In International Conference on Social Informatics, pages 176-188. Springer.
240
+
241
+ Mayo Clinic. 2018. Depression (major depressive disorder) - symptoms and causes. Accessed: 2020-10-09.
242
+
243
+ Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit.
244
+
245
+ Ryan D McMullan, David Berle, Sandra Arnáez, and Vladan Starcevic. 2019. The relationships between health anxiety, online health information seeking, and cyberchondria: Systematic review and meta-analysis. Journal of affective disorders, 245:270-278.
246
+
247
+ Maria Melchior, Avshalom Caspi, Barry J Milne, Andrea Danese, Richie Poulton, and Terrie E Moffitt. 2007. Work stress precipitates depression and anxiety in young, working women and men. Psychological medicine, 37(8):1119-1129.
248
+
249
+ David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 262-272, Edinburgh, Scotland, UK. Association for Computational Linguistics.
250
+
251
+ Arijit Nandi, Melissa Tracy, John R Beard, David Vlahov, and Sandro Galea. 2009. Patterns and predictors of trajectories of depression after an urban disaster. Annals of epidemiology, 19(11):761-770.
252
+
253
+ Thomas E Oxman, Stanley D Rosenberg, and Gary J Tucker. 1982. The language of paranoia. The American journal of psychiatry.
254
+
255
+ Umashanthi Pavalanathan and Munmun De Choudhury. 2015. Identity management and mental health discourse in social media. In Proceedings of the 24th International Conference on World Wide Web, pages 315-321.
256
+
257
+ James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of LIWC2015. Technical report.
260
+
261
+ James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: LIWC 2001.
262
+
263
+ James W. Pennebaker and Kent D. Harber. 1993. A social stage model of collective coping: The Loma Prieta earthquake and the Persian Gulf War. Journal of Social Issues, 49(4):125-145.
264
+
265
+ Daniel Preotiuc-Pietro, Johannes Eichstaedt, Gregory Park, Maarten Sap, Laura Smith, Victoria Tobolsky, H. Andrew Schwartz, and Lyle Ungar. 2015. The role of personality, age, and gender in tweeting about mental illness. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 21-30, Denver, Colorado. Association for Computational Linguistics.
266
+
267
+ Jianyin Qiu, Bin Shen, Min Zhao, Zhen Wang, Bin Xie, and Yifeng Xu. 2020. A nationwide survey of psychological distress among Chinese people in the COVID-19 epidemic: implications and policy recommendations. General psychiatry, 33(2).
268
+
269
+ Philip Resnik, William Armstrong, Leonardo Claudino, Thang Nguyen, Viet-An Nguyen, and Jordan Boyd-Graber. 2015. Beyond LDA: Exploring supervised topic modeling for depression-related language in Twitter. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 99-107, Denver, Colorado. Association for Computational Linguistics.
270
+
271
+ Deblina Roy, Sarvodaya Tripathy, Sujita Kumar Kar, Nivedita Sharma, Sudhir Kumar Verma, and Vikas Kaushal. 2020. Study of knowledge, attitude, anxiety & perceived mental healthcare need in Indian population during COVID-19 pandemic. Asian Journal of Psychiatry, page 102083.
272
+
273
+ Stephanie S. Rude, Eva Maria Gortner, and James W. Pennebaker. 2004. Language use of depressed and depression-vulnerable college students. Cognition and Emotion, 18(8):1121-1133.
274
+
275
+ Judy Hanwen Shen and Frank Rudzicz. 2017. Detecting anxiety through Reddit. In Proceedings of the Fourth Workshop on Computational Linguistics and Clinical Psychology - From Linguistic Signal to Clinical Reality, pages 58-65, Vancouver, BC. Association for Computational Linguistics.
276
+
277
+ Shannon Wiltsey Stirman and James W Pennebaker. 2001. Word use in the poetry of suicidal and non-suicidal poets. Psychosomatic medicine, 63(4):517-522.
278
+
279
+ Sean J. Taylor and Benjamin Letham. 2018. Forecasting at Scale. The American Statistician, 72(1):37-45.
280
+
281
+ Julio Torales, Marcelo O'Higgins, João Mauricio Castaldelli-Maia, and Antonio Ventriglio. 2020. The outbreak of COVID-19 coronavirus and its impact on global mental health. International Journal of Social Psychiatry, 66(4):317-320.
282
+
283
+ Sho Tsugawa, Yukiko Mogi, Yusuke Kikuchi, Fumio Kishino, Kazuyuki Fujita, Yuichi Itoh, and Hiroyuki Ohsaki. 2013. On estimating depressive tendencies of Twitter users utilizing their tweet data. In 2013 IEEE Virtual Reality (VR), pages 1-4.
284
+
285
+ Kim Usher, Navjot Bhullar, and Debra Jackson. 2020. Life in the pandemic: Social isolation and mental health. Journal of Clinical Nursing, 29(15-16):2756-2757.
286
+
287
+ Cuiyan Wang, Riyu Pan, Xiaoyang Wan, Yilin Tan, Linkang Xu, Cyrus S Ho, and Roger C Ho. 2020. Immediate psychological responses and associated factors during the initial stage of the 2019 coronavirus disease (COVID-19) epidemic among the general population in China. International journal of environmental research and public health, 17(5):1729.
288
+
289
+ Walter Weintraub. 1981. Verbal behavior: Adaptation and psychopathology. Springer Publishing Company New York.
290
+
291
+ JT Wolohan. 2020. Estimating the effect of COVID-19 on mental health: Linguistic indicators of depression during a global pandemic. In ACL 2020 Workshop on Natural Language Processing for COVID-19. Association for Computational Linguistics.
292
+
293
+ Hao Yao, Jian-Hua Chen, and Yi-Feng Xu. 2020. Patients with mental health disorders in the COVID-19 epidemic. The Lancet Psychiatry, 7(4):e21.
294
+
295
+ Yingfei Zhang and Zheng Feei Ma. 2020. Impact of the COVID-19 pandemic on mental health and quality of life among local residents in Liaoning Province, China: A cross-sectional study. International journal of environmental research and public health, 17(7):2381.
296
+
297
+ Francis W Zwiers and Hans Von Storch. 1995. Taking serial correlation into account in tests of the mean. Journal of Climate, 8(2):336-351.
298
+
299
+ Ayşegül Şahin, Murat Tasci, and Jin Yan. 2020. The Unemployment Cost of COVID-19: How High and How Long? Economic Commentary (Federal Reserve Bank of Cleveland), pages 1-7.
300
+
301
+ ## A COVID-19 Lexicon
302
+
303
+ The following terms from Huang et al. (2020) are included in our COVID-19 lexicon: 2019-ncov, 2019ncov, coronavirus, COVID, COVID-19, COVID19, mers, sars, SARS2, SARSCOV19, wuflu, Wuhan. We add the following terms: corona, outbreak, pandemic, rona, sars-cov-2, virus. We ignore case when counting occurrences, and therefore exclude duplicate terms that only differ in their case.
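Case-insensitive lexicon matching of this kind can be sketched as follows. This is our own illustration, not the authors' code, and the token pattern is an assumption (the paper does not specify its tokenizer); hyphens are kept inside tokens so that terms like "sars-cov-2" match as single units.

```python
import re

# Lexicon terms from Huang et al. (2020) plus the added terms, lowercased.
COVID_TERMS = {
    "2019-ncov", "2019ncov", "coronavirus", "covid", "covid-19", "covid19",
    "mers", "sars", "sars2", "sarscov19", "wuflu", "wuhan",
    "corona", "outbreak", "pandemic", "rona", "sars-cov-2", "virus",
}

def mentions_covid(text: str) -> bool:
    """True if any lexicon term appears as a token in the lowercased text."""
    tokens = re.findall(r"[\w-]+", text.lower())
    return any(tok in COVID_TERMS for tok in tokens)
```

The per-post flag can then be aggregated by day to measure the rate of COVID-related discussion over time.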
304
+
305
+ ## B Topics identified by LDA Model
306
+
307
+ Table 4 shows the topics identified by the LDA model.
308
+
309
+ <table><tr><td>Topic Label</td><td>High Probability Words</td></tr><tr><td>School</td><td>school, year, college, high, class, fail, parent, study, grade, start</td></tr><tr><td>Relationships</td><td>love, relationship, girl, guy, good, girlfriend, break, date, meet, find</td></tr><tr><td>Experience and Mental State</td><td>experience, situation, mind, part, brain, lead, state, feeling, sense, learn</td></tr><tr><td>Communication</td><td>talk, call, time, phone, send, text, give, back, speak, message</td></tr><tr><td>People and Behavior</td><td>people, make, person, care, thing, understand, problem, wrong, act, attention</td></tr><tr><td>Feelings</td><td>happy, tired, cry, anymore, sad, make, hurt, depressed, stop, feeling</td></tr><tr><td>"Game-over" Mentality and Swearing</td><td>hate, fuck, shit, fucking, die, wanna, stupid, kill, literally, idk</td></tr><tr><td>Transport and Daily Life</td><td>drive, time, back, car, drink, start, walk, home, run, day</td></tr><tr><td>Time</td><td>year, month, start, time, back, ago, day, week, past, couple</td></tr><tr><td>Worry</td><td>thought, mind, fear, worry, head, afraid, scared, scare, stop, happen</td></tr><tr><td>Friends</td><td>friend, talk, people, good, social, play, make, close, hang, group</td></tr><tr><td>Anxiety</td><td>anxiety, attack, panic, anxious, heart, symptom, calm, chest, experience, stress</td></tr><tr><td>Medical Treatment</td><td>anxiety, medication, doctor, therapy, med, therapist, experience, week, mg, work</td></tr><tr><td>Body and Food</td><td>eat, body, eye, face, hand, head, sit, food, weight, walk</td></tr><tr><td>-</td><td>bad, thing, make, time, lot, happen, pretty, good, stuff, kind</td></tr><tr><td>Life and Philosophy</td><td>life, world, hope, exist, dream, human, live, pain, love, real</td></tr><tr><td>Depression and Mental Illness</td><td>depression, mental, issue, health, problem, struggle, deal, bad, suffer, year</td></tr><tr><td>Life Purpose</td><td>life, live, end, anymore, point, family, reason, care, worth, future</td></tr><tr><td>Motivation</td><td>thing, time, make, good, find, hard, work, enjoy, change, motivation</td></tr><tr><td>Work</td><td>work, job, money, pay, quit, find, afford, interview, company, month</td></tr><tr><td>Family and Children</td><td>year, family, mother, kid, parent, child, life, father, young, age</td></tr><tr><td>Information Sharing</td><td>post, read, write, find, hope, give, share, story, reddit, long</td></tr><tr><td>Family and Home</td><td>leave, mom, home, move, house, dad, family, live, parent, stay</td></tr><tr><td>Sleep and Routine</td><td>day, sleep, night, hour, wake, today, bed, morning, work, week</td></tr><tr><td>Suicide</td><td>kill, die, suicide, pain, suicidal, end, attempt, cut, plan, dead</td></tr></table>
310
+
311
+ Table 4: Topics identified by the LDA topic model. For each topic, we provide a summary label and the ten most probable words. We omit labels for topics whose keywords did not have a clear interpretation.
312
+
313
+ ## C Statistical Significance Test
314
+
315
+ We apply a one-sample proportion test to assess whether the proportion of observations outside of the prediction interval in the post-COVID period is significantly greater than 5%. This test assumes that the observations are independent; however, we find that there is order-1 autocorrelation in our data. We therefore apply a correction for order-1 autocorrelation (Zwiers and Von Storch, 1995) when computing the z-test statistic. The corrected test statistic is:
316
+
317
+ $$
+ z = \frac{\widehat{p} - {p}_{0}}{\sqrt{\frac{{p}_{0}\left( {1 - {p}_{0}}\right)}{n} \cdot \frac{1 + r}{1 - r}}} \tag{2}
+ $$
320
+
321
+ where $\widehat{p}$ is the proportion of observations outside of the prediction interval in the post-COVID period, ${p}_{0} = 0.05$, $n$ is the number of observations in the post-COVID period, and $r$ is the lag-1 correlation coefficient of the pre-COVID data.
322
+
323
+ We use a Bonferroni correction when determining statistical significance for our discussion content metrics, as we ran almost 300 tests: $M = 294$, the number of LIWC categories and topics multiplied by the number of subreddits. Our corrected $\alpha = 0.05/294 \approx {1.7} \times {10}^{-5}$.
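The corrected test can be written directly from Equation 2. The sketch below is our own illustration using only the Python standard library; the lag-1 coefficient `r` is assumed to have already been estimated from the pre-COVID series.

```python
from math import sqrt
from statistics import NormalDist

def corrected_prop_test(p_hat, n, r, p0=0.05):
    """One-sided z-test for p_hat > p0, with the variance inflated by
    (1 + r) / (1 - r) to account for lag-1 autocorrelation
    (Zwiers and Von Storch, 1995). Returns (z, p-value)."""
    se = sqrt(p0 * (1 - p0) / n * (1 + r) / (1 - r))
    z = (p_hat - p0) / se
    return z, 1 - NormalDist().cdf(z)

# With r = 0 this reduces to the ordinary one-sample proportion test.
z, p = corrected_prop_test(p_hat=0.70, n=90, r=0.2)
```

Under a Bonferroni correction, the resulting p-value would be compared against 0.05 / 294 rather than 0.05.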
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/4LIJshtHlnk/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,263 @@
1
+ § QUANTIFYING THE EFFECTS OF COVID-19 ON MENTAL HEALTH SUPPORT FORUMS
2
+
3
+ Laura Biester*, Katie Matton*, Janarthanan Rajendran,
4
+
5
+ Emily Mower Provost, Rada Mihalcea
6
+
7
+ Computer Science & Engineering, University of Michigan, USA
8
+
9
+ {lbiester, katiemat, rjana, emilykmp, mihalcea}@umich.edu
10
+
11
+ § ABSTRACT
12
+
13
+ The COVID-19 pandemic, like many of the disease outbreaks that have preceded it, is likely to have a profound effect on mental health. Understanding its impact can inform strategies for mitigating negative consequences. In this work, we seek to better understand the effects of COVID-19 on mental health by examining discussions within mental health support communities on Reddit. First, we quantify the rate at which COVID-19 is discussed in each community, or subreddit, in order to understand levels of pandemic-related discussion. Next, we examine the volume of activity in order to determine whether the number of people discussing mental health has risen. Finally, we analyze how COVID-19 has influenced language use and topics of discussion within each subreddit.
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ The implications of COVID-19 extend far beyond its immediate physical health effects. Uncertainty and fear surrounding the disease and its effects, in addition to a lack of consistent and reliable information, contribute to rising levels of anxiety and stress (Torales et al., 2020). Policies designed to help contain the disease also have significant consequences. Social distancing policies and lockdowns lead to increased feelings of isolation and uncertainty (Huremović, 2019). They have also triggered an economic downturn (Şahin et al., 2020), resulting in soaring unemployment rates and causing many to experience financial stress. Therefore, in addition to the profound effects on physical health around the world, psychiatrists have warned that we should also brace for a mental health crisis as a result of the pandemic (Qiu et al., 2020; Greenberg et al., 2020; Yao et al., 2020; Torales et al., 2020).
18
+
19
+ Indeed, the literature on the impact of past epidemics indicates that they are associated with a myriad of adverse mental health effects. In a review of studies on the 2002-2003 SARS outbreak, the 2009 H1N1 influenza outbreak, and the 2018 Ebola outbreak, Chew et al. (2020) found that anxiety, fear, depression, anger, guilt, grief, and post-traumatic stress were all commonly observed psychological responses. Furthermore, many of the factors commonly cited for inducing these responses are applicable to the COVID-19 setting. These include: fear of contracting the disease, a disruption in daily routines, isolation related to being quarantined, and uncertainty regarding the disease treatment process and outcomes, the well-being of loved ones, and one's economic situation.
20
+
21
+ While disease outbreaks pose a risk to the mental health of the general population, research suggests that this risk is heightened for those with preexisting mental health concerns. People with mental health disorders are particularly susceptible to experiencing negative mental health consequences during times of social isolation (Usher et al., 2020). Further, as Yao et al. (2020) warn, they are likely to have a stronger emotional response to the feelings of fear, anxiety, and depression that come along with COVID-19 than the general population.
22
+
23
+ Given the potential for the COVID-19 outbreak to have devastating consequences for mental health, it is critical that we work to understand its psychological effects. In this work, we use Reddit, a popular social media platform, to study how COVID-19 has impacted the behavior of groups of users who express mental health concerns. We analyze the content of discussions (COVID-related discussions, psycholinguistic categories, and topics) as well as the volume of communication (daily user count) and find notable changes in each category. Some of these changes appear in multiple mental health subreddits, but some are more specific to individual communities that relate to specific diagnoses. We believe that our findings can help us better understand and potentially alleviate the negative mental health effects of the pandemic; for instance, this type of analysis could help moderators to more effectively support users through future crises. To the best of our knowledge, the method that we propose has not been used previously to study changes in mental health subreddits, and could be applied to understand the effects of other major events like political elections and natural disasters.
24
+
25
+ *Denotes equal contribution.
26
+
27
+ § 2 RELATED WORK
28
+
29
+ § 2.1 LINGUISTIC ANALYSIS AND MENTAL HEALTH
30
+
31
+ There is a considerable body of research that examines the relationship between language use and mental health, including work dating back several decades. For example, Bucci and Freedman (1981) and Weintraub (1981) observed an increased usage of first person singular pronouns in individuals with depression. Oxman et al. (1982) showed that they could distinguish between paranoia and depression by applying linguistic analysis to speech.
32
+
33
+ Since then, advances in tools for text analytics have led to increased research in this area. Notably, the Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) is a widely used computerized text analysis tool that has been validated for psycholinguistic analysis. Some of the earliest studies using LIWC analyzed written text. For instance, researchers have used LIWC to study linguistic patterns in essays written by college students with and without depression (Rude et al., 2004) or in poems written by suicidal vs non-suicidal poets (Stirman and Pennebaker, 2001). More recently, there has been a proliferation of studies applying LIWC to online text, including social media data. LIWC has been used to study language patterns on social media for a variety of mental health disorders, including depression, anxiety, suicidality, and bipolar disorder (De Choudhury et al., 2013; Shen and Rudzicz, 2017; Coppersmith et al., 2014, 2016). In addition to LIWC, other methods used to study the linguistic patterns of mental illness include character and word models (Coppersmith et al., 2014; Tsugawa et al., 2013) and topic modeling (Resnik et al., 2015; Preotiuc-Pietro et al., 2015).
34
+
35
+ § 2.2 STUDYING MENTAL HEALTH VIA SOCIAL MEDIA
36
+
37
+ In the past decade, social media has emerged as a powerful tool for understanding human behavior, and correspondingly mental health. A growing number of studies have applied computational methods to data collected from social media platforms in order to characterize behavior associated with mental health illnesses and to detect and forecast mental health outcomes (see Chancellor and De Choudhury (2020) for a comprehensive review).
38
+
39
+ Reddit is a particularly well-suited platform for studying mental health due to its semi-anonymous nature, which encourages user honesty and reduces inhibitions associated with self-disclosure (De Choudhury and De, 2014). Additionally, Reddit contains subreddits that act as mental health support forums (e.g., r/Anxiety, r/depression, r/SuicideWatch), which enable a more targeted analysis of users experiencing different mental health conditions. A number of existing works have focused on characterizing patterns of discourse within these mental health communities on Reddit. These include studies that have analyzed longitudinal trends in topic usage and word choice (Chakravorti et al., 2018), the relationship between user participation styles and topic usage (Feldhege et al., 2020), and the discourse patterns specific to self-disclosure, social support, and anonymous posting (Pavalanathan and De Choudhury, 2015; De Choudhury and De, 2014).
40
+
41
+ Other studies of Reddit mental health communities have aimed to quantify and forecast changes in user behavior. De Choudhury et al. (2016) presented a model for predicting the likelihood that users transition from discussing mental health generally to engaging in suicidal ideation. Li et al. (2018) analyzed linguistic style measures associated with increasing vs decreasing participation in mental health subreddits over the course of a year. Kumar et al. (2015) examined how posting activity in r/SuicideWatch changes following a celebrity suicide. Our work similarly focuses on analyzing temporal patterns in user activity, but we aim to characterize changes associated with COVID-19.
42
+
43
+ § 2.3 MENTAL HEALTH AND COVID-19
44
+
45
+ Since the first cases of COVID-19 were reported in December 2019, there have been a number of preliminary studies of its impact on mental health. In a survey of the general public of China, a majority of respondents perceived the psychological impact of the outbreak to be moderate-to-severe and about one-third reported experiencing moderate-to-severe anxiety (Wang et al., 2020). Studies of the impact of COVID-19 among residents of Liaoning Province, China (Zhang and Ma, 2020) and the adult Indian population (Roy et al., 2020) also found notable rates of mental distress.
46
+
47
+ There is a set of studies that have examined the mental health consequences of COVID-19 by analyzing online behaviors. Jacobson et al. (2020) explored the short-term impact of stay-at-home orders in the United States by analyzing changes in the rates of mental health-related Google search queries immediately after orders were issued. Their results showed that rates of mental health queries increased leading up to the issuance of stay-at-home orders, but then plateaued after they went into effect; however, they did not consider the longer-term implications of the stay-at-home orders on mental health. Li et al. (2020) measured psycholinguistic attributes of posts on Weibo, a Chinese social media platform, before and after the Chinese National Health Commission declared COVID-19 to be an epidemic. Their findings showed that expressions of negative emotions and sensitivity to social risks increased following the declaration. Wolohan (2020) used a Long Short-Term Memory model to classify depression among Reddit users in April 2020, finding a higher than normal depression rate.
48
+
49
+ Our work similarly aims to measure changes in online behavior as a means of understanding the relationship between COVID-19 and mental health. However, two notable differences are: (1) instead of analyzing the short-term impact of a specific COVID-related event, we examine more general changes that have occurred during a three-month period of the outbreak; and (2) we focus our analysis on activity within mental health forums, which allows us to examine the impact of COVID- 19 specifically on individuals who have expressed mental health concerns.
50
+
51
+ § 3 DATA
52
+
53
+ We collect Reddit posts from three mental health subreddits using the Pushshift API ${}^{1}$ (Baumgartner et al., 2020): r/Anxiety, r/depression, and r/SuicideWatch, from January 2017 to May 2020. The reasons for analyzing these three subreddits are twofold: first, over the three and a half years represented in our data, these subreddits have a significant amount of activity ($\geq 40$ posts every day), making it feasible to treat daily values as a time series. Second, because the subreddits provide support for different mental health disorders, their users may have been affected differently by COVID-19. We separate the data into two time periods: pre-COVID (January 1, 2017 - February 29, 2020) and post-COVID (March 1, 2020 - May 31, 2020), roughly delineating when COVID-19 began to have a serious impact on those in the United States, where the majority of Reddit users are concentrated. ${}^{2}$ This choice of dates was informed by our analysis of the rates at which COVID-19 related words were discussed in each subreddit (see Section 5.1), which we found hovered around 0-5% before rising sharply near the beginning of March.
54
+
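As a rough illustration of the collection step (not the authors' code), a Pushshift submission query for one subreddit and date range could be assembled as below; the endpoint and parameter names reflect the public Pushshift API at the time of the study and may since have changed.

```python
from urllib.parse import urlencode

# Hypothetical sketch: build a Pushshift submission-search URL for a
# subreddit over an epoch-time window [after, before).
BASE = "https://api.pushshift.io/reddit/search/submission/"

def build_query_url(subreddit, after_epoch, before_epoch, size=100):
    """Return a Pushshift search URL for posts in [after, before)."""
    params = {
        "subreddit": subreddit,
        "after": after_epoch,
        "before": before_epoch,
        "size": size,
        "sort": "asc",
    }
    return BASE + "?" + urlencode(params)

# 2017-01-01 to 2020-06-01 (UTC epoch seconds)
url = build_query_url("Anxiety", 1483228800, 1590969600)
print(url)
```

In practice one would page through results by advancing `after` to the timestamp of the last post returned.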
55
+ | Year | r/Anxiety | r/depression | r/SuicideWatch |
+ |------|-----------|--------------|----------------|
+ | 2017 | 95 | 279 | 91 |
+ | 2018 | 164 | 449 | 188 |
+ | 2019 | 211 | 622 | 285 |
+ | 2020 | 243 | 618 | 370 |
72
+
73
+ Table 1: Average number of posts per day across the three subreddits in our dataset.
74
+
75
+ We exclude posts where the author or text is marked as '[removed]' or '[deleted]', because posts with deleted authors offer no value for user count metrics, and deleted content means that we are unable to capture linguistic signals (see Section 4.1 for more details on these metrics). Table 1 shows the average number of daily posts for r/Anxiety, r/depression, and r/SuicideWatch.
76
+
77
+ § 4 METHODOLOGY
78
+
79
+ Our goal is to identify how mental health subreddit activity has changed during the pandemic. We first create time series for a number of metrics that could be affected by the pandemic, encompassing activity levels and text content (Section 4.1). We then use a time series intervention analysis technique to determine whether there are significant changes in our metrics during the pandemic (Section 4.2).
80
+
81
+ § 4.1 REDDIT ACTIVITY METRICS
82
+
83
+ We begin by creating a lexicon of words that are commonly used to refer to COVID-19. This allows us to determine the extent to which users in each subreddit are discussing COVID-19, and also gives us a clearer idea of when COVID-19 began to directly affect discussion in the mental health subreddits. We based the lexicon on a set of Twitter search keywords from Huang et al. (2020), and added six additional words that we believed would be indicative of discussion about COVID-19 (see the full lexicon in Appendix A).
84
+
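The lexicon-matching metric can be sketched in a few lines; the stand-in lexicon below is a hypothetical subset, not the paper's full word list (which is in Appendix A).

```python
import re
from collections import defaultdict

# Illustrative sketch (not the authors' code): given (date, text) pairs and a
# small stand-in lexicon, compute the percentage of posts per day that
# mention at least one COVID-19 related word.
LEXICON = {"covid", "coronavirus", "quarantine", "pandemic"}  # stand-in subset

def daily_mention_rate(posts):
    """posts: iterable of (date_str, text). Returns {date: percent of posts}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for date, text in posts:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        totals[date] += 1
        if tokens & LEXICON:
            hits[date] += 1
    return {d: 100.0 * hits[d] / totals[d] for d in totals}

posts = [
    ("2020-03-01", "Worried about the coronavirus outbreak"),
    ("2020-03-01", "Can't sleep again"),
    ("2020-02-01", "Work stress"),
]
print(daily_mention_rate(posts))  # → {'2020-03-01': 50.0, '2020-02-01': 0.0}
```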
85
+ ${}^{1}$ As with other social media datasets, there may be noise in the form of API changes and data removed after collection. For the dates involved in our study, static Pushshift dump files were not yet available.
86
+
87
+ ${}^{2}$ https://www.alexa.com/siteinfo/reddit.com
88
+
89
+ To study changes in the number of users seeking mental health support in subreddits, we record the author usernames for each post in our dataset. Since individuals can create multiple accounts under different usernames, the number of unique usernames associated with posts is likely not equal to the true number of unique users; however, it is a reasonable proxy.
90
+
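A minimal sketch of this daily-active-users proxy, assuming posts arrive as (date, author) pairs and that deleted authors are filtered as described in Section 3:

```python
from collections import defaultdict

# Count distinct posting usernames per day as a proxy for daily active users.
def daily_unique_authors(posts):
    """posts: iterable of (date_str, author). Returns {date: n_unique_authors}."""
    authors = defaultdict(set)
    for date, author in posts:
        if author != "[deleted]":  # excluded, as in the dataset filtering
            authors[date].add(author)
    return {d: len(a) for d, a in authors.items()}

posts = [("2020-03-01", "u1"), ("2020-03-01", "u1"),
         ("2020-03-01", "u2"), ("2020-03-02", "[deleted]")]
print(daily_unique_authors(posts))  # → {'2020-03-01': 2}
```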
91
+ To study changes in content that occur during the pandemic, we use the LIWC lexicon (Pennebaker et al., 2015) and Latent Dirichlet Allocation (LDA) topic modeling (Blei et al., 2003). The LIWC lexicon consists of seventy-three hierarchical psycholinguistic word categories, encapsulating properties including linguistic categories (e.g., 1st person plural pronouns, verbs), emotions (e.g., anxiety, sadness), time (e.g., present, future), and personal concerns (e.g., work, money, death). To capture the discussion topics that are common in the r/Anxiety, r/depression, and r/SuicideWatch subreddits specifically, we train a topic model on posts from these subreddits. We ensure that discussions from each of the subreddits are equally represented in our training dataset by randomly downsampling the posts from the subreddits with more data. We use the implementation of LDA topic modeling provided in the MALLET toolkit (McCallum, 2002) and train models with $k = 5, 10, \ldots, 40$ topics. We select a single model to use in our analysis by examining their coherence scores, a measure of the semantic similarity of high probability words within each topic (Mimno et al., 2011). As coherence scores tend to increase with increasing $k$, we select $k$ as the first local maximum of coherence scores, which we found to be $k = 25$.
92
+
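The "first local maximum" selection rule can be sketched as below; the coherence scores are made-up values for illustration, not the paper's actual measurements.

```python
# Choose k at the first local maximum of the coherence-score curve.
def first_local_max(ks, scores):
    """Return the k whose score exceeds both neighbours; an endpoint counts
    as a local maximum if it beats its single neighbour."""
    for i, k in enumerate(ks):
        left_ok = i == 0 or scores[i] > scores[i - 1]
        right_ok = i == len(ks) - 1 or scores[i] > scores[i + 1]
        if left_ok and right_ok:
            return k
    return ks[-1]

ks = [5, 10, 15, 20, 25, 30, 35, 40]
scores = [0.31, 0.35, 0.38, 0.41, 0.44, 0.42, 0.45, 0.43]  # hypothetical
print(first_local_max(ks, scores))  # → 25
```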
93
+ In Appendix B, we show the 25 topics obtained from our topic model, along with the highest probability words associated with each topic. We also provide labels that summarize the essence of each topic, which we created by examining their representative words. Common themes of discussion include: daily life concerns (e.g., school, work, sleep and routine), personal relationships (e.g., friends, family, relationships), and mental health struggles (e.g., anxiety, suicide, medical treatment).
94
+
95
+ When using text from posts, we remove special characters and sequences, such as newlines, quotes, emails, and tables. To represent the text of a post, we concatenate the title with the text content, as was done in prior work (Chakravorti et al., 2018). We apply additional pre-processing steps for our topic modeling analysis: (1) we remove a set of common stopwords that do not appear in the LIWC lexicon (we kept those in LIWC as they have been found to have psychological meaning), (2) we form bigrams from pairs of words that commonly appear together, and (3) we lemmatize each word.
96
+
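A simplified sketch of the pre-processing pipeline described above; the stopword list and bigram set here are toy stand-ins, and the lemmatization step is omitted for brevity.

```python
import re

# Toy pre-processing: lowercase/tokenize, drop non-LIWC stopwords, and join
# frequent word pairs into bigram tokens.
STOPWORDS = {"the", "a", "an", "of", "to"}              # stand-in list
BIGRAMS = {("panic", "attack"), ("mental", "health")}   # stand-in pairs

def preprocess(text):
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in STOPWORDS]
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in BIGRAMS:
            out.append(tokens[i] + "_" + tokens[i + 1])  # join frequent pair
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(preprocess("The panic attack came out of nowhere"))
# → ['panic_attack', 'came', 'out', 'nowhere']
```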
97
+ § 4.2 TIME SERIES ANALYSIS
98
+
99
+ We treat the task of identifying changes in subreddit activity patterns as a time series intervention analysis problem. Our basic approach involves: (1) fitting a time series model to the pre-COVID observations for each of the metrics described above and then (2) examining how the values forecasted by the model compare to the observed values during the post-COVID time period. It is worth noting that the one study we found examining the impact of an event on activity within mental health subreddits employs a different approach: they use a t-test to compare the observations from "before" vs "after" the event (Kumar et al., 2015). However, their problem setup differs from ours in that they consider a much shorter period of time (four weeks total), so the effects of seasonality (regular changes that recur each year) and longer-term trends are likely reduced. In contrast, we find that there is often a strong trend over time and seasonal component in our data, making a direct comparison of two time periods with a t-test unreliable.
100
+
101
+ We smooth each time series and remove day-of-week related fluctuations by computing a seven-day rolling mean over the time series. We use the Prophet model (Taylor and Letham, 2018) to create a model of the period before COVID-19. This model was initially created by Facebook to forecast time series on their platform, such as the number of events created per day or the number of active users; we find that our time series, also compiled from social media, have many similar properties. The Prophet model is an additive regression model with three components:
102
+
103
+ $$
104
+ y\left( t\right) = g\left( t\right) + s\left( t\right) + h\left( t\right) + {\epsilon }_{t} \tag{1}
105
+ $$
106
+
107
+ The trend is encapsulated by $g(t)$, a piecewise linear model. The seasonality of the data is captured by $s(t)$, which is approximated using a Fourier series. As we smooth our data on a weekly basis, we utilize only yearly seasonality, excluding the optional weekly and daily seasonality components. The third term, $h(t)$, represents holidays; we find that adding the default list of US holidays provided by Prophet reduces error for most of our time series in the pre-COVID period, likely because the Reddit population is centered in the United States. Finally, ${\epsilon }_{t}$ represents the error, in this case fluctuations in the time series that are not captured by the model.
108
+
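The seven-day rolling mean applied before model fitting can be sketched as below, with a toy daily-post series; a trailing window is one reasonable convention (the paper does not specify window alignment).

```python
# Smooth a daily series with a trailing rolling mean to damp
# day-of-week fluctuations.
def rolling_mean(values, window=7):
    """Trailing rolling mean; the first window-1 positions use a shorter window."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

daily_posts = [40, 44, 38, 50, 47, 41, 45, 60]
smoothed = rolling_mean(daily_posts)
print(round(smoothed[-1], 2))  # mean of the last seven values → 46.43
```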
109
+ After training the model on the pre-COVID data, we predict values for the post-COVID period. If we assume that there is no change during this time period, we would expect the predicted values to be near the true values, given that the model does a good job fitting the trend and seasonal components. The model computes uncertainty intervals over the predicted values by simulating ways in which the trend may change during the period of the forecast. We use this method to compute the ${95}\%$ prediction interval. Our null hypothesis is that there has been no change in trend. In this case, we would expect $5\%$ of the data in the post-COVID period to fall outside of the prediction interval. Our alternate hypothesis is that there was a change in the trend of the time series (which may be attributable to COVID-19). In this case, more than $5\%$ of the data in the post-COVID period will fall outside of the prediction interval. We apply a one-sample proportion test to assess whether the proportion of observations outside of the prediction interval in the post-COVID period is significantly greater than $5\%$ . The details of this test are in Appendix C.
110
+
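The one-sample proportion test can be sketched with a normal approximation; the outlier counts below are hypothetical, and the paper's exact test details are in its Appendix C.

```python
from math import sqrt
from statistics import NormalDist

# One-sided, one-sample proportion z-test: is the share of post-COVID
# observations outside the 95% prediction interval significantly greater
# than the 5% expected under the null hypothesis of no trend change?
def proportion_test(n_outliers, n_obs, p0=0.05):
    """Normal-approximation z-test for H1: p > p0. Returns (z, p_value)."""
    p_hat = n_outliers / n_obs
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n_obs)
    p_value = 1 - NormalDist().cdf(z)
    return z, p_value

# e.g. 30 of 92 post-COVID days outside the interval (hypothetical counts)
z, p = proportion_test(30, 92)
print(f"z = {z:.2f}, p = {p:.4f}")
```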
111
+ § 5 RESULTS AND DISCUSSION
112
+
113
+ § 5.1 HOW OFTEN DO PEOPLE IN DIFFERENT MENTAL HEALTH SUBREDDITS DISCUSS COVID-19?
114
+
115
+ Using our COVID-19 lexicon (Section 4.1), we compute the percentage of posts per day that mention any words related to COVID-19, as shown in Figure 1. We see that COVID-19 began to have a serious impact on discussions in all three subreddits around the beginning of March 2020, as is clear from the spikes in Figure 1. Although COVID-19 is discussed on all subreddits, we see a stark difference in the volume of discussion across each of them; in r/Anxiety, discussion of COVID-19 is more frequent than it is in r/depression or r/SuicideWatch, and begins earlier.
116
+
117
118
+
119
+ Figure 1: Percent of posts mentioning COVID-19 related words across mental health subreddits.
120
+
121
+ Discussion When choosing the date to consider as the beginning of the post-COVID period in our time series analysis, we considered March 1st, 2020 as a sensible date, as it aligns with the time at which the United States (where the majority of Reddit users reside) began to take COVID-19 seriously. March 1st closely followed the first announced COVID-19 death in the United States on February 28th, 2020, and preceded state lockdowns and school closures. The spikes at the beginning of March suggest that this date also reflects the time at which COVID-19 began to have a notable impact on mental health subreddit discussions.
122
+
123
+ Although most COVID-19 related discussion started in March, we also see that a small spike in discussion rates occurred earlier in r/Anxiety. This suggests that users in this subreddit began to notice some impact from COVID-19 in late January, when reports of lockdowns in China first appeared in the news. Based on the early start and elevated rate of COVID-19 discussion within r/Anxiety, we conclude that all of our metrics are likely to be more strongly affected by COVID-19 in r/Anxiety.
124
+
125
+ § 5.2 HAS COVID-19 CHANGED THE NUMBER OF USERS SEEKING SUPPORT IN MENTAL HEALTH SUBREDDITS?
126
+
127
+ We report the daily number of unique users who posted in each subreddit in Figure 2. We observe an increase in the number of users who posted in the r/Anxiety subreddit in the post-COVID period. Meanwhile, in both r/depression and r/SuicideWatch, we find significant decreases in the number of users who posted. In r/depression, we observe a substantial drop in posting rates around mid-March. Activity in this subreddit remains abnormally low into late-April, when it starts to revert back towards the forecasted values. In r/SuicideWatch, the drop in user activity is less extreme, and we see that the activity levels eventually return to their predicted values.
128
+
129
130
+
131
+ Figure 2: Daily active users over time. The grey line is the Prophet forecast, the shaded area is the ${95}\%$ prediction interval, and the black line is the true value. Subreddits marked with * have a statistically significant percentage of outliers $\left( {\alpha = {0.05}}\right)$ .
132
+
133
+ Discussion The increase in users posting on r/Anxiety is consistent with prior work that has found that epidemics often lead to increased rates of anxiety (Torales et al., 2020). One explanation for the reduction of activity within r/depression could be that fewer users are depressed and don't feel the need to post on the support forum. If this is the case, our findings contrast with prior work that found that depressive symptoms are commonly observed during pandemics (Chew et al., 2020). However, there are multiple possible alternatives; for example, depression can also cause people to socially withdraw (Mayo Clinic, 2018), so an increase in depression rates could lead to a reduction in posting activity. Another finding from prior work is that delayed depression is common following disaster events (Pennebaker and Harber, 1993; Nandi et al., 2009). Our analysis covers only the beginning of the pandemic, so it likely wouldn't capture this phenomenon. Additional analysis focused on the causes driving the reduction in activity and how this pattern changes in the long-term is needed to make a more conclusive statement about the effects of COVID-19 on depression.
134
+
135
+ § 5.3 HAS COVID-19 LED TO CHANGES IN THE DISCUSSIONS USERS HAVE SURROUNDING MENTAL HEALTH?
136
+
137
+ To determine what changes have occurred in conversations surrounding mental health, we use two types of features: LIWC categories and topics obtained from an LDA model. The LIWC features give us a better idea of how common language dimensions have changed, while the LDA-derived topics allow us to explore areas of discussion that are typically of concern in these subreddits. LIWC has been used extensively in mental health analysis, and there are some LIWC categories and LDA-derived topics that overlap, such as ANXIETY, DEATH, and FAMILY, but there are also unique categories covered by each method, such as WE and MOTIVATION. For each of the metrics, we examine changes that have occurred since COVID-19 by computing the proportion of outliers produced by our forecasting model (see Section 4.2) in the post-COVID period. We acknowledge that this analysis may occasionally capture misleading changes. For example, the death keyword may yield changes in the suicide topic (see Appendix B) that are actually related to infectious disease, and observing increased mentions of family does not indicate the polarity of their sentiment. We leave it to future work to do a more in-depth analysis of the context surrounding specific outliers that are detected.
138
+
139
+ | r/Anxiety Category | % Outliers | r/depression Category | % Outliers | r/SuicideWatch Category | % Outliers |
+ |---|---|---|---|---|---|
+ | Motion* | 79 ↓ | You* | 55 ↓ | Prep* | 33 ↑ |
+ | Work* | 73 ↓ | Conj* | 51 ↓ | Space* | 33 ↑ |
+ | I* | 68 ↓ | Motion* | 45 ↓ | Netspeak* | 23 ↑ |
+ | Body* | 61 ↑ | Quant* | 43 ↑ | Assent* | 23 ↓ |
+ | PPron* | 54 ↓ | Family* | 40 ↑ | Informal* | 22 ↑ |
+ | Relativ* | 54 ↓ | Article* | 39 ↓ | Cause* | 20 ↑ |
+ | We* | 50 ↑ | Pronoun | 38 ↑ | Affiliation | 17 ↓ |
+ | Bio* | 49 ↑ | Reward* | 36 ↓ | FocusFuture | 16 ↓ |
+ | Percept* | 42 ↑ | Feel* | 35 ↓ | NegEmo | 15 ↑ |
+ | Certain* | 41 ↑ | FocusPast | 33 ↑ | Conj | 15 ↓ |
177
+
178
+ Table 2: Ten LIWC categories with the highest proportion of outliers in each subreddit. Arrows mark the direction in which the mean of the outliers shifted from the predicted mean. Categories marked with * have a statistically significant percentage of outliers (with Bonferroni correction; $\alpha = {0.05}$ before correction).
179
+
180
+ § 5.3.1 LIWC ANALYSIS
181
+
182
+ In Table 2, we show the ten LIWC categories with the most outliers (outside of the 95% prediction interval) from March to May of 2020. We observe a lack of consistency between the subreddits, both in the number and direction of the outliers.
183
+
184
+ We see decreases in r/Anxiety and r/depression in the MOTION category. Categories such as BIO and BODY tend to increase in r/Anxiety; however, this pattern is not present in other subreddits. We see consistent changes in time orientation (e.g., FOCUSPAST, FOCUSFUTURE) across subreddits; a higher focus on the past in r/depression, and a lower focus on the future in r/SuicideWatch. While it is not among the categories with the most outliers, there is a statistically significant drop in FOCUSFUTURE on r/Anxiety and r/depression. We also see changes in pronoun usage; the most notable and consistent change across the subreddits is that the usage of WE increases significantly, especially in the early period of COVID-19 (Figure 3a). While there is a significant decrease in I words in r/Anxiety, there is in fact an increase in r/depression (Figure 3b). Finally, we see a notable drop in the WORK category (Figure 3c).
185
+
186
187
+
188
+ Figure 3: Average daily percent of words across posts from a selection of LIWC categories over time. The grey line is the Prophet forecast, the shaded area is the 95% prediction interval, and the black line is the true value. Subreddits marked with * have a statistically significant percentage of outliers (with Bonferroni correction; $\alpha = 0.05$ before correction).
189
+
190
+ Discussion We see changes in some categories that appear to be directly related to the new experience of living during a global pandemic under social distancing rules; this includes the decrease in MOTION, which makes sense as people are traveling and moving around far less. The increase in categories such as BIO and BODY within r/Anxiety may reflect concerns regarding the physical health implications of COVID-19. Moreover, it appears that physical health concerns are especially salient for people who experience anxiety, as the rise in these categories is not present in the other subreddits. The statistically significant drop in FOCUSFUTURE within r/Anxiety and r/depression indicates that users are less inclined to speak about their concerns for the future in light of the more pressing current concerns related to the pandemic.
191
+
192
+ The sharp increase in WE (Figure 3a) indicates a general feeling of community and togetherness, which speaks positively to the support that those in these mental health communities are getting during the pandemic. This finding aligns with a study by Zhang and Ma (2020) on the effects of COVID-19 on mental well-being in China, which found that participants received increased support from friends and family during the pandemic. In addition, seeking social support was listed as a common coping strategy during infectious disease outbreaks by Chew et al. (2020). An increase in "we" is not specific to mental health communities; researchers have found increases in usage of the pronoun during the early stages of COVID-19 on other subreddits (Ashokkumar and Pennebaker, 2020). The decrease in I words in r/Anxiety is accompanied by an increase in r/depression (Figure 3b). The increased usage of the I pronoun is concerning because it has been shown to correlate with depression, indicating that an increase in its use could be related to worsening symptoms (Rude et al., 2004).
193
+
194
+ The drop in discussion of WORK (Figure 3c) is unexpected, as the economic downturn could be a significant motivator of posts. The drop indicates that up to this point, the stress and change associated with adapting to working from home, or worse, losing one's job has not been a frequent topic of discussion in these forums. This drop could be due to a decrease in work-related stressors, which have been shown to cause anxiety and depression (Melchior et al., 2007; Cherry, 1978), or it could simply indicate that the stressors became secondary to other concerns. It is also possible that compared to the general population, Reddit users are more likely to have jobs that can be done remotely during the pandemic, as they are more likely to have college degrees than the general population. ${}^{3}$
195
+
196
+ § 5.3.2 TOPIC ANALYSIS
197
+
198
+ We report the ten topics with the highest proportion of outliers for each subreddit during the COVID-19 period (March to May 2020) in Table 3. One notable trend is an increase in the amount of discussion related to family; we find that the FAMILY AND HOME topic increased significantly in all three subreddits and the FAMILY AND CHILDREN topic increased significantly in r/depression. Figure 4b shows how the usage of the FAMILY AND HOME topic has changed since January 2019 within each subreddit. While there are noticeable increases in all three subreddits, we see a particularly large spike in r/Anxiety starting around mid-March. Within all subreddits, we see a significant decrease in the TRANSPORT AND DAILY LIFE topic (see Figure 4c), which is associated with words such as "drive," "car," "time," and "day". Mirroring the reduction of WORK-related language we observed in Section 5.3.1, we also find that there has been a significant decrease in discussion of the SCHOOL and WORK topics within the r/Anxiety and r/depression subreddits.
199
+
200
+ We observe significant changes in topics that are explicitly related to mental health. One of the most prominent trends is a significant increase in discussions of ANXIETY and its symptoms (keywords include: "panic," "heart," and "chest"). As seen in Figure 4a, we see a spike in ANXIETY in mid-March in all three subreddits; however, whereas we see a return to a typical level in both r/depression and r/SuicideWatch, within r/Anxiety, ANXIETY discussion rates have remained abnormally high all the way through the end of May. We find that both INFORMATION SHARING (keywords include: "post," "read," "share," "find," and "hope") and COMMUNICATION (keywords include: "talk," "call," and "message") have become more frequent topics of discussion.
201
+
202
+ Discussion Several of the results in Table 3 seem to reflect the disruption to normal daily life caused by COVID-19 and the resulting quarantine measures. This includes the increase in the FAMILY AND CHILDREN topic (Figure 4b), which is largely expected, as quarantine policies implemented to help contain COVID-19 have resulted in many people spending more time at home and with family than they previously had. While there are noticeable increases in all three subreddits, we see a particularly large spike in r/Anxiety starting around mid-March. Prior studies on disease outbreaks have found that uncertainty regarding the wellbeing of loved ones is a common source of anxiety during epidemics, which may help to explain this finding (Chew et al., 2020). Another contributing factor may be the emergence of new family responsibilities, such as childcare and home-schooling, that many people have had to take on in the face of closures caused by the pandemic. The decrease in the TRANSPORT AND DAILY LIFE topic (Figure 4c) is intuitive; quarantine practices following COVID-19 have led to a large reduction in driving and other forms of transportation (Domonoske and Adeline, 2020) and, more generally, to a disruption in daily lifestyles. To the extent that these results indicate an abandonment of routine, they are somewhat concerning, as evidence from prior outbreaks suggests that getting back into normal routines helps to reduce loneliness and anxiety during quarantines (Huremović, 2019). The decreases in discussion of the SCHOOL and WORK topics may indicate that these previously common sources of stress have now become secondary concerns compared to the more immediate concerns associated with COVID-19.
203
+
204
+ The increases in the ANXIETY topic, especially on r/Anxiety, are aligned with existing research that has found that anxiety and the somatic symptoms associated with it are common psychological responses to epidemics (Chew et al., 2020). Further, studies of prior epidemics have found that feelings of anxiety and fear can persist even after the disease itself has been contained (Usher et al., 2020).
205
+
206
+ The increase in the INFORMATION SHARING and COMMUNICATION topics may be tied to the effects of social distancing measures, which have limited in-person interactions and led people to increasingly turn to digital methods of communication. These observations may also reflect a desire to seek out information related to COVID-19; individuals who experience health anxiety are more likely to exhibit online health information seeking behavior (McMullan et al., 2019). The increase in mentions of words related to social media (e.g., "post," "share") is somewhat worrisome; studies of disaster events have found that both more frequent social media use and exposure to conflicting information online (a widely acknowledged issue with COVID-19 (Kouzy et al., 2020)) lead to higher stress levels (Torales et al., 2020). However, the rise of the INFORMATION SHARING topic, especially in its relation to words like "share," "hope," and "story," could also be indicative of a collective coping process, in which individuals come together for social support. As noted in Section 5.3.1, this type of coping strategy has frequently been observed during past disease outbreaks (Chew et al., 2020) and may also be reflected by the increase in the usage of WE we saw for discussions in r/Anxiety.
207
+
208
+ ${}^{3}$ https://www.statista.com/statistics/517222/reddit-user-distribution-usa-education/
209
+
210
+ | r/Anxiety Topic | % Outliers | r/depression Topic | % Outliers | r/SuicideWatch Topic | % Outliers |
+ |---|---|---|---|---|---|
+ | Transport and Daily Life* | 88 ↓ | Family and Home* | 83 ↑ | Transport and Daily Life* | 70 ↓ |
+ | Anxiety* | 75 ↑ | Transport and Daily Life* | 82 ↓ | Family and Home* | 50 ↑ |
+ | Information Sharing* | 68 ↑ | Information Sharing* | 62 ↑ | Friends* | 32 ↓ |
+ | School* | 62 ↓ | Work* | 55 ↓ | Anxiety* | 28 ↑ |
+ | Family and Home* | 48 ↑ | Suicide* | 50 ↓ | Family and Children | 22 ↑ |
+ | Life and Philosophy* | 34 ↑ | Sleep and Routine* | 50 ↓ | "Game-over" Mentality and Swearing | 15 ↑ |
+ | Work* | 33 ↓ | Communication | 48 ↑ | Suicide | 14 ↓ |
+ | "Game-over" Mentality and Swearing* | 29 ↓ | "Game-over" Mentality and Swearing* | 39 ↑ | Worry | 14 ↑ |
+ | Experience and Mental State* | 27 ↑ | Medical Treatment | 36 ↓ | Communication | 13 ↑ |
+ | Motivation* | 22 ↓ | Family and Children* | 34 ↑ | People and Behavior | 13 ↑ |
248
+
249
+ Table 3: Ten topics with the most outliers for r/Anxiety, r/depression, and r/SuicideWatch. Arrows mark the direction in which the mean of the outliers shifted from the predicted mean. Topics marked with * have a statistically significant percentage of outliers (with Bonferroni correction; $\alpha = {0.05}$ before correction).
250
+
251
252
+
253
+ Figure 4: Average daily posterior probability of selected topics in posts over time. The grey line is the Prophet forecast, the shaded area is the 95% prediction interval, and the black line is the true value. Subreddits marked with * have a statistically significant percentage of outliers (with Bonferroni correction; $\alpha = {0.05}$ before correction).
254
+
255
+ § 6 CONCLUSIONS
256
+
257
+ In this study, we examined how COVID-19 has influenced the online behavior of individuals who discuss mental health concerns by analyzing activity within the r/Anxiety, r/depression, and r/SuicideWatch communities on Reddit. We found substantial evidence of increases in anxiety; we observed an increase in user activity in r/Anxiety, as well as significant increases in discussions of anxiety and the symptoms associated with it. Interestingly, we observed a decrease in activity within the r/depression and r/SuicideWatch subreddits. The literature on the impact of disease outbreaks on depression rates contains somewhat contradictory findings; we therefore believe that this is an interesting area for future work.
258
+
259
+ We also observed interesting changes in the content of discussions within each subreddit. Our results suggest that concerns related to COVID-19, such as health and family, have become more prominent discussion topics compared to other common concerns, such as work and school, which have generated relatively less discussion since the outbreak. While our findings largely confirm the warnings offered by psychiatrists regarding the potential for COVID-19 to have an adverse effect on mental health, we also found some reason for optimism; increases in the usage of WE as well as the INFORMATION SHARING topic (associated with words such as "story" and "hope"), suggest a heightened sense of community and shared experience, which may help individuals cope with these stressful times.
+
+ § ACKNOWLEDGMENT
+
+ We are grateful to the Michigan AI lab for discussions that led to this project, and to the statistical consultants at CSCAR who helped with developing our process for hypothesis testing, especially Kerby Shedden and Thomas Fiore. This material is based in part on work supported by the Precision Health initiative at the University of Michigan, the NSF (grant #1815291), and the John Templeton Foundation (grant #61156). Any opinions, findings, conclusions, or recommendations in this material are those of the authors and do not necessarily reflect the views of the Precision Health initiative, the NSF, or the John Templeton Foundation.
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/4nEHDnoLAmK/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,335 @@
 
+ # CORA: A Deep Active Learning Covid-19 Relevancy Algorithm to Identify Core Scientific Articles
+
+ Zubair Afzal
+
+ Elsevier, The Netherlands
+
+ m.afzal.1@elsevier.com
+
+ Vikrant Yadav
+
+ Elsevier, The Netherlands
+
+ v.yadav@elsevier.com
+
+ Olga Fedorova
+
+ Elsevier, The Netherlands
+
+ o.fedorova@elsevier.com
+
+ Vaishnavi Kandala
+
+ Elsevier, The Netherlands
+
+ v.kandala@elsevier.com
+
+ Janneke van de Loo
+
+ Elsevier, The Netherlands
+
+ j.loo.1@elsevier.com
+
+ Saber A. Akhondi
+
+ Elsevier, The Netherlands
+
+ s.akhondi@elsevier.com
+
+ Pascal Coupet
+
+ Elsevier, The Netherlands
+
+ p.coupet@elsevier.com
+
+ George Tsatsaronis
+
+ Elsevier, The Netherlands
+
+ g.tsatsaronis@elsevier.com
+
+ ## Abstract
+
+ Ever since the COVID-19 pandemic broke out, the academic and scientific research community, as well as industry and governments around the world, have joined forces in an unprecedented manner to fight the threat. Clinicians, biologists, chemists, bioinformaticians, nurses, data scientists, and all of the affiliated relevant disciplines have been mobilized to help discover efficient treatments for the infected population, as well as a vaccine to prevent the virus's further spread. In this combat against the virus responsible for the pandemic, the timely, accurate, peer-reviewed, and efficient communication of any novel research findings is key to any advancement. In this paper we present a novel framework to address the information need of efficiently filtering the scientific bibliography for relevant literature around COVID-19. The contributions of the paper are summarized as follows: we define and describe the information need that encompasses the major requirements for COVID-19 articles' relevancy, we present and release an expert-curated benchmark set for the task, and we analyze the performance of several state-of-the-art machine learning classifiers that may distinguish the relevant from the non-relevant COVID-19 literature.
+
+ ## 1 Introduction
+
+ The COVID-19 pandemic has been responsible for almost 10 million people infected worldwide, and has left close to 1 million people dead by mid-September 2020, according to the World Health Organization ${}^{1}$ . The whole world observes in awe the catastrophe that the pandemic is leaving behind; human lives, economies, and markets have been struck fiercely, as the scientific community and industry, united on all fronts, seek treatment and vaccine solutions against the disease caused by the 2019-nCoV virus.
+
+ In these times, where scientific advancements are sought and expected rapidly, lies the challenge of efficiently filtering the scientific literature for the most relevant articles that can help clinicians, nurses, biologists, chemists, bioinformaticians, data scientists, and other researchers operating in affiliated disciplines to combat the pandemic. All these research and practice protagonists have many, and heterogeneous, requirements on what constitutes a relevant COVID-19 article; more importantly, they have extremely limited time to scan large volumes of literature.
+
+ The risks faced in the aforementioned challenge are multiple. First, the information, in the form of scientific articles, needs to be timely. This requires acceleration of the whole peer-review and publication process for COVID-19 relevant articles, to enable the fastest possible communication of breaking scientific and clinical results. The information needs to be accurate as well; therefore, without jeopardizing quality, the editors and publishers of scientific content need to have in place fast-track review processes for these important articles. Furthermore, the information needs to be highly relevant for the aforementioned protagonists, who combat the pandemic at the forefront and have limited time for extensive literature reviews given their crucial duties. Last but not least, the challenge is becoming increasingly difficult, given that the volume of the relevant literature for COVID-19 is continuously growing; indicatively, the Elsevier published articles on COVID-19 ${}^{2}$ have grown in volume from a few tens of articles per week in March 2020 to almost 1,000 articles per week in June 2020, an increase of approximately two orders of magnitude in a period of 4 months.
+
+ ---
+
+ ${}^{1}$ https://covid19.who.int/
+
+ ---
+
+ In order to avoid "information choking", the community requires efficient data science solutions and respective initiatives that can help researchers navigate through this volume of information and focus on the most relevant articles based on their information need. Some examples of such initiatives, or enablers thereof, are:
+
+ - the TREC-COVID ${}^{3}$ , which follows the TREC series, well known to the information retrieval community, for building information retrieval test collections, and enables the development of novel document retrieval algorithms,
+
+ - data science challenges organized by Kaggle ${}^{4}$ , e.g., the COVID-19 Open Research Dataset Challenge (CORD-19) ${}^{5}$ ,
+
+ - public releases of COVID-19 relevant datasets of scientific articles, such as CORD-19 ${}^{6}$ , or full texts made available by PMC ${}^{7}$ , to which all scientific publishers contribute, and,
+
+ - publicly available and free to use research platforms, where researchers can navigate the COVID-19 and all relevant literature, and benefit from advanced text mining and natural language processing solutions, e.g., Elsevier's Coronavirus Research Hub ${}^{8}$ .
+
+ The majority of the aforementioned initiatives imply the existence of a COVID-19 scientific article relevancy mechanism that can filter the core literature on the pandemic, to be included in such collections, data science challenges, and platforms. In this paper we present such a framework, namely CORA, and we argue that it may constitute the basis for efficiently surfacing the core COVID-19 literature in a way that addresses the majority of the information needs of the protagonists who fight the pandemic. The contribution of CORA can be summarized as follows: (i) it defines the information need behind relevancy to COVID-19, having ingested the feedback of researchers and professionals in medicine, biology, chemistry, bioinformatics, and data science, (ii) it offers a benchmark set for the task, with labelled "relevant" and "non-relevant" COVID-19 scientific articles, and, (iii) it defines an efficient approach that combines search and machine learning to balance optimally between precision and recall for the task. The impact of such an approach is significant; first, it can help scientific publishers and editors flag early the submitted articles that are core to COVID-19, and ensure the acceleration of their review and final publication. It can also be used to filter large volumes of scientific literature and retain only the body of literature that is core to COVID-19. This can help accelerate the preparation and production of data science datasets and challenges aiming to address the pandemic. The presented framework is generic, and is described in detail so that it can be reproduced in any environment for these two purposes.
+
+ In the remainder of the paper we describe the information need of relevancy to COVID-19 (Section 2), the process used to create and validate the benchmark set for the training and tuning of the approach (Section 3), the details of the CORA framework (Section 4), as well as the results of various methods, including CORA, on the produced set (Section 5).
+
+ ## 2 COVID-19 Information Need for Relevant Scientific Literature
+
+ One of the largest publicly available datasets for COVID-19, namely CORD-19 (Wang et al., 2020), draws its contents from PubMed Central, bioRxiv, medRxiv, and the World Health Organization (WHO). All the major scientific publishers, such as Elsevier and Springer Nature, contribute to it, and have offered every help for its compilation. In the case of WHO, the data can be pulled from a hand-curated database of relevant literature compiled by the organization ${}^{9}$ . However, in the case of the remaining three sources, a generic keyword query on the title, abstract, and body text of the articles is used to filter the ones that are included in the collection. The query is shown in Figure 1.
+
+ ---
+
+ ${}^{2}$ All of the Elsevier articles pertaining to COVID-19 are made available to the community: https://www.elsevier.com/connect/coronavirus-information-center
+
+ ${}^{3}$ https://ir.nist.gov/covidSubmit/
+
+ ${}^{4}$ https://www.kaggle.com/
+
+ ${}^{5}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge
+
+ ${}^{6}$ https://allenai.org/data/cord-19
+
+ ${}^{7}$ https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/
+
+ ${}^{8}$ https://www.elsevier.com/clinical-solutions/coronavirus-research-hub
+
+ ---
+
+ "COVID-19" OR "Coronavirus" OR "Corona virus" OR "2019-nCOV" OR "SARS-COV" OR "MERS-COV" OR "Severe Acute Respiratory Syndrome" OR "Middle East Respiratory Syndrome"
+
+ Figure 1: The keyword-based query used to retrieve COVID-19 relevant documents for CORD-19 from PubMed Central, bioRxiv, and medRxiv. Papers that match on these keywords in their title, abstract, or full text are included in the dataset.
+
+ The query used for the compilation of CORD-19 includes the fundamental keywords for the pandemic; however, the precision of this query is highly arguable. A scientific article may refer to COVID-19 or any of the coronaviruses for multiple reasons, and often the article can be deemed irrelevant by expert doctors, biologists, and chemists. For example, the article could refer to the financial consequences of COVID-19, or to its impact on some social aspects. It could even just refer to COVID-19 as the most recent example of a pandemic, without discussing the specific pandemic at all in a scientific, medical, or clinical context. We argue that the expert users who combat the pandemic have an underlying information need that is much more specific than the one expressed by the aforementioned query, and that there should be much more efficient mechanisms to filter the relevant core COVID-19 articles.
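
As an illustration of how such a keyword filter over-generates, the Figure 1 query can be coded as a simple case-insensitive boolean match; a minimal Python sketch (the two example abstracts are invented for illustration):

```python
import re

# Keyword list taken from the CORD-19 query (Figure 1).
KEYWORDS = [
    "COVID-19", "Coronavirus", "Corona virus", "2019-nCOV", "SARS-COV",
    "MERS-COV", "Severe Acute Respiratory Syndrome",
    "Middle East Respiratory Syndrome",
]
# One alternation over all keywords, case-insensitive, matching anywhere.
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def matches_cord19_query(text: str) -> bool:
    """True if the title/abstract/body text matches any query keyword."""
    return PATTERN.search(text) is not None

# A clinical abstract and a finance abstract both match, illustrating why
# keyword matching alone over-generates "relevant" articles.
clinical = "We evaluate antiviral treatment options for COVID-19 patients."
finance = "COVID-19 caused severe disruption in global equity markets."
print(matches_cord19_query(clinical))  # True
print(matches_cord19_query(finance))   # True
```

Both invented abstracts pass the filter, even though only the first would satisfy the experts' information need described below.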
+
+ As a first step for the creation of CORA, we interviewed experts in the fields of medicine, biology, chemistry, and bioinformatics who combat the pandemic, and attempted to extract their information needs. This resulted in a number of inclusion and exclusion criteria that represent the information need, and can be used to compile a benchmark dataset for identifying core COVID-19 articles. The inclusion and exclusion criteria are presented in Figure 2.
+
+ As illustrated, the protagonists who combat the pandemic are interested exclusively in the diagnosis, treatment, vaccine development, pathology, and virology of COVID-19, as well as in literature about other coronaviruses. Furthermore, the experts are also interested in how hospitals are addressing the pandemic, how health care systems manage it, and what the population statistics and demographics of the disease are. All experts were explicit that articles related to the impact of the pandemic on areas such as economy, education, transport, etc., are of secondary importance and should not be included in a core scientific COVID-19 collection aiming to aid the combat against the disease.
+
+ ## 3 Preparation of the Benchmark Set
+
+ One of the main challenges in any supervised learning task is to have high-quality, high-volume training data for the algorithms to learn optimally. In cases where training sets are not available, one must create a bespoke data set for the task at hand. Creating a valid, accurate, and large data set is a time-consuming and laborious task. Data sets are typically created by manual annotation of data points, e.g., scientific articles in our case, from a pool of randomly selected data points from a population. The size of the training dataset typically depends on factors such as task complexity, resources, time, and budget availability.
+
+ In order to create a benchmark dataset which includes both "relevant" and "non-relevant" articles according to the criteria illustrated in Figure 2, as a first step we applied the query illustrated in Figure 3 to the forward flow of Elsevier accepted articles over a period of 2 months. This query can be seen as a much more detailed version of the simple, generic keyword-based query illustrated in Figure 1 that was utilized for the compilation of the CORD-19 dataset. For the manual annotation of the documents returned by the query, we used active machine learning, more precisely the general approach described by Konyushkova et al. (Konyushkova et al., 2017). Active learning in this case provides an efficient way of selecting the right document sample(s) for labelling: the algorithm picks the examples that are most useful for the machine learning process to reach its full potential. We used BioBERT as the base classifier in the active learning pipeline.
+
+ ---
+
+ ${}^{9}$ http://tiny.cc/2n9jrz
+
+ ---
+
+ ![01963de6-47a6-751f-bc58-f075bd73d49b_3_216_191_1226_618_0.jpg](images/01963de6-47a6-751f-bc58-f075bd73d49b_3_216_191_1226_618_0.jpg)
+
+ Figure 2: Inclusion and exclusion criteria of scientific information derived by analyzing the information needs of research experts and practitioners who combat the COVID-19 pandemic, from the fields of medicine, biology, chemistry, and bioinformatics.
+
+ ![01963de6-47a6-751f-bc58-f075bd73d49b_3_258_968_1139_318_0.jpg](images/01963de6-47a6-751f-bc58-f075bd73d49b_3_258_968_1139_318_0.jpg)
+
+ Figure 3: Generic keyword-based query strategy used to compile the corpus for annotation by the Subject Matter Experts in the fields of medicine, biology, and chemistry, towards creating a benchmark set for the task.
+
+ There are several criteria an algorithm can use to pick the best samples for annotation, such as uncertainty, query-by-committee, or bagging and boosting (Olsson, 2009). For this task, we used uncertainty sampling, which is one of the most popular methods and is considered to be very efficient (Shen et al., 2017). In uncertainty sampling, the algorithm picks for annotation the sample from the unlabeled pool about whose prediction it is least confident. This resulted in a much smaller but more informative data set for our task. The deep active learning algorithm utilized to compile the most useful such set for CORA is described in Algorithm 1. We first fine-tuned BioBERT on a small expert-curated seed set (~1,600 data points) and measured its accuracy on the test set. Secondly, the algorithm enters the active learning loop, where unlabeled data samples are picked using the uncertainty method and human validators are asked to provide labels for them. The model is then further fine-tuned on the data points where the classifier's label contradicts the human label. The accuracy in line 12 of the algorithm measures how certain the model is on the unlabeled samples. This process continues until the maximum number of iterations is reached or the active learning algorithm suggests that no further training is required (i.e., the desired certainty is achieved). We also created a separate test set to evaluate the performance of the CORA machine learning model. Both the training and test sets were manually annotated by in-house subject matter experts (Afzal et al., 2020). Table 1 describes the statistics of the benchmark set, for both the training and the test subsets. Figure 5 shows the incremental performance of the classifier on the test set as the number of labeled samples in the training set increases during the active learning process.
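
The uncertainty-based selection step can be sketched in a few lines; in this toy version a plain dictionary of scores stands in for the BioBERT prediction probabilities (an assumption for illustration), and the least-confident sample is the one whose positive-class probability lies closest to 0.5:

```python
def least_confident(unlabeled, predict_proba):
    """Return the document whose positive-class probability is closest
    to 0.5, i.e., the sample the classifier is least certain about."""
    return min(unlabeled, key=lambda doc: abs(predict_proba(doc) - 0.5))

# Stand-in probabilities; in CORA these would come from the fine-tuned
# BioBERT classifier over the unlabeled pool.
probs = {"doc_a": 0.97, "doc_b": 0.52, "doc_c": 0.08}
pick = least_confident(probs.keys(), probs.get)
print(pick)  # doc_b, whose probability 0.52 is nearest to 0.5
```

Documents with confident predictions (0.97, 0.08) are skipped; the ambiguous one is sent to the human validators, which is what makes the resulting training set small but informative.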
+
+ ![01963de6-47a6-751f-bc58-f075bd73d49b_4_261_180_1136_385_0.jpg](images/01963de6-47a6-751f-bc58-f075bd73d49b_4_261_180_1136_385_0.jpg)
+
+ Figure 4: High-level description of the machine learning model used in CORA to filter for relevancy the articles originally retrieved by the generic query strategy.
+
+ ## 4 The CORA COVID-19 Relevancy Algorithm
+
+ CORA aims to encapsulate the information needs described in Section 2, while retaining an optimal balance between precision and recall in the process of retrieving documents relevant to these needs. Precision with respect to the information needs can be addressed by training a machine learning model on the benchmark set described in Section 3. However, CORA needs to start from a much larger set, to also satisfy the requirement that recall is as high as possible; yet such a set needs to minimize the risk of introducing a large number of false positives and totally irrelevant articles.
+
+ In order to achieve this balance, CORA first utilizes the keyword-based strategy illustrated in Figure 3, and then applies a machine learning model to filter out the "non-relevant" articles from this originally wide net that was cast to perform the information retrieval. The machine learning model that CORA uses is a fine-tuning of BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining) (Lee et al., 2020) for the task of learning the "relevant" and "non-relevant" classes from the benchmark set.
+
+ BioBERT is a BERT-based language representation model which is pre-trained on biomedical corpora from PubMed and PMC, as well as on the English Wikipedia and a corpus of books. It has achieved state-of-the-art performance in several NLP tasks on biomedical text, such as named entity recognition and biomedical question answering (Tsatsaronis et al., 2015). The high-level description of the machine learning model used in CORA is illustrated in Figure 4.
+
+ Given the benchmark training set obtained via deep active learning, as illustrated in Algorithm 1, the CORA COVID-19 relevancy algorithm can be described in simple steps, and is illustrated in Algorithm 2. The description of the algorithm covers both the preparation and training steps, as well as the inference step, given an input set of unseen documents ${D}_{\text{test }}$ to be classified as "relevant" or "non-relevant".
+
+ ## 5 Experimental Results and Discussion
+
+ In this section we present the results of the empirical evaluation on the produced benchmark set described in Table 1. The numbers reported throughout the section refer to the performance of the tested models on the test (unseen) subset of the benchmark document collection. In all cases, precision, recall, and F1-score are reported for both the "relevant" and "non-relevant" classes. Section 5.1 measures the performance of two flavors of CORA: one that utilizes a BioBERT fine-tuning which favors precision, and one that favors recall. We distinguish the evaluation of CORA on this set from that of other classification algorithms, as the created benchmark set has been produced using deep active learning on the BioBERT model and has therefore included examples that are selected to help the fine-tuning of BioBERT specifically. Nevertheless, for completeness and scientific clarity, and in order to illustrate the potential value of this set for utilization by other methods, we report in Section 5.2 the performance of several mainstream machine learning models.
+
+ ![01963de6-47a6-751f-bc58-f075bd73d49b_5_197_172_1283_422_0.jpg](images/01963de6-47a6-751f-bc58-f075bd73d49b_5_197_172_1283_422_0.jpg)
+
+ Figure 5: BioBERT fine-tuning learning curves for the "relevant" (Figure 5a) and "non-relevant" (Figure 5b) classes.
+
+ ### 5.1 CORA Evaluation
+
+ The evaluation of CORA on the test subset focuses on measuring the performance of the BioBERT fine-tuning. We have fine-tuned two variants of BioBERT in CORA. The first variant focuses on maximizing the precision on the positive class ("relevant"), while the second focuses more on recall. The difference between these two variants is achieved by a grid search over the classification threshold. The results of the evaluation are reported in Table 2. As the numbers in the table suggest, both variants of BioBERT result in a high F1-score, equal to or greater than 90% for the "relevant" class. The performance for the "non-relevant" class, which is also the majority class in the test set, is even higher, at 96%. The difference between the two variants is small and, given the volume of the test set, appears to be statistically insignificant for the precision on the positive class, but significant for the recall.
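
The threshold grid search behind the two variants can be illustrated with a small self-contained sketch; the scores and gold labels below are invented, but show how raising the decision threshold trades recall for precision:

```python
def precision_recall(scores, labels, threshold):
    """Precision and recall for the positive class at a given threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

# Invented classifier scores and gold labels (1 = "relevant").
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30]
labels = [1, 1, 0, 1, 0, 0]

# A low threshold favors recall; a high threshold favors precision.
print(precision_recall(scores, labels, 0.5))  # (0.75, 1.0)
print(precision_recall(scores, labels, 0.7))  # precision 1.0, recall 2/3
```

Sweeping the threshold over a grid and keeping the value that maximizes the target metric yields the precision-favored and recall-favored variants respectively.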
+
+ The high F1-score for both variants and classes, especially given the imbalance between the two classes in the test set, which simulates the actual forward flow of articles in the population, indicates that the model has successfully learned to distinguish between "relevant" and "non-relevant" COVID-19 articles. Given also that the inclusion and exclusion criteria for the labeling of the set encapsulate the information needs of the expert users, we can argue that the proposed algorithm manages to filter the relevant COVID-19 scientific literature in an acceptable manner, with an F1-score equal to or greater than 90%.
+
+ As the CORA algorithm utilizes both a keyword-based search and a machine learning model for the final filtering, it is important to highlight the differences in performance when the machine learning model is not used. This way, we have an indication of the contribution that the model brings to CORA, as well as the ability to compare different query strategies individually. For this purpose, in Table 3 we present the performance of the two keyword search strategies discussed in this paper, namely the CORD-19 keyword query (first row) and the extended keyword query that we have included in CORA (second row), on the test set that we have created, focusing on the "relevant" class. We also present the effect of applying the machine learning model as the final filtering step on the results of the extended keyword query (third row), in essence reporting the overall CORA performance discussed in Table 2.
+
+ Comparing the first two rows of Table 3 we can see an expected result: the CORD-19 keyword query provides lower recall than the extended keyword query, with the benefit of better precision. As the extended keyword query was designed to capture holistically all the information needs illustrated in Figure 3, and therefore has many more keywords, it returns a much larger number of articles, harming precision but covering the information needs of the experts much better. The third row shows the great advantage of adding the trained machine learning model to filter that set; with a loss in recall of 4 p.p., yet still remaining very high at 94%, the model also manages to filter out many false positives coming from the extended query, boosting precision by 14 p.p., from 74% to 88%. The effect of applying the whole CORA algorithm eventually becomes fully visible by looking at the differences in the F1-scores as well: the addition of the machine learning model contributes 5 additional p.p. over the CORD-19 keyword query approach, and 7 additional p.p. over the extended keyword query approach; in the former case the higher F1 contribution is attributed to both increased precision and recall, while in the latter it comes primarily from a very large boost in precision.
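
The F1-scores in these tables are simply the harmonic mean of the reported precision and recall; a quick arithmetic check against the values reported for the "relevant" class:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# CORA (extended query + fine-tuned BioBERT), "relevant" class, Table 3.
print(round(f1(0.88, 0.94), 2))  # 0.91
# Extended keyword query alone: high recall, lower precision.
print(round(f1(0.74, 0.98), 2))  # 0.84
```

Both values match the F1 column of Table 3, confirming the internal consistency of the reported figures.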
+
+ Algorithm 1: CORA-ACTIVE-LEARNING
+
+ ---
+
+ Input: A document collection $D$ of scientific articles; a small labelled training set ${L}_{{t}_{0}} \subset D$ , with $L = $ ["relevant", "non-relevant"]; ${t}_{\max }$ as the maximum number of iterations; ${acc}$ as the desired accuracy of the model
+
+ Output: The final training set ${L}_{{t}_{i}}$ after ${t}_{\max }$ iterations, or once the desired accuracy ${acc}$ is achieved
+
+ $i = 0$
+ ${U}_{{t}_{i}} \leftarrow D \smallsetminus {L}_{{t}_{i}}$
+ train classifier ${f}_{{t}_{i}}$ on ${L}_{{t}_{i}}$
+ measure $\operatorname{acc}\left( {f}_{{t}_{i}}\right)$
+ while $i \leq {t}_{\max }$ and $\operatorname{acc}\left( {f}_{{t}_{i}}\right) \leq {acc}$ do
+     pick instance ${x}_{i} \in {U}_{{t}_{i}}$ based on uncertainty sampling
+     annotate ${x}_{i}$ with $L$
+     ${L}_{{t}_{i + 1}} \leftarrow {L}_{{t}_{i}} \cup {x}_{i}$
+     ${U}_{{t}_{i + 1}} \leftarrow {U}_{{t}_{i}} \smallsetminus {x}_{i}$
+     $i \leftarrow i + 1$
+     train classifier ${f}_{{t}_{i}}$ on ${L}_{{t}_{i}}$
+     measure $\operatorname{acc}\left( {f}_{{t}_{i}}\right)$
+ end
+ return ${L}_{{t}_{i}}$
+
+ ---
+
+ <table><tr><td/><td>Relevant</td><td>Non-Relevant</td><td>Total</td></tr><tr><td>Training set</td><td>3296</td><td>5920</td><td>9216</td></tr><tr><td>Test set</td><td>324</td><td>910</td><td>1234</td></tr></table>
+
+ Table 1: CORA training and test set.
+
+ ### 5.2 Evaluation of other Classification Algorithms
+
+ One of the potential drawbacks of a data set generated through active learning is that it is primarily biased towards the preferences of the model used in the loop (i.e., the base learner) and the peculiarities of the task. It has been questioned whether such a data set can be used effectively by a machine learning algorithm different from the one used as the base learner (Olsson, 2009). Therefore, a direct comparison between BioBERT and other classifiers trained on the same set would not be fair, since the training set was generated through an active learning system with BioBERT as the base learner.
+
+ Algorithm 2: CORA-ALGORITHM
+
+ ---
+
+ Input: A document set ${D}_{\text{test }}$ of unseen scientific articles
+
+ Output: A list of classification labels ${L}_{{D}_{\text{test }}}$ from $L = $ ["relevant", "non-relevant"]
+
+ if classifier ${f}_{{L}_{i}}$ not initialized then
+     /* refer to Algorithm 1 */
+     ${f}_{{L}_{i}} \leftarrow$ fine-tune BioBERT on ${L}_{i}$
+ for $j \leftarrow 1$ to $\left| {D}_{\text{test }}\right|$ do
+     if ${D}_{\text{test }}\left\lbrack j\right\rbrack$ does not satisfy the CORA query then
+         /* refer to the query illustrated in Figure 3 */
+         ${L}_{{D}_{\text{test }}}\left\lbrack j\right\rbrack \leftarrow$ "non-relevant"
+     else
+         ${L}_{{D}_{\text{test }}}\left\lbrack j\right\rbrack \leftarrow L\left( {{f}_{{L}_{i}}\left( {{D}_{\text{test }}\left\lbrack j\right\rbrack }\right) }\right)$
+ end
+ return ${L}_{{D}_{\text{test }}}$
+
+ ---
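
The inference loop of Algorithm 2 amounts to a two-stage filter; a minimal Python sketch, in which both the keyword predicate and the classifier are hypothetical stand-ins for the real CORA query and the fine-tuned BioBERT model:

```python
def cora_classify(docs, satisfies_query, classifier):
    """Two-stage CORA inference: documents failing the keyword query are
    labelled "non-relevant" immediately; the rest go to the classifier."""
    return [
        classifier(doc) if satisfies_query(doc) else "non-relevant"
        for doc in docs
    ]

# Hypothetical stand-ins (assumptions for illustration only).
def satisfies(doc):
    return "covid" in doc.lower()

def stub_model(doc):
    return "relevant" if "treatment" in doc else "non-relevant"

docs = [
    "COVID-19 treatment trial results",   # passes query, model: relevant
    "COVID-19 impact on stock markets",   # passes query, model: non-relevant
    "A study of influenza epidemiology",  # fails query outright
]
print(cora_classify(docs, satisfies, stub_model))
# ['relevant', 'non-relevant', 'non-relevant']
```

The design point this captures is that the cheap, high-recall keyword gate runs first, so the expensive classifier only sees documents that already match the extended query.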
+
+ However, in order to illustrate that the data set captured the underlying characteristics of the data based on our relevancy inclusion and exclusion criteria, we trained and evaluated several other mainstream machine learning classifiers, namely Support Vector Machines (SVM), XGBoost, Logistic Regression, and Naive Bayes.
+
+ The results of these classifiers on the same test set are presented in Table 4. The best performance from this set of classifiers was achieved by XGBoost, with a reported precision of 85%, recall of 95%, and an F1-score of 89% for the "Relevant" class. This performance is very close to that of CORA's BioBERT, suggesting that the same set can be very useful for training other classifiers as well, despite the fact that the set was created with a bias to help BioBERT resolve the uncertainty between the two classes.
+
+ ## 6 Conclusions and Future Work
+
+ Following the outbreak of the COVID-19 pandemic early in 2020, the scientific community, industry, and governments around the world joined forces to combat the spread of the disease, and to identify efficient treatment methods as well as vaccine solutions against the 2019-nCoV virus. Efficient and reliable communication of information, including the latest scientific advancements in the form of peer-reviewed published articles, has proven to pose great challenges, primarily the lack of fast and accurate ways to focus only on the core COVID-19 scientific papers and filter out the secondary impact articles.
+
+ <table><tr><td colspan="2">BioBERT fine-tuned models</td><td>Precision</td><td>Recall</td><td>F1-Score</td></tr><tr><td rowspan="2">Precision Favored</td><td>Non-Relevant</td><td>0.97</td><td>0.96</td><td>0.96</td></tr><tr><td>Relevant</td><td>0.89</td><td>0.91</td><td>0.90</td></tr><tr><td rowspan="2">Recall Favored</td><td>Non-Relevant</td><td>0.98</td><td>0.95</td><td>0.96</td></tr><tr><td>Relevant</td><td>0.88</td><td>0.94</td><td>0.91</td></tr></table>
+
+ Table 2: Performance of two fine-tuned BioBERT models on the test set; a precision-favored and a recall-favored version of the model.
+
+ <table><tr><td/><td>Class</td><td>Precision</td><td>Recall</td><td>F1-Score</td></tr><tr><td>CORD-19 Keyword Query</td><td>Relevant</td><td>0.85</td><td>0.86</td><td>0.86</td></tr><tr><td>Extended Keyword Query</td><td>Relevant</td><td>0.74</td><td>0.98</td><td>0.84</td></tr><tr><td>BioBERT fine-tuned</td><td>Relevant</td><td>0.88</td><td>0.94</td><td>0.91</td></tr></table>
+
+ Table 3: Performance of keyword queries and the fine-tuned BioBERT model on the test set.
+
+ In this paper we presented CORA, an algorithmic solution that filters the relevant scientific papers, saving time for the experts who combat the disease and allowing them to focus only on the primary impact information. The contribution of this work is multi-fold: (i) we present a framework of inclusion and exclusion criteria that may be used as guidelines to annotate corpora of scientific publications, towards building benchmark datasets for the purpose of developing and tuning COVID-19 relevancy systems; the criteria encapsulate the information needs of experts across medicine, biology, chemistry, and bioinformatics, in order to combat the pandemic efficiently, (ii) we applied a simple, yet efficient deep active learning approach to compile such a benchmark set with the help of subject matter experts for the hand curation of the labels; the approach utilized the fine-tuning of BioBERT as a base classifier, and we demonstrated that the produced set is also very meaningful for training other classifiers, (iii) we introduced the CORA algorithmic framework for filtering the relevant scientific literature; CORA combines an extensive keyword-based query, to initialize a large pool of potentially relevant documents and maximize recall, with a trained fine-tuned BioBERT model, to retain only the relevant articles from this pool, and (iv) we demonstrated via an experimental evaluation on the benchmark set that the CORA algorithm can achieve a 96% F1-score on detecting the non-relevant documents and 91% on detecting the relevant documents, making CORA a satisfactory solution for production settings of this exercise.
+
+ As future work, we plan to experiment further with novel machine learning models, e.g., ALBERT (Lan et al., 2019) and ELECTRA (Clark et al., 2020), which have shown great promise on the GLUE leaderboard${}^{10}$, as well as with alternative active learning approaches, e.g., batch-aware methods (Chen and Krause, 2013), to further improve this performance. More importantly, understanding that the terminology around the COVID-19 literature evolves fast over time, that new terms appear constantly, and that the vocabulary is shifting focus towards the names of new promising targets, compounds, or characterizations of symptoms and treatment options, we will focus on enriching CORA with a novel mechanism for adapting its keyword-based query over time. By extracting new keywords from the recent scientific literature, the CORA keyword-based query can be enhanced automatically with new terminology. In this manner, the original pool of fetched documents can still satisfy the requirement of very high recall, as it is fetched by a query that follows the vocabulary trends of the published scientific literature on COVID-19. Additionally, to help further with the information overload issue, we plan to introduce domain-specific targeted labels for different user groups (e.g., clinicians, bioinformaticians, chemists), allowing any COVID-19 relevant literature to be filtered according to domain-specific information needs.
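The planned query-enrichment step can be sketched with a simple frequency-based term extractor. This is an illustrative assumption, not CORA's implementation: the function name and toy corpus below are hypothetical, and a production pipeline would likely add TF-IDF scoring and a stop-word list.

```python
import re
from collections import Counter

def extract_new_terms(abstracts, existing_terms, top_k=5, min_len=4):
    """Count candidate terms in recent abstracts and return the most
    frequent ones missing from the current query vocabulary."""
    existing = {t.lower() for t in existing_terms}
    counts = Counter()
    for text in abstracts:
        for tok in re.findall(r"[A-Za-z][A-Za-z0-9-]+", text.lower()):
            if len(tok) >= min_len and tok not in existing:
                counts[tok] += 1
    return [term for term, _ in counts.most_common(top_k)]

# Hypothetical "recent" abstracts and the current query vocabulary:
recent = [
    "Remdesivir shows activity against SARS-CoV-2 in trials",
    "Remdesivir and dexamethasone treatment outcomes",
]
query_terms = ["COVID-19", "Coronavirus", "SARS-COV"]
print(extract_new_terms(recent, query_terms))
```

On this toy corpus the extractor surfaces `remdesivir` and `sars-cov-2` as candidate additions to the query; common words such as `shows` would be removed by a stop-word list in practice.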
304
+
305
+ ---
306
+
307
+ ${}^{10}$ https://gluebenchmark.com/leaderboard
308
+
309
+ ---
310
+
311
+ <table><tr><td colspan="2"/><td>Precision</td><td>Recall</td><td>F1-Score</td></tr><tr><td rowspan="2">SVM-Linear</td><td>Non-Relevant</td><td>0.90</td><td>0.74</td><td>0.81</td></tr><tr><td>Relevant</td><td>0.52</td><td>0.78</td><td>0.62</td></tr><tr><td rowspan="2">XGBoost</td><td>Non-Relevant</td><td>0.98</td><td>0.94</td><td>0.96</td></tr><tr><td>Relevant</td><td>0.85</td><td>0.95</td><td>0.89</td></tr><tr><td rowspan="2">Logistic Regression</td><td>Non-Relevant</td><td>0.92</td><td>0.80</td><td>0.86</td></tr><tr><td>Relevant</td><td>0.60</td><td>0.81</td><td>0.69</td></tr><tr><td rowspan="2">Naive Bayes</td><td>Non-Relevant</td><td>0.95</td><td>0.74</td><td>0.83</td></tr><tr><td>Relevant</td><td>0.55</td><td>0.89</td><td>0.68</td></tr></table>
312
+
313
+ Table 4: Performance of Support Vector Machines (SVM), XGBoost, Logistic Regression, and Naive Bayes in the CORA test set.
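As a rough illustration of the baseline setup behind Table 4, a minimal multinomial Naive Bayes over bag-of-words counts can be written from scratch. The feature pipeline and toy data here are our assumptions, not the authors' exact configuration.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes training: per-class word counts,
    class priors, and the shared vocabulary."""
    word_counts = defaultdict(Counter)   # class -> word -> count
    class_counts = Counter(labels)
    for text, y in zip(docs, labels):
        word_counts[y].update(text.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, class_counts, vocab

def predict_nb(model, text):
    """Pick the class maximizing log P(class) + sum log P(word|class),
    with add-one (Laplace) smoothing."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for c, n in class_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[c][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = ["covid vaccine trial results", "covid treatment efficacy study",
        "stock market reaction to pandemic", "tourism losses during pandemic"]
labels = ["relevant", "relevant", "non-relevant", "non-relevant"]
model = train_nb(docs, labels)
print(predict_nb(model, "vaccine efficacy trial"))  # relevant
```

The reported baselines (SVM, XGBoost, Logistic Regression) would replace `train_nb`/`predict_nb` over the same labeled set, typically on TF-IDF features.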
314
+
315
+ ## References
316
+
317
+ Zubair Afzal, Vikrant Yadav, Olga Fedorova, Vaishnavi Kandala, Janneke van de Loo, Saber A. Akhondi, Pascal Coupet, and George Tsatsaronis. 2020. COVID-19 relevancy algorithm data set for identification of core scientific articles. Mendeley Data.
318
+
319
+ Yuxin Chen and Andreas Krause. 2013. Near-optimal batch mode active learning and adaptive submodular optimization. In Proceedings of the 30th International Conference on Machine Learning (ICML), JMLR W&CP 28(1):160-168.
320
+
321
+ Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
322
+
323
+ Ksenia Konyushkova, Raphael Sznitman, and Pascal Fua. 2017. Learning active learning from data. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 4225-4235.
324
+
325
+ Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. CoRR, abs/1909.11942.
326
+
327
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
328
+
329
+ Fredrik Olsson. 2009. A literature survey of active machine learning in the context of natural language processing.
330
+
331
+ Yanyao Shen, Hyokun Yun, Zachary C Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. arXiv preprint arXiv:1707.05928.
332
+
333
+ George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artières, Axel-Cyrille Ngonga Ngomo, Norman Heino, Éric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinform., 16:138:1-138:28.
334
+
335
+ Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, William Merrill, Paul Mooney, Dewey Murdick, Devvret Rishi, Jerry Sheehan, Zhihong Shen, Brandon Stilson, Alex D. Wade, Kuansan Wang, Chris Wilhelm, Boya Xie, Douglas Raymond, Daniel S. Weld, Oren Etzioni, and Sebastian Kohlmeier. 2020. CORD-19: The COVID-19 Open Research Dataset. CoRR, abs/2004.10706.
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/4nEHDnoLAmK/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,356 @@
1
+ § CORA: A DEEP ACTIVE LEARNING COVID-19 RELEVANCY ALGORITHM TO IDENTIFY CORE SCIENTIFIC ARTICLES
2
+
3
+ Zubair Afzal
4
+
5
+ Elsevier, The Netherlands
6
+
7
+ m.afzal.1@elsevier.com
8
+
9
+ Vikrant Yadav
10
+
11
+ Elsevier, The Netherlands
12
+
13
+ v.yadav@elsevier.com
14
+
15
+ Olga Fedorova
16
+
17
+ Elsevier, The Netherlands
18
+
19
+ o.fedorova@elsevier.com
20
+
21
+ Vaishnavi Kandala
22
+
23
+ Elsevier, The Netherlands
24
+
25
+ v.kandala@elsevier.com
26
+
27
+ Janneke van de Loo
28
+
29
+ Elsevier, The Netherlands
30
+
31
+ j.loo.1@elsevier.com
32
+
33
+ Saber A. Akhondi
34
+
35
+ Elsevier, The Netherlands
36
+
37
+ s.akhondi@elsevier.com
38
+
39
+ Pascal Coupet
40
+
41
+ Elsevier, The Netherlands
42
+
43
+ p.coupet@elsevier.com
44
+
45
+ George Tsatsaronis
46
+
47
+ Elsevier, The Netherlands
48
+
49
+ g.tsatsaronis@elsevier.com
50
+
51
+ § ABSTRACT
52
+
53
+ Ever since the COVID-19 pandemic broke out, the academic and scientific research community, as well as industry and governments around the world have joined forces in an unprecedented manner to fight the threat. Clinicians, biologists, chemists, bioinformaticians, nurses, data scientists, and all of the affiliated relevant disciplines have been mobilized to help discover efficient treatments for the infected population, as well as a vaccine solution to prevent further the virus' spread. In this combat against the virus responsible for the pandemic, key for any advancements is the timely, accurate, peer-reviewed, and efficient communication of any novel research findings. In this paper we present a novel framework to address the information need of filtering efficiently the scientific bibliography for relevant literature around COVID-19. The contributions of the paper are summarized in the following: we define and describe the information need that encompasses the major requirements for COVID-19 articles' relevancy, we present and release an expert-curated benchmark set for the task, and we analyze the performance of several state-of-the-art machine learning classifiers that may distinguish the relevant from the non-relevant COVID-19 literature.
54
+
55
+ § 1 INTRODUCTION
56
+
57
+ The COVID-19 pandemic has infected almost 10 million people worldwide, and has left close to 1 million people dead as of mid-September 2020, according to the World Health Organization${}^{1}$. The whole world observes in awe the catastrophe that the pandemic is leaving behind; human lives, economies, and markets have been struck fiercely, as the scientific community and industry, united on all fronts, seek treatment and vaccine solutions against the disease caused by the 2019-nCoV virus.
58
+
59
+ In these times, when scientific advancements are sought and expected rapidly, lies the challenge of efficiently filtering the scientific literature for the most relevant articles that can help clinicians, nurses, biologists, chemists, bioinformaticians, data scientists, and other researchers operating in affiliated disciplines to combat the pandemic. All these research and practice protagonists have many, and heterogeneous, requirements on what constitutes a relevant COVID-19 article; more importantly, they have extremely limited time to scan large volumes of literature.
60
+
61
+ The risks faced in the aforementioned challenge are multiple; first, the information, in the form of scientific articles, needs to be timely. This requires accelerating the whole peer-review and publication process for COVID-19 relevant articles, to enable the fastest possible communication of breaking scientific and clinical results. The information needs to be accurate as well; therefore, without jeopardizing quality, the editors and publishers of scientific content need to have in place fast-track review processes for these important articles. Furthermore, the information needs to be highly relevant for the aforementioned protagonists, who combat the pandemic at the forefront and have limited time for extensive literature reviews given their crucial duties. Last but not least, the challenge is becoming increasingly difficult, given that the volume of the relevant literature on COVID-19 is continuously growing; indicatively, the Elsevier-published articles on COVID-19${}^{2}$ have grown in volume from a few tens of articles per week in March 2020 to almost 1,000 articles per week in June 2020, an increase of approximately two orders of magnitude over a period of four months.
62
+
63
${}^{1}$ https://covid19.who.int/
64
+
65
+ In order to avoid "information choking", the community requires efficient data science solutions and respective initiatives that can help researchers navigate through this volume of information and focus on the most relevant articles based on their information need. Some examples of such initiatives, or enablers thereof, are:
66
+
67
+ * TREC-COVID${}^{3}$, which follows the TREC series, well known in the information retrieval community, for building information retrieval test collections and enabling the development of novel document retrieval algorithms,
68
+
69
+ * data science challenges organized by Kaggle${}^{4}$, e.g., the COVID-19 Open Research Dataset Challenge (CORD-19)${}^{5}$,
70
+
71
+ * public releases of COVID-19 relevant datasets of scientific articles, such as CORD-19${}^{6}$, or full texts made available by PMC${}^{7}$, to which all scientific publishers contribute, and,
72
+
73
+ * publicly available and free-to-use research platforms, where researchers can navigate the COVID-19 and all relevant literature and benefit from advanced text mining and natural language processing solutions, e.g., Elsevier's Coronavirus Research Hub${}^{8}$.
74
+
75
+ The majority of the aforementioned initiatives imply the existence of a COVID-19 scientific article relevancy mechanism that can filter the core literature on the pandemic, to be included in such collections, data science challenges, and platforms. In this paper we present such a framework, namely CORA, and we argue that it may constitute the basis for efficiently surfacing the core COVID-19 literature in a way that addresses the majority of the information needs of the protagonists who fight the pandemic. The contribution of CORA can be summarized as follows: (i) it defines the information need behind relevancy to COVID-19, having ingested the feedback of researchers and professionals in medicine, biology, chemistry, bioinformatics, and data science, (ii) it offers a benchmark set for the task, with labelled "relevant" and "non-relevant" COVID-19 scientific articles, and (iii) it defines an efficient approach that combines search and machine learning to balance optimally between precision and recall for the task. The impact of such an approach is significant; first, it can help scientific publishers and editors flag, early on, submitted articles that are core to COVID-19, and ensure the acceleration of their review and final publication. It can also be used to filter large volumes of scientific literature and retain only the body of literature that is core to COVID-19, which can help accelerate the preparation and production of data science datasets and challenges aiming to address the pandemic. The presented framework is generic, and is described in detail so that it can be reproduced in any environment for these two purposes.
76
+
77
+ In the remainder of the paper we describe the information need of relevancy to COVID-19 (Section 2), the process used to create and validate the benchmark set for training and tuning the approach (Section 3), the details of the CORA framework (Section 4), and the results of various methods, including CORA, on the produced set (Section 5).
78
+
79
+ § 2 COVID-19 INFORMATION NEED FOR RELEVANT SCIENTIFIC LITERATURE
80
+
81
+ One of the largest publicly available datasets for COVID-19, namely CORD-19 (Wang et al., 2020), draws its contents from PubMed Central, bioRxiv, medRxiv, and the World Health Organization (WHO). All the major scientific publishers, such as Elsevier and Springer Nature, contribute to it and have offered every help for its compilation. In the case of WHO, the data can be pulled from a hand-curated database of relevant literature compiled by the organization${}^{9}$. However, in the case of the remaining three sources, a generic keyword query is used on the title, abstract, and body text of the articles, to filter the ones that are included in the collection. The query is shown in Figure 1.
82
+
83
+ ${}^{2}$ All of the Elsevier articles pertaining to COVID-19 are made available to the community: https://www.elsevier.com/connect/coronavirus-information-center
84
+
85
+ ${}^{3}$ https://ir.nist.gov/covidSubmit/
86
+
87
+ ${}^{4}$ https://www.kaggle.com/
+
+ ${}^{5}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge
94
+
95
+ ${}^{6}$ https://allenai.org/data/cord-19
+
+ ${}^{7}$ https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/
98
+
99
+ ${}^{8}$ https://www.elsevier.com/clinical-solutions/coronavirus-research-hub
104
+
105
+ "COVID-19" OR "Coronavirus" OR "Corona virus" OR "2019-nCOV" OR "SARS-COV" OR "MERS-COV" OR "Severe Acute Respiratory Syndrome" OR "Middle East Respiratory Syndrome"
106
+
107
+ Figure 1: The keyword-based query used to retrieve COVID-19 relevant documents for CORD-19 from PubMed Central, bioRxiv, and medRxiv. Papers that match on these keywords in their title, abstract, or full text are included in the dataset.
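The query in Figure 1 can be applied as a simple case-insensitive filter. The helper below is a sketch (the function name and example titles are ours, not from the paper):

```python
import re

# The CORD-19 retrieval terms from Figure 1, compiled into one
# case-insensitive alternation.
CORD19_TERMS = [
    "COVID-19", "Coronavirus", "Corona virus", "2019-nCOV",
    "SARS-COV", "MERS-COV", "Severe Acute Respiratory Syndrome",
    "Middle East Respiratory Syndrome",
]
PATTERN = re.compile("|".join(re.escape(t) for t in CORD19_TERMS),
                     re.IGNORECASE)

def matches_cord19_query(title, abstract="", body=""):
    """True if any query term occurs in the title, abstract, or full text."""
    return bool(PATTERN.search(" ".join([title, abstract, body])))

print(matches_cord19_query("Economic impact of the coronavirus pandemic"))  # True
print(matches_cord19_query("Deep learning for protein folding"))            # False
```

Note that the first title matches even though it is about economics, which is exactly the precision problem discussed next.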
108
+
109
+ The query used for the compilation of CORD-19 includes the fundamental keywords for the pandemic; however, the precision of this query is highly arguable. A scientific article may refer to COVID-19 or any of the coronaviruses for multiple reasons, and often the article is deemed irrelevant by expert doctors, biologists, and chemists. For example, the article could refer to the financial consequences of COVID-19, or to its impact on certain social aspects. It could even refer to COVID-19 merely as the most recent example of a pandemic, without discussing the specific pandemic at all in a scientific, medical, or clinical context. We argue that the expert users who combat the pandemic have an underlying information need that is much more specific than the one expressed by this query, and that there should be much more efficient mechanisms to filter the relevant core COVID-19 articles.
110
+
111
+ As a first step in the creation of CORA, we interviewed experts in the fields of medicine, biology, chemistry, and bioinformatics who combat the pandemic, and attempted to extract their information needs. This resulted in a number of inclusion and exclusion criteria that represent the information need and can be used to compile a benchmark dataset for identifying core COVID-19 articles. The inclusion and exclusion criteria are presented in Figure 2.
112
+
113
+ As illustrated, the protagonists who combat the pandemic are interested exclusively in the diagnosis, treatment, vaccine development, pathology, and virology of COVID-19, as well as in literature about other coronaviruses. Furthermore, the experts are also interested in how hospitals are addressing the pandemic, how health care systems manage it, and in population statistics and demographics of the disease. All experts were explicit that articles related to the impact of the pandemic on areas such as economy, education, transport, etc., are of secondary importance and should not be included in a core scientific COVID-19 collection aimed at aiding the fight against the disease.
114
+
115
+ § 3 PREPARATION OF THE BENCHMARK SET
116
+
117
+ One of the main challenges in any supervised learning task is to have high-quality and high-volume training data for the algorithms to learn optimally. In cases where training sets are not available, one must create a bespoke data set for the task at hand. Creating a valid, accurate, and large data set is a time-consuming and laborious task. Data sets are typically created by manual annotation of data points, e.g., scientific articles in our case, from a pool of randomly selected data points from a population. The size of the training dataset typically depends on factors such as task complexity, resources, time, and budget availability.
118
+
119
+ In order to create a benchmark dataset which includes both "relevant" and "non-relevant" articles according to the criteria illustrated in Figure 2, as a first step we applied the query illustrated in Figure 3 to the forward flow of Elsevier accepted articles over a period of 2 months. This query can be seen as a much more detailed version of the simple, generic keyword-based query illustrated in Figure 1 that was utilized for the compilation of the CORD-19 dataset. For the manual annotation of the documents returned by the query, we used active machine learning, more precisely the general approach described by Konyushkova et al. (2017). Active learning in this case provides an efficient way of selecting the right document sample(s) for labelling: the algorithm picks the examples that are most useful for the machine learning process to reach its full potential. We used BioBERT as the base classifier in the active learning pipeline.
+
+ ${}^{9}$ http://tiny.cc/2n9jrz
+
+ Figure 2: Inclusion and exclusion criteria of scientific information derived by analyzing the information needs of research experts and practitioners who combat the COVID-19 pandemic, from the fields of medicine, biology, chemistry, and bioinformatics.
+
+ Figure 3: Generic keyword-based query strategy used to compile the corpus for annotation by the Subject Matter Experts in the fields of medicine, biology, and chemistry, towards creating a benchmark set for the task.
+
+ There are several criteria an algorithm can use to pick the best samples for annotation, such as uncertainty sampling, query-by-committee, or bagging and boosting (Olsson, 2009). For this task, we used uncertainty sampling, which is one of the popular methods and is considered to be very efficient (Shen et al., 2017). In uncertainty sampling, the algorithm picks for annotation the sample from the unlabeled pool about whose prediction it is least confident. This resulted in a much smaller but more informative data set for our task. The deep active learning algorithm utilized to compile the most useful such set for CORA is described in Algorithm 1. We first fine-tuned BioBERT on a small expert-curated seed set (~1,600 data points) and measured its accuracy on the test set. Secondly, the algorithm enters the active learning loop, where unlabeled data samples are picked using the uncertainty method and human validators are asked to provide labels for them. The model is then further fine-tuned on the data points where the classifier's label contradicts the human label. The accuracy in line 12 of the algorithm measures how certain the model is on the unlabeled samples. This process continues until the maximum number of iterations is reached or the active learning algorithm suggests that no further training is required (i.e., the desired certainty is achieved). We also created a separate test set to evaluate the performance of the CORA machine learning model. Both the training and test sets were manually annotated by in-house subject matter experts (Afzal et al., 2020). Table 1 describes the statistics of the benchmark set, for both the training and the test subsets. Figure 5 shows the incremental performance of the classifier on the test set as the number of labeled samples in the training set increases during the active learning process.
134
+
135
+ Figure 4: High-level description of the machine learning model used in CORA to filter, for relevancy, the articles originally retrieved by the generic query strategy.
138
+
139
+ § 4 THE CORA COVID-19 RELEVANCY ALGORITHM
140
+
141
+ CORA aims to encapsulate the information needs described in Section 2, while retaining an optimal balance between precision and recall in the process of retrieving relevant documents according to these needs. Precision with respect to the information needs can be addressed by training a machine learning model on the benchmark set described in Section 3. However, CORA needs to start from a much larger set, to also satisfy the requirement that recall be as high as possible; yet such a set needs to minimize the risk of introducing a large number of false positives and totally irrelevant articles.
142
+
143
+ In order to achieve this balance, CORA first utilizes the keyword-based strategy illustrated in Figure 3, and then applies a machine learning model to filter out the "non-relevant" articles from this originally wide net cast to perform the information retrieval. The machine learning model that CORA uses is a fine-tuning of BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining) (Lee et al., 2020) for the task of learning the "relevant" and "non-relevant" classes from the benchmark set.
144
+
145
+ BioBERT is a BERT-based language representation model which is pre-trained on biomedical corpora from PubMed and PMC, as well as on the English Wikipedia and a corpus of books. It has achieved state-of-the-art performance in several NLP tasks on biomedical text, such as named entity recognition and biomedical question answering (Tsatsaronis et al., 2015). The high-level description of the machine learning model used in CORA is illustrated in Figure 4.
146
+
147
+ Given the benchmark training set obtained via deep active learning, as illustrated in Algorithm 1, the CORA COVID-19 relevancy can be described in simple steps, and is illustrated in Algorithm 2. The description of the algorithm covers both the preparation and training, as well as the inference steps, given an input set of unseen documents ${D}_{\text{ test }}$ to be classified as "relevant" or "non-relevant".
148
+
149
+ § 5 EXPERIMENTAL RESULTS AND DISCUSSION
150
+
151
+ In this section we present the results of the empirical evaluation on the produced benchmark set described in Table 1. The numbers reported throughout the section refer to the performance of the tested models on the test (unseen) subset of the benchmark document collection. In all cases, precision, recall, and F1-score are reported for both the "relevant" and "non-relevant" classes. Section 5.1 measures the performance of two flavors of CORA: one that utilizes a BioBERT fine-tuning which favors precision, and one that favors recall. We distinguish the evaluation of CORA on this set from that of other classification algorithms, as the benchmark set was produced using deep active learning on the BioBERT model and has therefore included examples selected specifically to help the fine-tuning of BioBERT. Nevertheless, for completeness and scientific clarity, and in order to illustrate the potential value of this set for other methods, we report in Section 5.2 the performance of several mainstream machine learning models.
152
+
153
+ Figure 5: BioBERT fine-tuning learning curves for the "relevant" (Figure 5a) and "non-relevant" (Figure 5b) classes.
156
+
157
+ § 5.1 CORA EVALUATION
158
+
159
+ The evaluation of CORA on the test subset focuses on measuring the performance of the BioBERT fine-tuning. We fine-tuned two variants of BioBERT in CORA. The first variant focuses on maximizing precision on the positive class ("relevant"), while the second focuses more on recall. The difference between the two variants is achieved by a grid search over the classification threshold. The results of the evaluation are reported in Table 2. As the numbers in the table suggest, both variants of BioBERT result in a high F1-score, equal to or greater than 90% for the "relevant" class. The performance for the "non-relevant" class, which is also the majority class in the test set, is even higher, at 96%. The difference between the two variants is small, and, given the volume of the test set, it appears to be statistically insignificant for precision on the positive class, but significant for recall.
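The grid search over the classification threshold that separates the precision-favoring and recall-favoring variants can be sketched as follows, with synthetic probabilities standing in for BioBERT's scores (the function names and toy data are our assumptions):

```python
def precision_recall_at(threshold, probs, labels):
    """Precision/recall for the "relevant" class (label 1) when documents
    with P(relevant) >= threshold are predicted relevant."""
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(1 for p, y in zip(preds, labels) if p and y)
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    fn = sum(1 for p, y in zip(preds, labels) if not p and y)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

def pick_threshold(probs, labels, favor="precision"):
    """Grid-search the decision threshold; raising it trades recall for
    precision, which is how the two CORA variants differ."""
    grid = [i / 20 for i in range(1, 20)]   # 0.05, 0.10, ..., 0.95
    key = 0 if favor == "precision" else 1
    return max(grid, key=lambda t: precision_recall_at(t, probs, labels)[key])

# Synthetic scores standing in for BioBERT probabilities on 5 documents:
probs = [0.9, 0.8, 0.6, 0.4, 0.2]
labels = [1, 1, 0, 1, 0]
print(pick_threshold(probs, labels, favor="precision"))  # 0.65
print(pick_threshold(probs, labels, favor="recall"))     # 0.05
```

On this toy data the precision-favoring variant settles on a high threshold (perfect precision, reduced recall), while the recall-favoring variant accepts everything above a low threshold.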
160
+
161
+ The high F1-score for both variants and classes, especially given the imbalance between the two classes in the test set, which simulates the actual forward flow of articles in the population, indicates that the model has successfully learned to distinguish between "relevant" and "non-relevant" COVID-19 articles. Given also that the inclusion and exclusion criteria for labeling the set encapsulate the information needs of the expert users, we can argue that the proposed algorithm manages to filter the relevant COVID-19 scientific literature in an acceptable manner, with an F1-score equal to or greater than 90%.
162
+
163
+ As the CORA algorithm utilizes both a keyword-based search and a machine learning model for the final filtering, it is important to highlight the differences in performance when the machine learning model is not used. This way, we have an indication of the contribution that the model brings to CORA, as well as the ability to compare different query strategies individually. For this purpose, in Table 3 we present the performance of the two keyword search strategies discussed in this paper, namely the CORD-19 keyword query (first row) and the extended keyword query that we have included in CORA (second row), on the test set that we have created, with the focus on the "relevant" class. We also present the effect of applying the machine learning model as the final filtering step on the results of the extended keyword query (third row), in essence reporting the overall CORA performance discussed in Table 2.
164
+
165
+ Comparing the first two rows of Table 3 we see an expected result: the CORD-19 keyword query provides lower recall than the extended keyword query, with the benefit of better precision. As the extended keyword query was designed to capture holistically all the information needs illustrated in Figure 3, and therefore has many more keywords, it returns a much larger number of articles, harming precision but covering the information needs of the experts much better. The third row shows the great advantage of adding the trained machine learning model to filter that set: with a loss in recall of 4 p.p., still remaining very high at 94%, the model manages to filter out many of the false positives coming from the extended query, boosting precision by 14 p.p., from 74% to 88%. The effect of applying the whole CORA algorithm is eventually made fully visible by looking at the differences in the F1-scores: the addition of the machine learning model contributes 5 additional p.p. over the CORD-19 keyword query approach, and 7 additional p.p. over the extended keyword query approach; in the former case the higher F1 contribution is attributed to both increased precision and recall, while in the latter it comes primarily from a very large boost in precision.
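The F1-scores and percentage-point differences quoted above can be re-derived from the precision/recall pairs in Table 3:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall pairs for the "relevant" class, per row of Table 3:
cord19 = f1(0.85, 0.86)    # CORD-19 keyword query (~0.855, reported as 0.86)
extended = f1(0.74, 0.98)  # extended keyword query
cora = f1(0.88, 0.94)      # extended query + fine-tuned BioBERT

print(round(extended, 2), round(cora, 2))  # 0.84 0.91
print(round((cora - cord19) * 100))        # 5  (p.p. gain over CORD-19 query)
print(round((cora - extended) * 100))      # 7  (p.p. gain over extended query)
```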
166
+
167
+ Algorithm 1: CORAACTIVELEARNING
+
+ Input: A document collection $D$ of scientific articles; a small labelled training set ${L}_{{t}_{0}} \in D$, with $L =$ ["relevant", "non-relevant"]; ${t}_{\max }$ as the maximum number of iterations; ${acc}$ as the desired accuracy of the model
+
+ Output: The final training set ${L}_{t}$ after ${t}_{\max }$ iterations, or achieved accuracy ${acc}$
+
+ 1: $i = 0$
+ 2: ${U}_{{t}_{i}} \leftarrow D \smallsetminus {L}_{{t}_{i}}$
+ 3: train classifier ${f}_{{t}_{i}}$ on ${L}_{{t}_{i}}$
+ 4: measure ${acc}\left( {f}_{{t}_{i}}\right)$
+ 5: while $i \leq {t}_{\max }$ and $\operatorname{acc}\left( {f}_{{t}_{i}}\right) \leq {acc}$ do
+ 6:   pick instance ${x}_{i} \in {U}_{{t}_{i}}$ based on uncertainty sampling
+ 7:   annotate ${x}_{i}$ with $L$
+ 8:   ${L}_{{t}_{i + 1}} \leftarrow {L}_{{t}_{i}} \cup {x}_{i}$
+ 9:   ${U}_{{t}_{i + 1}} \leftarrow {U}_{{t}_{i}} \smallsetminus {x}_{i}$
+ 10:  $i \leftarrow i + 1$
+ 11:  train classifier ${f}_{{t}_{i}}$ on ${L}_{{t}_{i}}$
+ 12:  measure $\operatorname{acc}\left( {f}_{{t}_{i}}\right)$
+ 13: end
+ 14: return ${L}_{{t}_{i}}$
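Algorithm 1 can be made concrete with a toy stand-in for BioBERT fine-tuning. The 1-D threshold "classifier" below is purely illustrative (its names, the sigmoid scorer, and the initial-boundary guess of 4.2 are our assumptions), but the loop structure (least-confidence sampling, human annotation, re-training) mirrors the pseudocode:

```python
import math

def least_confident(clf, unlabeled):
    """Uncertainty sampling: pick the instance whose most likely class
    has the lowest probability (line 6 of Algorithm 1)."""
    return min(unlabeled, key=lambda x: max(clf(x)))

def train(labeled):
    """Stand-in for BioBERT fine-tuning: a 1-D threshold classifier.
    Returns a function mapping an instance to (P(non-rel), P(rel))."""
    pos = [x for x, y in labeled.items() if y]
    neg = [x for x, y in labeled.items() if not y]
    # 4.2 is an arbitrary initial guess before any positive is seen.
    boundary = (min(pos) + max(neg)) / 2 if pos and neg else 4.2
    def clf(x):
        p = 1.0 / (1.0 + math.exp(boundary - x))  # sigmoid around boundary
        return (1.0 - p, p)
    return clf

def active_learning_loop(pool, oracle, t_max=10):
    """Sketch of CORAACTIVELEARNING; `oracle` plays the human annotator
    providing the "relevant" label."""
    unlabeled = list(pool)
    labeled = {x: oracle(x) for x in unlabeled[:2]}  # small seed set
    unlabeled = unlabeled[2:]
    clf = train(labeled)
    for _ in range(t_max):
        if not unlabeled:
            break
        x = least_confident(clf, unlabeled)  # most uncertain instance
        labeled[x] = oracle(x)               # human annotation step
        unlabeled.remove(x)
        clf = train(labeled)                 # re-train on enlarged set
    return labeled, clf

# Toy task: documents are numbers, "relevant" means value > 5.
labeled, clf = active_learning_loop(range(10), oracle=lambda x: x > 5, t_max=4)
print(sorted(labeled))  # [0, 1, 3, 4, 5, 6] — queries cluster near the boundary
```

Even on this toy task the expected behavior of uncertainty sampling is visible: annotation effort concentrates around the decision boundary rather than being spread uniformly over the pool.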
218
+
219
+ <table><tr><td/><td>Relevant</td><td>Non-Relevant</td><td>Total</td></tr><tr><td>Training set</td><td>3296</td><td>5920</td><td>9216</td></tr><tr><td>Test set</td><td>324</td><td>910</td><td>1234</td></tr></table>
+
+ Table 1: CORA training and test set.
232
+
233
+ § 5.2 EVALUATION OF OTHER CLASSIFICATION ALGORITHMS
234
+
235
+ One potential drawback of a data set generated through active learning is that it is primarily biased towards the preferences of the model used in the loop (i.e., the base learner) and the peculiarities of the task. It has been questioned whether such a data set can be used effectively by a machine learning algorithm different from the one used as the base learner (Olsson, 2009). Therefore, a direct comparison between BioBERT and other classifiers trained on the same set would not be fair, since the training set was generated through an active learning system with BioBERT as the base learner.
236
+
237
+ Algorithm 2: CORA ALGORITHM
+
+ Input: A document set ${D}_{\text{test}}$ of unseen scientific articles
+
+ Output: A list of classification labels ${L}_{{D}_{\text{test}}}$ from $L =$ ["relevant", "non-relevant"]
+
+ if classifier ${f}_{{L}_{i}}$ not initialized then
+   /* refer to Algorithm 1 */
+   ${f}_{{L}_{i}} \leftarrow$ fine-tune BioBERT on ${L}_{i}$
+ for $j \leftarrow 1$ to $\left| {D}_{\text{test}}\right|$ do
+   if ${D}_{\text{test}}\left\lbrack j\right\rbrack$ does not satisfy the CORA query then
+     /* refer to the query illustrated in Figure 3 */
+     ${L}_{{D}_{\text{test}}}\left\lbrack j\right\rbrack \leftarrow$ "non-relevant"
+   else
+     ${L}_{{D}_{\text{test}}}\left\lbrack j\right\rbrack \leftarrow L\left( {{f}_{{L}_{i}}\left( {{D}_{\text{test}}\left\lbrack j\right\rbrack }\right) }\right)$
+ end
+ return ${L}_{{D}_{\text{test}}}$
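Algorithm 2's two-stage design (a cheap high-recall keyword query first, with the classifier applied only to the query's hits) can be sketched as follows; `satisfies_query` and `classifier` are stand-in stubs for the Figure 3 query and the fine-tuned BioBERT model:

```python
def cora_classify(docs, satisfies_query, classifier):
    """Two-stage CORA inference (Algorithm 2 sketch).

    satisfies_query : fn(doc) -> bool, the keyword-based query
    classifier      : fn(doc) -> "relevant" | "non-relevant"
    """
    labels = []
    for doc in docs:
        if not satisfies_query(doc):
            # documents missed by the high-recall query are non-relevant
            labels.append("non-relevant")
        else:
            # only query hits are passed to the (more expensive) classifier
            labels.append(classifier(doc))
    return labels
```

The query acts as a recall-oriented pre-filter, so the classifier only has to trade off precision and recall on a much smaller candidate pool.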
274
+
275
+ However, in order to illustrate that the data set captured the underlying characteristics of the data based on our relevancy inclusion and exclusion criteria, we trained and evaluated several other mainstream machine learning classifiers, namely Support Vector Machines (SVM), XGBoost, Logistic Regression, and Naive Bayes.
276
+
277
+ The results of these classifiers on the same test set are presented in Table 4. The best performance from this set of classifiers was achieved by XGBoost, with a reported precision of 85%, recall of 95%, and an F1-score of 89% on the "Relevant" class. This performance is very close to that of CORA's BioBERT, suggesting that the set can be very useful for training other classifiers as well, despite the fact that it was created with a bias toward helping BioBERT resolve the uncertainty between the two classes.
278
+
279
+ § 6 CONCLUSIONS AND FUTURE WORK
280
+
281
+ Following the outbreak of the COVID-19 pandemic early in 2020, the scientific community, industry, and governments around the world joined forces to combat the spread of the disease and to identify efficient treatment methods, as well as vaccine solutions, against the 2019-nCoV virus. Efficient and reliable information communication, including the latest scientific advancements in the form of peer-reviewed published articles, has proven to be a great challenge, primarily due to the lack of fast and accurate ways to focus only on the core COVID-19 scientific papers and filter out the secondary-impact articles.
282
+
283
+ <table><tr><td colspan="2">BioBERT fine-tuned models</td><td>Precision</td><td>Recall</td><td>F1-Score</td></tr><tr><td rowspan="2">Precision Favored</td><td>Non-Relevant</td><td>0.97</td><td>0.96</td><td>0.96</td></tr><tr><td>Relevant</td><td>0.89</td><td>0.91</td><td>0.90</td></tr><tr><td rowspan="2">Recall Favored</td><td>Non-Relevant</td><td>0.98</td><td>0.95</td><td>0.96</td></tr><tr><td>Relevant</td><td>0.88</td><td>0.94</td><td>0.91</td></tr></table>
+
+ Table 2: Performance of two fine-tuned BioBERT models on the test set; a precision-favored and a recall-favored version of the model.
302
+
303
+ <table><tr><td></td><td>Class</td><td>Precision</td><td>Recall</td><td>F1-Score</td></tr><tr><td>CORD-19 Keyword Query</td><td>Relevant</td><td>0.85</td><td>0.86</td><td>0.86</td></tr><tr><td>Extended Keyword Query</td><td>Relevant</td><td>0.74</td><td>0.98</td><td>0.84</td></tr><tr><td>BioBERT fine-tuned</td><td>Relevant</td><td>0.88</td><td>0.94</td><td>0.91</td></tr></table>
+
+ Table 3: Performance of keyword queries and the fine-tuned BioBERT model on the test set.
319
+
320
+ In this paper we presented CORA, an algorithmic solution to filter the relevant scientific papers, save experts' time in combating the disease, and let them focus only on the primary-impact information. The contribution of this work is multi-fold: (i) we present a framework of inclusion and exclusion criteria that may be used as guidelines to annotate corpora of scientific publications, towards building benchmark datasets for developing and tuning COVID-19 relevancy systems; the criteria encapsulate the information needs of experts across medicine, biology, chemistry, and bioinformatics in order to combat the pandemic efficiently; (ii) we applied a simple, yet efficient deep active learning approach to compile such a benchmark set with the help of subject matter experts for the hand curation of the labels; the approach fine-tunes BioBERT as the base classifier, and we demonstrated that the produced set is also very meaningful for training other classifiers; (iii) we introduced the CORA algorithmic framework for filtering the relevant scientific literature; CORA combines an extensive keyword-based query, to initialize a large pool of potentially relevant documents and maximize recall, with a fine-tuned BioBERT model, to retain only the relevant articles from this pool; (iv) we demonstrated via an experimental evaluation on the benchmark set that the CORA algorithm can achieve a 96% F1-score on detecting the non-relevant documents and 91% on detecting the relevant documents, making CORA a satisfactory solution for production settings.
321
+
322
+ As future work, we plan to experiment further with novel machine learning models, e.g., ALBERT (Lan et al., 2019) and ELECTRA (Clark et al., 2020), which have shown great promise on the GLUE leaderboard${}^{10}$, as well as with alternative active learning approaches, e.g., batch-aware methods (Chen and Krause, 2013), in order to improve this performance further. More importantly, understanding that the terminology around the COVID-19 literature evolves fast over time, that new terms appear constantly, and that the vocabulary is shifting focus towards the names of new promising targets, compounds, or characterizations of symptoms and treatment options, we will focus on enriching CORA with a novel adaptation of its keyword-based query over time. By extracting novel keywords from the recent scientific literature, the CORA keyword-based query can be enhanced automatically with new terminology. In this manner, the original pool of fetched documents can still satisfy the requirement of very high recall, as the documents are fetched by a query that follows the vocabulary trends of the published scientific literature on COVID-19. Additionally, to help further with the information-overload issue, we plan to introduce domain-specific targeted labels for different user groups (e.g., clinicians, bioinformaticians, chemists), allowing any COVID-19 relevant literature to be filtered according to domain-specific information needs.
323
+
324
+ ${}^{10}$ https://gluebenchmark.com/leaderboard
325
+
326
+ <table><tr><td colspan="2"></td><td>Precision</td><td>Recall</td><td>F1-Score</td></tr><tr><td rowspan="2">SVM-Linear</td><td>Non-Relevant</td><td>0.90</td><td>0.74</td><td>0.81</td></tr><tr><td>Relevant</td><td>0.52</td><td>0.78</td><td>0.62</td></tr><tr><td rowspan="2">XGBoost</td><td>Non-Relevant</td><td>0.98</td><td>0.94</td><td>0.96</td></tr><tr><td>Relevant</td><td>0.85</td><td>0.95</td><td>0.89</td></tr><tr><td rowspan="2">Logistic Regression</td><td>Non-Relevant</td><td>0.92</td><td>0.80</td><td>0.86</td></tr><tr><td>Relevant</td><td>0.60</td><td>0.81</td><td>0.69</td></tr><tr><td rowspan="2">Naive Bayes</td><td>Non-Relevant</td><td>0.95</td><td>0.74</td><td>0.83</td></tr><tr><td>Relevant</td><td>0.55</td><td>0.89</td><td>0.68</td></tr></table>
+
+ Table 4: Performance of Support Vector Machines (SVM), XGBoost, Logistic Regression, and Naive Bayes on the CORA test set.
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/5DrUl9nn5y/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,210 @@
1
+ # A System for Worldwide COVID-19 Information Aggregation
2
+
3
+ Akiko Aizawa ${}^{3}$ Frederic Bergeron ${}^{1}$ Junjie Chen ${}^{5}$ Fei Cheng ${}^{1}$ Katsuhiko Hayashi ${}^{5}$ Kentaro Inui ${}^{4}$ Hiroyoshi Ito ${}^{6}$ Daisuke Kawahara ${}^{7}$ Masaru Kitsuregawa ${}^{3}$ Hirokazu Kiyomaru ${}^{1}$ Masaki Kobayashi ${}^{6}$ Takashi Kodama ${}^{1}$ Sadao Kurohashi ${}^{1}$ Qianying Liu ${}^{1}$ Masaki Matsubara ${}^{6}$ Yusuke Miyao ${}^{5}$
4
+
5
+ Atsuyuki Morishima ${}^{6}$ Yugo Murawaki ${}^{1}$ Kazumasa Omura ${}^{1}$ Haiyue Song ${}^{1}$ Eiichiro Sumita ${}^{2}$ Shinji Suzuki ${}^{8}$ Ribeka Tanaka ${}^{1}$ Yu Tanaka ${}^{1}$ Masashi Toyoda ${}^{8}$ Nobuhiro Ueda ${}^{1}$ Honai Ueoka ${}^{1}$ Masao Utiyama ${}^{2}$ Ying Zhong ${}^{6}$
+
+ ${}^{1}$ Kyoto University ${}^{2}$ NICT ${}^{3}$ NII ${}^{4}$ Tohoku University ${}^{5}$ The University of Tokyo ${}^{6}$ The University of Tsukuba ${}^{7}$ Waseda University ${}^{8}$ Institute of Industrial Science, the University of Tokyo
6
+
7
+ ## Abstract
8
+
9
+ The global COVID-19 pandemic has made the public pay close attention to related news covering various domains, such as sanitation, treatment, and effects on education. Meanwhile, the COVID-19 situation differs greatly among countries (e.g., in policies and in the development of the epidemic), and thus citizens are also interested in news from foreign countries. We build a system for worldwide COVID-19 information aggregation${}^{1}$ containing reliable articles from 10 regions in 7 languages, sorted by topic. Our reliable COVID-19 related website dataset, collected through crowdsourcing, ensures the quality of the articles. A neural machine translation module translates articles in other languages into Japanese and English. A BERT-based topic classifier trained on an article-topic pair dataset helps users find the information they are interested in efficiently by putting articles into different categories.
10
+
11
+ ## 1 Introduction
12
+
13
+ Due to the global COVID-19 epidemic and its rapid changes, citizens are highly interested in the latest news, which covers various domains, including directly related news such as treatment and sanitation policies, but also side effects on education, the economy, and so on. Meanwhile, citizens now pay extra attention to news from around the world, not only because the planet has been brought together by the pandemic, but also because they can learn from the news of other countries and obtain first-hand information. For example, the epidemic outbreak in Korea occurred one month earlier than in Japan; Japanese citizens could have prepared better for the epidemic if they had obtained more information from Korea. Citizens could learn from Asian countries about the effectiveness of masks before local official guidance. Universities can learn how to arrange virtual courses from the experience of other countries. Thus, a citizen-friendly international news system with topic detection would be helpful.
14
+
15
+ There are three challenges for building such a system compared with systems focusing on one language and one topic (Dong et al., 2020; Thorlund et al., 2020):
16
+
17
+ - The reliability of news sources.
18
+
19
+ - Translation quality to the local language.
20
+
21
+ - Topic classification for efficient searching.
22
+
23
+ The interface and the construction process of the worldwide COVID-19 information aggregation system are shown in Figure 1. We first build a robust multilingual collection of reliable websites via crowdsourcing with native workers. We crawl news articles based on this collection and filter out the irrelevant ones. A high-quality machine translation system is then used to translate the articles into the local languages (i.e., Japanese and English). The translated news is grouped into the corresponding topics by a BERT-based topic classifier. Our classifier achieves an F-score of 0.84 when classifying whether an article is about COVID-19 and outperforms a keyword-based model by a large margin. In the end, all the translated and topic-labeled news is presented via a user-friendly web interface.
24
+
25
+ ## 2 Methodology
26
+
27
+ We present the pipeline for building the worldwide COVID-19 information aggregation system, focusing on the three solutions to the challenges.
28
+
29
+ ---
30
+
31
+ The authors are in alphabetical order.
32
+
33
+ ${}^{1}$ https://lotus.kuee.kyoto-u.ac.jp/NLPforCOVID-19/en
34
+
35
+ ---
36
+
37
+ ![01963dde-0cf0-76e4-b5e6-9263130ca493_1_201_171_1208_397_0.jpg](images/01963dde-0cf0-76e4-b5e6-9263130ca493_1_201_171_1208_397_0.jpg)
38
+
39
+ Figure 1: The construction process and the interface of the system.
40
+
41
+ <table><tr><td>Website</td><td>Country</td><td>Primary</td><td>Reason</td><td>Topics</td></tr><tr><td>www.cdc.gov</td><td>US</td><td>True</td><td>The site is a government website, specifically the Center for Disease Control.</td><td>infection status prevention and emergency declaration symptoms, medical treatment and tests</td></tr><tr><td>www.covid19-yamanaka.com</td><td>Japan</td><td>False</td><td>Shinya Yamanaka is a famous medical researcher and his insights about COVID-19 are reliable.</td><td>prevention and emergency declaration</td></tr><tr><td>www.internazionale.it</td><td>Italy</td><td>False</td><td>This website collects and translates articles from news agencies and magazines from all over the world. Has up-to-date news, but also long-form analysis articles. Most of my deeper information comes from here.</td><td>infection status economics and welfare prevention and emergency declaration school and online classes</td></tr><tr><td>covid.saude.gov.br</td><td>Brazil</td><td>True</td><td>This site is the goverment web site.</td><td>infection status</td></tr></table>
42
+
43
+ Table 1: Crowdworkers give trusted websites that they use to obtain COVID-19 related information, together with the reasons for choosing each website and the kind of information they obtain from it.
44
+
45
+ ### 2.1 Reliable Website Collection
46
+
47
+ To avoid rumors and obtain high-quality, reliable information, it is essential to limit the information sources. Since we aim to create a multilingual system, the first challenge is to obtain a list of reliable information providers from different countries and in different languages.
48
+
49
+ Crowdsourcing is known to be efficient for creating high-quality datasets (Behnke et al., 2018). To collect the list of reliable websites of a specific country, we use multiple crowdsourcing services (e.g., Crowd4U${}^{2}$, Amazon Mechanical Turk${}^{3}$, Yahoo! Crowdsourcing${}^{4}$, Tencent wenjuan${}^{5}$) and restrict the workers' nationality, because we assume that local citizens of each country know the reliable websites in their country. The workers not only suggest websites they think are reliable, but must also justify their choices and give a list of topics the websites address, similar to constructing support for rumor detection (Gorrell et al., 2019; Derczynski et al., 2017).
50
+
51
+ We selected eight countries of interest: India, the United States, Italy, Japan, Spain, France, Germany, and Brazil. For other countries or regions, such as China and Korea, reliable websites are provided by international students from these areas.
52
+
53
+ We treat official news from the governments as primary information sources and reliable newspapers as secondary information sources. We counted how many times each website was mentioned by the crowdworkers and found that the primary information sources tend to rank in the top three in each country, so we mainly crawl articles from primary sources.
54
+
55
+ Table 1 shows examples of the crowdsourcing results. The workers provide websites, indicating for each one whether it is a primary or a secondary source, the reasons for choosing this particular website, and the topics addressed by the website. These topics are selected from a list of eight topics (e.g., Infection status, Economics and welfare, School and online classes).
56
+
57
+ ---
58
+
59
+ ${}^{2}$ https://crowd4u.org/
60
+
61
+ ${}^{3}$ https://www.mturk.com/
62
+
63
+ ${}^{4}$ https://crowdsourcing.yahoo.co.jp
64
+
65
+ ${}^{5}$ https://wj.qq.com
66
+
67
+ ---
68
+
69
+ ### 2.2 Crawl, Filter and Translation for Information Localization
70
+
71
+ We crawl articles from the 35 most reliable websites every day by accessing each entry page and recursively following the URLs inside it.
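The crawl described above can be sketched as a breadth-first traversal from each entry page; `fetch` and `extract_links` are hypothetical stand-ins for the real HTTP fetcher and HTML link extractor:

```python
from collections import deque

def crawl_site(entry_url, fetch, extract_links, max_pages=100):
    """Breadth-first crawl of one site: start from the entry page and
    recursively follow in-site links, visiting each page at most once."""
    seen, pages = {entry_url}, []
    queue = deque([entry_url])
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        html = fetch(url)
        pages.append((url, html))
        for link in extract_links(html):
            if link not in seen:                # avoid revisiting pages
                seen.add(link)
                queue.append(link)
    return pages
```

The `max_pages` cap is an illustrative safeguard; a production crawler would also respect robots.txt, rate limits, and same-site restrictions.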
72
+
73
+ The number of crawled web pages is too large and exceeds the translation capacity, so we keep only the most relevant pages by filtering with keywords such as COVID. This lets us focus on pages with a higher probability of being COVID-19 related.
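A minimal sketch of this filtering step; the keyword list here is a hypothetical stand-in for the one actually used:

```python
# Hypothetical COVID-19 keyword list; the real system uses its own set.
KEYWORDS = ("covid", "coronavirus", "sars-cov-2", "pandemic")

def is_candidate(page_text):
    """Keep a crawled page only if it mentions a COVID-19 keyword."""
    text = page_text.lower()
    return any(k in text for k in KEYWORDS)

def filter_pages(pages):
    # Reduce the crawl to pages likely to be COVID-19 related,
    # so the downstream translation capacity is not exceeded.
    return [p for p in pages if is_candidate(p)]
```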
74
+
75
+ We use the neural machine translation model TexTra${}^{6}$, which uses the self-attention mechanism (Bahdanau et al., 2015; Vaswani et al., 2017). The translation system provides high-quality translation of news articles from multiple languages into Japanese and English. The translation capacity is approximately 3,000 articles per day.
76
+
77
+ ### 2.3 Topic Classification
78
+
79
+ To perform topic classification, we first collect the dataset via crowdsourcing. The topic labels are annotated to a subset of articles. Then we train a topic-classification model to label further articles automatically.
80
+
81
+ #### 2.3.1 Crowdsourcing Annotation for Topic Classification
82
+
83
+ After the translation stage, all articles are available in Japanese and English; we then apply crowdsourcing annotation to label the articles with topics. As shown in Figure 2, the crowdworkers first check the content of the page and give four labels to the article: whether it is related to COVID-19, whether it is helpful, whether the translated text is fluent, and the topics of the article.
84
+
85
+ Each article is assigned to 10 crowdworkers from Yahoo! Crowdsourcing, and we set a threshold of 50% for each binary question, i.e., if more than 5 workers think the article is related to COVID-19, we label the article as related. We post this crowdsourcing task twice a week and obtain 20K article-topic pairs each time.
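The aggregation rule can be sketched as a strict-majority vote per binary question (positive only when strictly more than half of the workers agree):

```python
def aggregate(answers, threshold=0.5):
    """Aggregate binary crowd answers: positive only if strictly more than
    `threshold` of the workers answered True (here, 6+ of 10 workers)."""
    return sum(answers) > threshold * len(answers)

def label_article(worker_answers):
    # worker_answers: dict mapping each question to its list of 10 booleans
    return {q: aggregate(a) for q, a in worker_answers.items()}
```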
86
+
87
+ #### 2.3.2 Automatic Topic Classifier
88
+
89
+ The pretrained language model BERT (Devlin et al., 2019) shows reliable performance on many NLP tasks with limited annotated data, including document classification (Adhikari et al., 2019; Sun et al., 2019). We use a pretrained BERT model in a feature-based manner (Lee et al., 2019), where the encoder weights are kept frozen, and train a classifier on the articles labeled through crowdsourcing. The BERT-based topic classifier can then label other pages.
90
+
91
+ ![01963dde-0cf0-76e4-b5e6-9263130ca493_2_967_170_371_524_0.jpg](images/01963dde-0cf0-76e4-b5e6-9263130ca493_2_967_170_371_524_0.jpg)
92
+
93
+ Figure 2: A sample of crowdsourcing annotation.
94
+
95
+ <table><tr><td># Country</td><td>Questionnaire</td><td>Reliable sites</td></tr><tr><td>India</td><td>122</td><td>67</td></tr><tr><td>US</td><td>106</td><td>77</td></tr><tr><td>Italy</td><td>104</td><td>68</td></tr><tr><td>Japan</td><td>102</td><td>49</td></tr><tr><td>Spain</td><td>126</td><td>90</td></tr><tr><td>France</td><td>127</td><td>71</td></tr><tr><td>Germany</td><td>106</td><td>61</td></tr><tr><td>Brazil</td><td>115</td><td>67</td></tr><tr><td>Total</td><td>908</td><td>550</td></tr></table>
96
+
97
+ Table 2: Statistics of the number of questionnaires and reliable sites of each country.
98
+
99
+ <table><tr><td>Country</td><td>Article with topic label</td></tr><tr><td>France</td><td>31K</td></tr><tr><td>America</td><td>6K</td></tr><tr><td>Japan</td><td>5K</td></tr><tr><td>China</td><td>8K</td></tr><tr><td>International</td><td>3K</td></tr><tr><td>Spain</td><td>1K</td></tr><tr><td>India</td><td>2K</td></tr><tr><td>Germany</td><td>1K</td></tr><tr><td>Total</td><td>57K</td></tr></table>
100
+
101
+ Table 3: Statistics of the article-topic dataset constructed by crowdsourcing.
102
+
103
+ We also compare it with a keyword-based baseline method, in which we set keywords for each topic and find exact matches.
104
+
105
+ ---
106
+
107
+ ${}^{6}$ https://mt-auto-minhon-mlt.ucri.jgn-x.jp/
108
+
109
+ ---
110
+
111
+ <table><tr><td rowspan="2">Task</td><td colspan="3">Keyword-based model</td><td colspan="3">BERT-based model</td></tr><tr><td>Precision</td><td>Recall</td><td>F-score</td><td>Precision</td><td>Recall</td><td>F-score</td></tr><tr><td>Is about COVID-19</td><td>0.36</td><td>1.00</td><td>0.54</td><td>0.82</td><td>0.87</td><td>0.84</td></tr><tr><td>Topic: Infection status</td><td>0.09</td><td>0.53</td><td>0.16</td><td>0.43</td><td>0.81</td><td>0.56</td></tr><tr><td>Topic: Prevention</td><td>0.05</td><td>0.73</td><td>0.10</td><td>0.19</td><td>0.73</td><td>0.30</td></tr><tr><td>Topic: Medical information</td><td>0.17</td><td>0.70</td><td>0.27</td><td>0.27</td><td>0.91</td><td>0.41</td></tr><tr><td>Topic: Economic</td><td>0.06</td><td>0.36</td><td>0.10</td><td>0.14</td><td>0.84</td><td>0.24</td></tr><tr><td>Topic: Education</td><td>0.06</td><td>1.00</td><td>0.11</td><td>0.05</td><td>0.60</td><td>0.09</td></tr><tr><td>Topic: Art and Sport</td><td>0.06</td><td>0.41</td><td>0.10</td><td>0.08</td><td>0.94</td><td>0.14</td></tr><tr><td>Topic: Others</td><td>0.52</td><td>0.07</td><td>0.13</td><td>0.87</td><td>0.79</td><td>0.83</td></tr></table>
112
+
113
+ Table 4: Topic classification results. Each line stands for one task. We use F-score to evaluate.
114
+
115
+ <table><tr><td>Topic</td><td>Positive</td><td>Negative</td><td>Positive percentage</td></tr><tr><td>Is about COVID-19</td><td>24361</td><td>32265</td><td>43.0%</td></tr><tr><td>Topic: Infection status</td><td>6664</td><td>49962</td><td>11.8%</td></tr><tr><td>Topic: Prevention</td><td>2533</td><td>54093</td><td>4.5%</td></tr><tr><td>Topic: Medical information</td><td>5075</td><td>51551</td><td>9.0%</td></tr><tr><td>Topic: Economic</td><td>2066</td><td>54560</td><td>3.6%</td></tr><tr><td>Topic: Education</td><td>173</td><td>56453</td><td>0.3%</td></tr><tr><td>Topic: Art and Sport</td><td>657</td><td>55969</td><td>1.2%</td></tr><tr><td>Topic: Others</td><td>37331</td><td>19295</td><td>65.9%</td></tr></table>
116
+
117
+ Table 5: The number of positive and negative samples for each topic.
118
+
119
+ ## 3 Results
120
+
121
+ In this section we report statistics of the reliable website collection, topic classification results, the translation quality evaluation, and statistics of the interface.
122
+
123
+ ### 3.1 Reliable Website Collection
124
+
125
+ As shown in Table 2, we received 908 questionnaire results from 8 countries, covering a total of 550 websites. Rumors are rampant in this era, and the reliable website dataset can help people protect themselves from COVID-19 and avoid trusting rumors about it.
126
+
127
+ ### 3.2 Topic Classification
128
+
129
+ We compared the BERT-based model with the keyword-based baseline model on the topic classification task.
130
+
131
+ For the keyword-based method, there are in total 76 selected keywords for the different topics, such as COVID, Remote work, and Social distance.
132
+
133
+ For the BERT-based method, we use the pre-trained BERT-LARGE model with Whole Word Masking (WWM). We add one linear layer after the BERT encoder without fine-tuning the encoder. For every article, we take the hidden state of the ending symbol of each sentence as the sentence embedding and perform mean and max pooling of all sentence embeddings. The input of the linear layer is the concatenation of mean and max pooling embeddings and the output is a binary label.
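The pooling described above can be sketched independently of BERT itself; each sentence embedding is just a vector (the frozen encoder's hidden state at the sentence-end symbol):

```python
def pool_article(sentence_embeddings):
    """Build the article feature used by the linear classifier:
    the concatenation of the element-wise mean and max over all
    sentence embeddings (here, plain lists of floats)."""
    n = len(sentence_embeddings)
    dims = zip(*sentence_embeddings)   # transpose: one tuple per dimension
    mean_pool, max_pool = [], []
    for dim in dims:
        mean_pool.append(sum(dim) / n)
        max_pool.append(max(dim))
    return mean_pool + max_pool        # concatenated feature vector

def linear_score(features, weights, bias=0.0):
    # one linear layer on top of the frozen encoder's pooled features
    return sum(f * w for f, w in zip(features, weights)) + bias
```

In the real system the linear layer is trained per topic on the crowdsourced labels; the weights here are placeholders.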
134
+
135
+ <table><tr><td>Topic</td><td>Krippendorff's alpha</td></tr><tr><td>Is about COVID-19</td><td>0.927</td></tr><tr><td>Topic: Infection status</td><td>0.898</td></tr><tr><td>Topic: Prevention</td><td>0.851</td></tr><tr><td>Topic: Medical information</td><td>0.867</td></tr><tr><td>Topic: Economic</td><td>0.931</td></tr><tr><td>Topic: Education</td><td>0.938</td></tr><tr><td>Topic: Art and Sport</td><td>0.994</td></tr><tr><td>Topic: Others</td><td>0.884</td></tr></table>
136
+
137
+ Table 6: Inter-annotator agreement measured by Krippendorff's alpha for each topic.
138
+
139
+ The article-topic dataset is shown in Table 3; it contains a total of 57K topic-labeled articles from 8 countries. We calculated Krippendorff's alpha${}^{7}$ as an inter-annotator agreement measure for the 57K human-checked articles. As shown in Table 6, the inter-annotator agreement for every topic is larger than 0.8, which is considered high (Krippendorff, 2004), showing that the quality of our dataset is guaranteed.
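For reference, nominal Krippendorff's alpha can be computed directly from the per-article label lists (the paper uses the fast-krippendorff package cited in footnote 7; this is an independent sketch for nominal labels):

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.

    units: one list of labels per article (ratings by multiple annotators);
    units with fewer than two ratings are not pairable and are dropped."""
    units = [u for u in units if len(u) >= 2]
    n = sum(len(u) for u in units)              # total pairable values
    totals = Counter()
    observed = 0.0
    for u in units:
        counts = Counter(u)
        totals.update(counts)
        # ordered pairs of differing labels within this unit
        disagree = sum(counts[a] * counts[b]
                       for a in counts for b in counts if a != b)
        observed += disagree / (len(u) - 1)
    d_o = observed / n                          # observed disagreement
    d_e = sum(totals[a] * totals[b]             # expected disagreement
              for a in totals for b in totals if a != b) / (n * (n - 1))
    return 1.0 - d_o / d_e
```

Alpha is 1 under perfect agreement and can go below 0 when annotators disagree more than chance would predict.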
140
+
141
+ We randomly select 90% of the data as the training set and the remaining 10% as the test set. As shown in Table 4, the BERT-based model outperforms the baseline model on almost all tasks. Our system can reliably classify which articles are related to COVID-19 with 0.84 F1, so the interface can show related news to our users. The topic classifiers achieve relatively satisfactory performance on the medical-related topics (i.e., Infection status, Prevention, and Medical information). The generally high recall ensures that most topic-relevant articles are returned. When presenting results to end users, the topic predictions are filtered by the "Is about COVID-19" classifier to ensure the final precision. Meanwhile, for some topics, such as Art and Sport and Education, the performance of the current system is still limited.
142
+
143
+ We further analyze the balance of the dataset by counting the positive and negative samples for each topic. As shown in Table 5, the labels of several topics are imbalanced, for example, Education, Art and Sport, and Economic. Analyzed together with the BERT topic classifier results, we find that performance is poor for such imbalanced topics. For more frequent and balanced topics, such as Infection status and Medical information, the F-scores are relatively higher.
144
+
145
+ ### 3.3 Machine Translation Evaluation
146
+
147
+ For the evaluation of the translated text, we conducted a human evaluation through crowdsourcing. As shown in Table 7, 61.7% of the articles are fluent in the target language, and the inter-annotator agreement is high (>0.8).
148
+
149
+ <table><tr><td>Fluent</td><td>Not fluent</td><td>Krippendorff's alpha</td></tr><tr><td>15036</td><td>9235</td><td>0.877</td></tr></table>
150
+
151
+ Table 7: Human evaluation of translated Japanese articles related to COVID-19 and inter-annotator agreement.
152
+
153
+ ### 3.4 Statistics of the System
154
+
155
+ <table><tr><td>Country</td><td>Raw(↑/day)</td><td>Translated</td><td>With topics</td></tr><tr><td>France</td><td>774K(8K)</td><td>74K</td><td>9K</td></tr><tr><td>US</td><td>69K(730)</td><td>15K</td><td>2K</td></tr><tr><td>Japan</td><td>25K(260)</td><td>5K</td><td>2K</td></tr><tr><td>Europe</td><td>50K(510)</td><td>2K</td><td>50</td></tr><tr><td>China</td><td>38K(400)</td><td>3K</td><td>342</td></tr><tr><td>Int.</td><td>45K(470)</td><td>3K</td><td>263</td></tr><tr><td>Korea</td><td>16K(170)</td><td>260</td><td>71</td></tr><tr><td>Spain</td><td>4K(40)</td><td>370</td><td>36</td></tr><tr><td>India</td><td>14K(150)</td><td>860</td><td>66</td></tr><tr><td>Germany</td><td>16K(170)</td><td>8K</td><td>6K</td></tr><tr><td>Total</td><td>1.05M(11K)</td><td>110K</td><td>18K</td></tr></table>
156
+
157
+ Table 8: Statistics of the growing database of the system.
158
+
159
+ The details of the system database are shown in Table 8. There are in total 1.05M web pages, of which 110K have been translated into Japanese and 18K carry topic labels. The dataset is still growing by approximately 11K pages per day.
160
+
161
+ We collected the number of visits to the website through Google Analytics: there are about 200 to 500 visits per month and more than 100 visitors per day at the peak, suggesting that the system is actually being taken up by the public.
162
+
163
+ ## 4 Conclusion
164
+
165
+ We built a system for worldwide COVID-19 information aggregation by combining crowdsourcing, crawling, machine translation, and a topic classifier, which provides reliable, comprehensive, and up-to-date information from around the world. Meanwhile, we proposed an effective approach to annotate large cross-lingual news topic datasets with high inter-annotator agreement, which can potentially help the NLP community enrich the solutions for preventing COVID-19. The contextual BERT-based classification models achieve reasonable performance considering the imbalance of the topic labels. We hope this work attracts future research interest to COVID-19 related tasks.
166
+
167
+ ## References
168
+
169
+ Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. Docbert: Bert for document classification. arXiv preprint arXiv:1904.08398.
170
+
171
+ ---
172
+
173
+ ${}^{7}$ https://github.com/pln-fing-udelar/fast-krippendorff
174
+
175
+ ---
176
+
177
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the Third International Conference on Learning Representations (ICLR), San Diego, USA.
178
+
179
+ Maximiliana Behnke, Antonio Valerio Miceli Barone, Rico Sennrich, Vilelmini Sosoni, Thanasis Naskos, Eirini Takoulidou, Maria Stasimioti, Menno van Zaanen, Sheila Castilho, Federico Gaspari, Panayota Georgakopoulou, Valia Kordoni, Markus Egg, and Katia Lida Kermanidis. 2018. Improving Machine Translation of Educational Content via Crowdsourcing. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC), Miyazaki, Japan. European Language Resources Association.
180
+
181
+ Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. SemEval-2017 task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 69-76, Vancouver, Canada. Association for Computational Linguistics.
182
+
183
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
184
+
185
+ Ensheng Dong, Hongru Du, and Lauren Gardner. 2020. An interactive web-based dashboard to track covid- 19 in real time. The Lancet infectious diseases, 20(5):533-534.
+
+ Genevieve Gorrell, Elena Kochkina, Maria Liakata, Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, and Leon Derczynski. 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 845-854, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
+
+ Klaus Krippendorff. 2004. Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research, 30(3):411-433.
+
+ Jaejun Lee, Raphael Tang, and Jimmy Lin. 2019. What would Elsa do? Freezing layers during transformer fine-tuning. arXiv preprint arXiv:1911.03090.
+
+ Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classification? In China National Conference on Chinese Computational Linguistics, pages 194-206. Springer.
+
+ Kristian Thorlund, Louis Dron, Jay Park, Grace Hsu, Jamie I Forrest, and Edward J Mills. 2020. A real-time dashboard of clinical trials for COVID-19. The Lancet Digital Health, 2(6):e286-e287.
+
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.
+
+ ![01963dde-0cf0-76e4-b5e6-9263130ca493_6_196_464_1265_1269_0.jpg](images/01963dde-0cf0-76e4-b5e6-9263130ca493_6_196_464_1265_1269_0.jpg)
+
+ Figure 3: The interface of the worldwide COVID-19 information aggregation system in English.
+
+ <table><tr><td>Website</td><td>Country</td><td>Primary</td><td>Reason</td><td>Topics</td></tr><tr><td>www.washingtonpost.com/coronavirus</td><td>US</td><td>True</td><td>I visit this URL daily and I trust them.</td><td>prevention and emergency declaration symptoms, medical treatment and tests economics and welfare</td></tr><tr><td>www.osha.gov/SLTC/covid-19</td><td>US</td><td>False</td><td>It helps employees and employers understand the work environment better as far a covid19 goes and how to stay healthy, safe and follow guidleines correctly.</td><td>prevention and emergency declaration symptoms, medical treatment and tests economics and welfare</td></tr><tr><td>www.mhlw.go.jp/stf/seisakunitsuite /bunya/0000164708_00001.html</td><td>Japan</td><td>True</td><td>The government provides reliable information.</td><td>infection status prevention and emergency declaration</td></tr><tr><td>vdata.nikkei.com/newsgraphics /coronavirus-world-map</td><td>Japan</td><td>False</td><td>I can learn the worldwide information through it.</td><td>infection status</td></tr><tr><td>www.ncbi.nlm.nih.gov/pubmed</td><td>Italy</td><td>True</td><td>A scientific papers site</td><td>symptoms, medical treatment and tests</td></tr><tr><td>www.ansa.it /canale_saluteebenessere/</td><td>Italy</td><td>False</td><td>This is an official news outlet that gathers its news from trusted sources, it's a constantly updated website which a lot of Italians rely on. 
It also has exclusive reports and interviews to important people.</td><td>infection status prevention and emergency declaration symptoms, medical treatment and tests school and online classes about rumours</td></tr><tr><td>www.gouvernement.fr /info-coronavirus</td><td>France</td><td>True</td><td>This is French government website.</td><td>prevention and emergency declaration symptoms, medical treatment and tests</td></tr><tr><td>aatishb.com/covidtrends</td><td>France</td><td>False</td><td>The code is open source and the data come from reliable source</td><td>infection status</td></tr><tr><td>cnecovid.isciii.es</td><td>Spain</td><td>True</td><td>This site is the goverment web site</td><td>infection status prevention and emergency declaration symptoms, medical treatment and tests</td></tr><tr><td>www.marca.com/tiramillas /actualidad/2020/05/14/ 5ebcc0cee2704ec4bb8b4623.html</td><td>Spain</td><td>False</td><td>It is a sport magazine but they constantly update all the information in Spain about coronavirus</td><td>infection status entertainment and sports</td></tr><tr><td>www.charite.de</td><td>Germany</td><td>True</td><td>its the page of the hospital that mainly works with the german goverment</td><td>infection status</td></tr><tr><td>www.spiegel.de /thema/coronavirus/</td><td>Germany</td><td>False</td><td>one of Germanys oldest weekly news Paper, quality journalism and fact checking</td><td>infection status economics and welfare entertainment and sports</td></tr><tr><td>coronavirus.curitiba.pr.gov.br</td><td>Brazil</td><td>True</td><td>This is my city's COVID page, with daily updates on infection status and the running of the city, I can trust them because they only relay official information.</td><td>infection status prevention and emergency declaration</td></tr><tr><td>www.uol.com.br</td><td>Brazil</td><td>False</td><td>Its the biggest news site here in my country</td><td>prevention and emergency declaration entertainment and 
sports</td></tr><tr><td>www.mohfw.gov.in</td><td>India</td><td>True</td><td>This is the official page of the ministry of health and family welfare of government of India and is therefore reliable.</td><td>infection status prevention and emergency declaration symptoms, medical treatment and tests economics and welfare school and online classes entertainment and sports</td></tr><tr><td>www.thehindu.com</td><td>India</td><td>False</td><td>This is one of the trusted News paper</td><td>infection status prevention and emergency declaration economics and welfare school and online classes</td></tr></table>
+
+ Table 9: Sample crowdsourcing results of the reliable-website collection.
+
+ <table><tr><td>Country</td><td>Website</td><td>Mentioned times</td></tr><tr><td rowspan="3">United States</td><td>www.cdc.gov/coronavirus/2019-ncov</td><td>14</td></tr><tr><td>www.usa.gov/coronavirus</td><td>6</td></tr><tr><td>www.nytimes.com/news-event/coronavirus</td><td>4</td></tr><tr><td rowspan="3">Japan</td><td>hazard.yahoo.co.jp/article/20200207</td><td>17</td></tr><tr><td>www.mhlw.go.jp/...</td><td>13</td></tr><tr><td>corona.feedal.com</td><td>6</td></tr><tr><td rowspan="3">Italy</td><td>www.salute.gov.it/nuovocoronavirus</td><td>11</td></tr><tr><td>www.salute.gov.it/portale/home.html</td><td>4</td></tr><tr><td>www.worldometers.info/coronavirus</td><td>3</td></tr><tr><td rowspan="3">France</td><td>www.gouvernement.fr/info-coronavirus</td><td>28</td></tr><tr><td>www.who.int/fr/emergencies/diseases/novel-coronavirus-2019</td><td>7</td></tr><tr><td>www.lemonde.fr/coronavirus-2019-ncov/</td><td>6</td></tr><tr><td rowspan="3">Spain</td><td>www.usa.gov/coronavirus</td><td>9</td></tr><tr><td>www.mscbs.gob.es/profesionales/...</td><td>7</td></tr><tr><td>covid19.gob.es</td><td>4</td></tr><tr><td rowspan="3">Germany</td><td>www.rki.de/DE/Home/homepage_node.html</td><td>7</td></tr><tr><td>www.bundesgesundheitsministerium.de/coronavirus.html</td><td>6</td></tr><tr><td>interaktiv.morgenpost.de/corona-virus-karte-infektionen-deutschland-weltweit</td><td>5</td></tr><tr><td rowspan="3">Brazil</td><td>covid.saude.gov.br</td><td>21</td></tr><tr><td>g1.globo.com/bemestar/coronavirus</td><td>11</td></tr><tr><td>coronavirus.saude.gov.br</td><td>9</td></tr><tr><td rowspan="3">India</td><td>www.worldometers.info/coronavirus</td><td>11</td></tr><tr><td>www.mohfw.gov.in</td><td>10</td></tr><tr><td>www.mygov.in/covid-19/?cbps=1</td><td>10</td></tr></table>
+
+ Table 10: Top three websites mentioned by crowdworkers in each country.
+
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/5DrUl9nn5y/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,362 @@
+ § A SYSTEM FOR WORLDWIDE COVID-19 INFORMATION AGGREGATION
+
+ Akiko Aizawa ${}^{3}$ Frederic Bergeron ${}^{1}$ Junjie Chen ${}^{5}$ Fei Cheng ${}^{1}$ Katsuhiko Hayashi ${}^{5}$ Kentaro Inui ${}^{4}$ Hiroyoshi Ito ${}^{6}$ Daisuke Kawahara ${}^{7}$ Masaru Kitsuregawa ${}^{3}$ Hirokazu Kiyomaru ${}^{1}$ Masaki Kobayashi ${}^{6}$ Takashi Kodama ${}^{1}$ Sadao Kurohashi ${}^{1}$ Qianying Liu ${}^{1}$ Masaki Matsubara ${}^{6}$ Yusuke Miyao ${}^{5}$
+
+ Atsuyuki Morishima ${}^{6}$ Yugo Murawaki ${}^{1}$ Kazumasa Omura ${}^{1}$ Haiyue Song ${}^{1}$ Eiichiro Sumita ${}^{2}$ Shinji Suzuki ${}^{8}$ Ribeka Tanaka ${}^{1}$ Yu Tanaka ${}^{1}$ Masashi Toyoda ${}^{8}$ Nobuhiro Ueda ${}^{1}$ Honai Ueoka ${}^{1}$ Masao Utiyama ${}^{2}$ Ying Zhong ${}^{6}$ ${}^{1}$ Kyoto University ${}^{2}$ NICT ${}^{3}$ NII ${}^{4}$ Tohoku University ${}^{5}$ The University of Tokyo ${}^{6}$ The University of Tsukuba ${}^{7}$ Waseda University ${}^{8}$ Institute of Industrial Science, the University of Tokyo
+
+ § ABSTRACT
+
+ The global pandemic of COVID-19 has made the public pay close attention to related news, covering various domains such as sanitation, treatment, and effects on education. Meanwhile, COVID-19 conditions differ greatly among countries (e.g., policies and the development of the epidemic), so citizens are also interested in news from foreign countries. We build a system for worldwide COVID-19 information aggregation${}^{1}$ containing reliable articles from 10 regions in 7 languages, sorted by topic. Our dataset of reliable COVID-19-related websites, collected through crowdsourcing, ensures the quality of the articles. A neural machine translation module translates articles in other languages into Japanese and English. A BERT-based topic classifier trained on an article-topic pair dataset helps users find the information they are interested in efficiently by sorting articles into different categories.
+
+ § 1 INTRODUCTION
+
+ Due to the global COVID-19 epidemic and its rapid changes, citizens are highly interested in learning about the latest news, which covers various domains, including directly related news such as treatment and sanitation policies as well as side effects on education, the economy, and so on. Meanwhile, citizens now pay extra attention to global news, not only because the planet has been brought together by the pandemic, but also because they can obtain first-hand information from the news of other countries. For example, the epidemic broke out in Korea about one month earlier than in Japan, so Japanese citizens could have prepared better if they had obtained more information from Korea. Citizens could learn from Asian countries about the effectiveness of masks before local official guidance, and universities can learn how to arrange virtual courses from the experience of other countries. Thus, a citizen-friendly international news system with topic detection would be helpful.
+
+ There are three challenges in building such a system compared with systems focusing on one language and one topic (Dong et al., 2020; Thorlund et al., 2020):
+
+ * The reliability of news sources.
+
+ * Translation quality into the local language.
+
+ * Topic classification for efficient searching.
+
+ The interface and the construction process of the worldwide COVID-19 information aggregation system are shown in Figure 1. We first build a robust multilingual solver for collecting reliable websites via crowdsourcing with native workers. We crawl news articles based on these websites and filter out irrelevant ones. A high-quality machine translation system is then used to translate the articles into the local languages (i.e., Japanese and English). The translated articles are grouped into their corresponding topics by a BERT-based topic classifier. Our classifier achieves a 0.84 F-score when classifying whether an article is about COVID-19 and outperforms a keyword-based model by a large margin. In the end, all translated and topic-labeled news is presented via a user-friendly web interface.
+
+ § 2 METHODOLOGY
+
+ We present the pipeline for building the worldwide COVID-19 information aggregation system, focusing on the three solutions to the challenges.
+
+ The authors are listed in alphabetical order.
+
+ ${}^{1}$ https://lotus.kuee.kyoto-u.ac.jp/NLPforCOVID-19/en
+
+ Figure 1: The construction process and the interface of the system.
+
+ | Website | Country | Primary | Reason | Topics |
+ |---|---|---|---|---|
+ | www.cdc.gov | US | True | The site is a government website, specifically the Center for Disease Control. | infection status; prevention and emergency declaration; symptoms, medical treatment and tests |
+ | www.covid19-yamanaka.com | Japan | False | Shinya Yamanaka is a famous medical researcher and his insights about COVID-19 are reliable. | prevention and emergency declaration |
+ | www.internazionale.it | Italy | False | This website collects and translates articles from news agencies and magazines from all over the world. Has up-to-date news, but also long-form analysis articles. Most of my deeper information comes from here. | infection status; economics and welfare; prevention and emergency declaration; school and online classes |
+ | covid.saude.gov.br | Brazil | True | This site is the goverment web site. | infection status |
+
+ Table 1: Crowdworkers give trusted websites that they use to obtain COVID-19 related information. They also give the reasons for choosing each website and what kind of information they obtain from it.
+
+ § 2.1 RELIABLE WEBSITE COLLECTION
+
+ To avoid rumors and obtain high-quality, reliable information, it is essential to limit the information sources. Since we aim to create a multilingual system, the first challenge is to obtain a list of reliable information providers from different countries and in different languages.
+
+ Crowdsourcing is known to be efficient in creating high-quality datasets (Behnke et al., 2018). To collect the list of reliable websites of a specific country, we use multiple crowdsourcing services (e.g., Crowd4U${}^{2}$, Amazon Mechanical Turk${}^{3}$, Yahoo! Crowdsourcing${}^{4}$, Tencent wenjuan${}^{5}$) and limit the workers' nationality, because we assume that local citizens of each country know the reliable websites in their country. The workers not only suggest websites they think are reliable but must also justify their choices and list the topics these websites address, similar to constructing support for rumor detection (Gorrell et al., 2019; Derczynski et al., 2017).
+
+ We selected eight countries of interest: India, the United States, Italy, Japan, Spain, France, Germany, and Brazil. For other countries or regions such as China and Korea, reliable websites are provided by international students from these areas.
+
+ We treat official news from governments as primary information sources and reliable newspapers as secondary information sources. We counted how many times each website was mentioned by the crowdworkers and found that the primary information sources tend to be ranked in the top three in each country, so we mainly crawl articles from primary sources.
+
+ Table 1 shows examples of the crowdsourcing results. The workers provide websites, indicating for each one whether it is a primary or a secondary source, the reasons for choosing that particular website, and which topics the website addresses. These topics are selected from a list of eight topics (e.g., infection status, economics and welfare, school and online classes).
+
+ ${}^{2}$ https://crowd4u.org/
+
+ ${}^{3}$ https://www.mturk.com/
+
+ ${}^{4}$ https://crowdsourcing.yahoo.co.jp
+
+ ${}^{5}$ https://wj.qq.com
+
+ § 2.2 CRAWL, FILTER AND TRANSLATION FOR INFORMATION LOCALIZATION
+
+ We crawl articles from the 35 most reliable websites every day by accessing the entry page and following the URLs inside it recursively.
+
+ The number of crawled web pages is too large and exceeds our translation capacity, so we keep only the most relevant pages by filtering with keywords such as COVID, focusing on pages with a higher probability of being COVID-19 related.
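The keyword pre-filter described above amounts to a case-insensitive substring match; a minimal sketch follows, where the keyword list and helper names are illustrative assumptions rather than the system's actual configuration.

```python
# Minimal sketch of the keyword pre-filter; the keyword list below is an
# illustrative assumption, not the deployed configuration.
KEYWORDS = ["covid", "coronavirus", "sars-cov-2", "pandemic"]

def is_relevant(page_text: str, keywords=KEYWORDS) -> bool:
    """Keep a page if any keyword appears (case-insensitive)."""
    text = page_text.lower()
    return any(kw in text for kw in keywords)

def filter_pages(pages):
    """Return only pages likely to be COVID-19 related."""
    return [p for p in pages if is_relevant(p)]

pages = [
    "Latest COVID-19 case numbers released today.",
    "Local football results for the weekend.",
]
print(filter_pages(pages))  # keeps only the first page
```

A filter like this trades recall for throughput: it cheaply discards clearly irrelevant pages before the expensive translation step.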
+
+ We use the neural machine translation model TexTra${}^{6}$ with a self-attention mechanism (Bahdanau et al., 2015; Vaswani et al., 2017). The translation system provides high-quality translation of news articles in multiple languages into Japanese and English. The translation capacity is approximately 3,000 articles per day.
+
+ § 2.3 TOPIC CLASSIFICATION
+
+ To perform topic classification, we first collect a dataset via crowdsourcing, annotating topic labels on a subset of articles. We then train a topic classification model to label further articles automatically.
+
+ § 2.3.1 CROWDSOURCING ANNOTATION FOR TOPIC CLASSIFICATION
+
+ All articles are in Japanese or English after the translation stage; we then apply crowdsourcing annotation to label the articles with topics. As shown in Figure 2, the crowdsourcing workers first check the content of the page and give four labels to the article: whether it is related to COVID-19, whether it is helpful, whether the translated text is fluent, and the topics of the article.
+
+ Each article is assigned to 10 crowdworkers from Yahoo! Crowdsourcing, and we set a threshold of 50% for each binary question, i.e., if more than 5 workers think the article is related to COVID-19, we label the article as related. We post this crowdsourcing task twice a week and obtain about 20K article-topic pairs each time.
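The 50% threshold described above is a simple majority vote over the 10 worker judgments. A minimal sketch, where the vote layout is an assumption for illustration:

```python
# Illustrative vote aggregation: an article gets a binary label when
# strictly more than half of its workers vote yes.
def aggregate(votes, threshold=0.5):
    """Return True if the share of positive votes exceeds the threshold."""
    return sum(votes) / len(votes) > threshold

# 10 workers judge "is this article related to COVID-19?"
votes_related = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]    # 7/10 -> related
votes_unrelated = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # 3/10 -> not related
print(aggregate(votes_related), aggregate(votes_unrelated))
```

Note the strict inequality: with 10 workers, exactly 5 positive votes does not pass, matching the "more than 5 workers" rule in the text.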
+
+ § 2.3.2 AUTOMATIC TOPIC CLASSIFIER
+
+ The pretrained language model BERT (Devlin et al., 2019) shows reliable performance on many NLP tasks with limited annotated data, including document classification (Adhikari et al., 2019; Sun et al., 2019). We use a pretrained BERT model in a feature-based manner (Lee et al., 2019), where the encoder weights are kept frozen, and train a classifier using the articles labeled through crowdsourcing. The BERT-based topic classifier can then label other pages.
+
+ Figure 2: A sample of crowdsourcing annotation.
+
+ | Country | Questionnaires | Reliable sites |
+ |---|---|---|
+ | India | 122 | 67 |
+ | US | 106 | 77 |
+ | Italy | 104 | 68 |
+ | Japan | 102 | 49 |
+ | Spain | 126 | 90 |
+ | France | 127 | 71 |
+ | Germany | 106 | 61 |
+ | Brazil | 115 | 67 |
+ | Total | 908 | 550 |
+
+ Table 2: Statistics of the number of questionnaires and reliable sites for each country.
+
+ | Country | Articles with topic label |
+ |---|---|
+ | France | 31K |
+ | America | 6K |
+ | Japan | 5K |
+ | China | 8K |
+ | International | 3K |
+ | Spain | 1K |
+ | India | 2K |
+ | Germany | 1K |
+ | Total | 57K |
+
+ Table 3: Statistics of the article-topic dataset constructed by crowdsourcing.
+
+ We also compare it with a keyword-based baseline method, in which we set keywords for each topic and look for exact matches.
+
+ ${}^{6}$ https://mt-auto-minhon-mlt.ucri.jgn-x.jp/
+
+ | Task | Keyword P | Keyword R | Keyword F | BERT P | BERT R | BERT F |
+ |---|---|---|---|---|---|---|
+ | Is about COVID-19 | 0.36 | 1.00 | 0.54 | 0.82 | 0.87 | 0.84 |
+ | Topic: Infection status | 0.09 | 0.53 | 0.16 | 0.43 | 0.81 | 0.56 |
+ | Topic: Prevention | 0.05 | 0.73 | 0.10 | 0.19 | 0.73 | 0.30 |
+ | Topic: Medical information | 0.17 | 0.70 | 0.27 | 0.27 | 0.91 | 0.41 |
+ | Topic: Economic | 0.06 | 0.36 | 0.10 | 0.14 | 0.84 | 0.24 |
+ | Topic: Education | 0.06 | 1.00 | 0.11 | 0.05 | 0.60 | 0.09 |
+ | Topic: Art and Sport | 0.06 | 0.41 | 0.10 | 0.08 | 0.94 | 0.14 |
+ | Topic: Others | 0.52 | 0.07 | 0.13 | 0.87 | 0.79 | 0.83 |
+
+ Table 4: Topic classification results (precision, recall, and F-score for the keyword-based and BERT-based models). Each row stands for one task; we use F-score to evaluate.
+
+ | Topic | Positive | Negative | Positive percentage |
+ |---|---|---|---|
+ | Is about COVID-19 | 24361 | 32265 | 43.0% |
+ | Topic: Infection status | 6664 | 49962 | 11.8% |
+ | Topic: Prevention | 2533 | 54093 | 4.5% |
+ | Topic: Medical information | 5075 | 51551 | 9.0% |
+ | Topic: Economic | 2066 | 54560 | 3.6% |
+ | Topic: Education | 173 | 56453 | 0.3% |
+ | Topic: Art and Sport | 657 | 55969 | 1.2% |
+ | Topic: Others | 37331 | 19295 | 65.9% |
+
+ Table 5: The number of positive and negative samples for each topic.
+
+ § 3 RESULTS
+
+ In this section we show the statistics of the reliable website collection, the topic classification results, the translation quality evaluation, and statistics of the interface.
+
+ § 3.1 RELIABLE WEBSITE COLLECTION
+
+ As shown in Table 2, we received 908 questionnaire results from the 8 countries, covering 550 websites in total. Rumors are rampant in this era; the reliable-website dataset can help people protect themselves from COVID-19 and avoid trusting rumors about it.
+
+ § 3.2 TOPIC CLASSIFICATION
+
+ We compared the BERT-based model with the keyword-based baseline model on the topic classification task.
+
+ For the keyword-based method, there are 76 selected keywords in total across the different topics, such as COVID, remote work, and social distance.
+
+ For the BERT-based method, we use the pre-trained BERT-LARGE model with Whole Word Masking (WWM). We add one linear layer after the BERT encoder without fine-tuning the encoder. For every article, we take the hidden state of the ending symbol of each sentence as the sentence embedding and perform mean and max pooling over all sentence embeddings. The input of the linear layer is the concatenation of the mean- and max-pooled embeddings, and the output is a binary label.
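The pooling scheme can be illustrated with a small pure-Python sketch, assuming the frozen BERT encoder has already produced one embedding per sentence; the tiny 3-dimensional vectors and placeholder weights below stand in for BERT's hidden states and a trained linear layer.

```python
# Sketch of the classification head: concatenated mean/max pooling over
# sentence embeddings, followed by a single linear layer. Embeddings and
# weights are illustrative placeholders, not trained values.
def mean_max_pool(sentence_embeddings):
    """Concatenate mean pooling and max pooling over sentence embeddings."""
    n, dim = len(sentence_embeddings), len(sentence_embeddings[0])
    mean_pool = [sum(e[d] for e in sentence_embeddings) / n for d in range(dim)]
    max_pool = [max(e[d] for e in sentence_embeddings) for d in range(dim)]
    return mean_pool + max_pool

def linear_binary(features, weights, bias=0.0):
    """Single linear layer producing a binary topic label."""
    logit = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 if logit > 0 else 0

embs = [[0.1, 0.5, -0.2], [0.3, -0.1, 0.4]]   # two sentence embeddings
feats = mean_max_pool(embs)                   # length 6: mean ++ max
label = linear_binary(feats, [0.2, -0.1, 0.4, 0.1, 0.3, -0.2])
print(feats, label)
```

Because the encoder is frozen, only the small linear layer's weights are learned, which keeps training cheap on the limited crowdsourced labels.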
+
+ | Topic | Krippendorff's alpha |
+ |---|---|
+ | Is about COVID-19 | 0.927 |
+ | Topic: Infection status | 0.898 |
+ | Topic: Prevention | 0.851 |
+ | Topic: Medical information | 0.867 |
+ | Topic: Economic | 0.931 |
+ | Topic: Education | 0.938 |
+ | Topic: Art and Sport | 0.994 |
+ | Topic: Others | 0.884 |
+
+ Table 6: Inter-annotator agreement measured by Krippendorff's alpha for each topic.
+
+ The article-topic dataset is shown in Table 3; it contains 57K articles with topic labels from 8 countries. We calculated Krippendorff's alpha${}^{7}$ as an inter-annotator agreement measure for the 57K human-checked articles. As shown in Table 6, the inter-annotator agreement for all topics is larger than 0.8, which is high enough (Krippendorff, 2004), showing that the quality of our dataset is guaranteed.
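For reference, Krippendorff's alpha for nominal data with fully rated units can be computed from the standard coincidence-matrix formulation. The paper does not specify its tooling, so the following is an illustrative implementation under those assumptions (no missing ratings, nominal labels):

```python
# Krippendorff's alpha for nominal data, assuming every unit has at
# least two ratings and no missing values (a simplification of the
# general definition, which also handles missing data).
from collections import Counter

def krippendorff_alpha_nominal(units):
    """units: list of rating lists, one list (len >= 2) per item."""
    coincidences = Counter()  # (c, k) -> coincidence count
    for ratings in units:
        m = len(ratings)
        counts = Counter(ratings)
        for c, n_c in counts.items():
            for k, n_k in counts.items():
                pairs = n_c * (n_k - 1) if c == k else n_c * n_k
                coincidences[(c, k)] += pairs / (m - 1)
    totals = Counter()
    for (c, _k), o in coincidences.items():
        totals[c] += o
    n = sum(totals.values())
    observed = sum(o for (c, k), o in coincidences.items() if c != k)
    expected = sum(totals[c] * totals[k]
                   for c in totals for k in totals if c != k)
    if expected == 0:
        return 1.0  # no variation at all
    return 1.0 - (n - 1) * observed / expected

print(krippendorff_alpha_nominal([[1, 1], [0, 0]]))  # 1.0 for perfect agreement
```

Values near 1 indicate strong agreement; the conventional threshold cited in the text (0.8) follows Krippendorff (2004).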
+
+ We randomly selected 90% of the data as the training set and the remaining 10% as the test set. As shown in Table 4, the BERT-based model outperforms the baseline model on almost all tasks. Our system can reliably classify which articles are related to COVID-19 with an F-score of 0.84, so our interface can show related news to our users. The topic classifiers achieve relatively satisfactory performance on the medical-related topics (i.e., Infection status, Prevention, and Medical information). The generally high recall guarantees that most topic-relevant articles are returned. When presenting to end users, the topic predictions are filtered by the "Is about COVID-19" classifier to ensure the eventual precision. Meanwhile, for some topics such as Art and Sport and Education, the performance of the current system is still limited.
+
+ We further analyze the balance of the dataset by counting the positive and negative samples of each topic. As shown in Table 5, the labels of several topics are imbalanced, for example Education, Art and Sport, and Economic. Analyzed together with the BERT topic classifier results, we found that performance is poor for such imbalanced topics. For more frequent and balanced topics such as Infection status and Medical information, the F-scores are relatively higher.
+
+ § 3.3 MACHINE TRANSLATION EVALUATION
+
+ To evaluate the translated text, we conducted a human evaluation through crowdsourcing. As shown in Table 7, 61.7% of the articles are fluent in the translated language, and the inter-annotator agreement is high enough (>0.8).
+
+ | Fluent | Not fluent | Krippendorff's alpha |
+ |---|---|---|
+ | 15036 | 9235 | 0.877 |
+
+ Table 7: Human evaluation of translated Japanese articles related to COVID-19 and inter-annotator agreement.
+
+ § 3.4 STATISTICS OF THE SYSTEM
+
+ | Country | Raw (↑/day) | Translated | With topics |
+ |---|---|---|---|
+ | France | 774K (8K) | 74K | 9K |
+ | US | 69K (730) | 15K | 2K |
+ | Japan | 25K (260) | 5K | 2K |
+ | Europe | 50K (510) | 2K | 50 |
+ | China | 38K (400) | 3K | 342 |
+ | Int. | 45K (470) | 3K | 263 |
+ | Korea | 16K (170) | 260 | 71 |
+ | Spain | 4K (40) | 370 | 36 |
+ | India | 14K (150) | 860 | 66 |
+ | Germany | 16K (170) | 8K | 6K |
+ | Total | 1.05M (11K) | 110K | 18K |
+
+ Table 8: Statistics of the growing database of the system.
+
+ Details of the system database are shown in Table 8. There are 1.05M web pages in total, with 110K of them translated into Japanese and 18K of them carrying topic labels. The database is still growing by approximately 11K pages per day.
+
+ We collected the number of visits to the website through Google Analytics: there are about 200 to 500 visits per month, with more than 100 visitors per day at the peak, suggesting that the system is actually taken up by the public.
+
+ § 4 CONCLUSION
+
+ We built a system for worldwide COVID-19 information aggregation by combining crowdsourcing, crawling, machine translation, and a topic classifier, which provides reliable, comprehensive, and up-to-date information from around the world. In the meanwhile, we proposed an effective approach to annotating large cross-lingual news topic datasets with high inter-annotator agreement, which can potentially help the NLP community enrich the solutions for fighting COVID-19. The contextual BERT-based classification models achieve reasonable performance considering the imbalance of the topic labels. We hope this work attracts future research interest to COVID-19-related tasks.
papers/EMNLP/EMNLP 2020/EMNLP 2020 Workshop/EMNLP 2020 Workshop NLP-COVID/5fjIxS5Kahh/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,655 @@
+ # Characterizing drug mentions in COVID-19 Twitter Chatter
+
+ Ramya Tekumalla, Juan M Banda
+
+ Department of Computer Science
+
+ Georgia State University
+
+ \{rtekumalla1, jbanda\}@gsu.edu
+
+ ## Abstract
+
+ Since the classification of COVID-19 as a global pandemic, there have been many attempts to treat and contain the virus. Although there is no specific antiviral treatment recommended for COVID-19, there are several drugs that can potentially help with symptoms. In this work, we mined a large Twitter dataset of 424 million tweets of COVID-19 chatter to identify discourse around drug mentions. While seemingly a straightforward task, due to the informal nature of language use on Twitter, we demonstrate the need for machine learning alongside traditional automated methods to aid in this task. By applying these complementary methods, we are able to recover almost 15% additional data, making misspelling handling a necessary pre-processing step when dealing with social media data.
+
+ ## 1 Introduction
+
+ The World Health Organization (WHO) defines the Coronavirus disease (COVID-19) as an infectious disease caused by a newly discovered coronavirus and declared it a pandemic on March 11, 2020 (World Health Organization). As of June 26, 2020, 9,764,997 cases were confirmed worldwide, with 492,807 deaths and 4,917,328 cases recovered. Social media platforms like Twitter and Reddit contain an abundance of text data that can be utilized for research. Over the last decade, Twitter has proven to be a valuable resource for many-to-many crisis communication during disasters (Zou et al., 2018; Earle, 2010; Alam et al., 2018). Recently, several works (Lu, 2020; Sanders et al., 2020; Gao et al., 2020) have provided insights on the treatment options and drug usages for COVID-19.
+
+ We utilized the largest available COVID-19 dataset (Banda et al., 2020), curated using the Social Media Mining Toolkit (SMMT) (Tekumalla and Banda, 2020b). Version 15 of the dataset was utilized for our experiments, since it was the latest released version at the time of the experiments. This dataset consists of tweets related to COVID-19 from January 1, 2020 to June 20, 2020. We automatically tagged ~424 million tweets using a drug dictionary compiled from RxNorm (National Library of Medicine, 2008) with 19,643 terms, validated in (Tekumalla et al., 2020) and (Tekumalla and Banda, 2020a).
+
+ ## 2 Methods
+
+ In order to identify drug-specific tweets related to COVID-19, we applied the drug dictionary to the clean version of the dataset. In the presence of retweets, any signal found would be greatly amplified, and hence only unique tweets must be utilized to identify the signals. The cleaned version of the dataset consists of 104,512,658 unique tweets with no retweets. We use only the English-language tweets, which are 67% (~70 million tweets) of them. The spaCy annotator utility from the Social Media Mining Toolkit (Tekumalla and Banda, 2020b) was utilized to tag the clean tweets. A total of 723,129 tweets were identified as containing one or more drug terms. However, complex medical terms, such as medication and disease names, are often misspelled by users on Twitter (Karimi et al., 2015). Using only the keywords with correct spellings leads to a loss of potentially important data, particularly for terms with difficult spellings. Some misspellings also occur more frequently than others, implying that they are more important for data collection and concept detection. To identify the misspelled drug tweets, we initially identified the top 200 drug terms from the 723,129 tagged drug tweets. Additionally, early in the stages of this work, a preliminary report from a large study of 1,063 patients showed that hospitalized patients who were administered Remdesivir recovered faster than those who got a placebo (Beigel et al., 2020). However, the version of RxNorm we utilized for the original drug dictionary did not have Remdesivir, since it was an investigational drug and was only added to RxNorm in April 2020 (Bulletin, 2020). We manually added this drug term for our analysis.
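Dictionary tagging of this kind can be sketched as a simple token lookup. The real pipeline used the spaCy annotator from SMMT over the full RxNorm dictionary, so the tiny term set and helper below are illustrative only:

```python
import re

# Toy sketch of dictionary-based drug tagging; the three terms are an
# illustrative stand-in for the 19,643-term RxNorm dictionary.
DRUG_TERMS = {"remdesivir", "hydroxychloroquine", "ibuprofen"}

def tag_drugs(tweet: str, terms=DRUG_TERMS):
    """Return the set of dictionary drug terms mentioned in a tweet."""
    tokens = re.findall(r"[a-z0-9]+", tweet.lower())
    return {t for t in tokens if t in terms}

tweets = [
    "Doctors report Remdesivir shortened recovery time.",
    "Stay home and wash your hands!",
]
tagged = [t for t in tweets if tag_drugs(t)]
print(tagged)  # only the first tweet mentions a dictionary term
```

Exact token matching like this is precisely the behavior whose limits the text describes next: any misspelled mention is silently missed.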
90
+
91
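The dictionary tagging step can be sketched in plain Python. Note this is an illustrative stand-in, not the actual SMMT spacy annotator: the regex whole-word matcher and the example tweets below are assumptions for demonstration only.

```python
import re

def tag_tweets(tweets, drug_terms):
    """Return (tweet, matched_terms) pairs for tweets containing
    at least one dictionary term as a whole word (case-insensitive)."""
    patterns = {term: re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
                for term in drug_terms}
    tagged = []
    for tweet in tweets:
        hits = [term for term, pat in patterns.items() if pat.search(tweet)]
        if hits:
            tagged.append((tweet, hits))
    return tagged
```

A tweet is kept only if at least one dictionary term matches, mirroring the filtering from ~70 million English tweets down to 723,129 drug tweets.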
In this work, we demonstrate the tradeoff between generating misspellings, using generated word-embedding misspellings, and auto-correcting spellings at the text level. For this purpose, we employed four different methodologies to acquire additional data. The first methodology involves a machine learning approach called QMisSpell (Sarker and Gonzalez-Hernandez, 2018). The RedMed (Lavertu and Altman, 2019) model's word embeddings were used in QMisSpell to generate misspellings. The RedMed model has an embedding size of 64 dimensions with a window size of 7 and was created using the health-oriented subset of Reddit. QMisSpell relies on a dense vector model learned from large, unlabeled text, which identifies terms semantically close to the original keyword, followed by filtering out terms that are lexically dissimilar beyond a given threshold. The process is recursive and converges when no new terms similar (lexically and semantically) to the original keyword are found. A total of 2,056 unique misspelled terms were generated using QMisSpell. We examined all the misspellings and eliminated several terms (for example, heroes and heroine, generated from the original keyword heroin) that are common English words rather than misspellings of drug terms. After elimination, a total of 1,932 terms were identified as misspelled terms.

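The QMisSpell-style expansion (semantic neighbors filtered by lexical similarity, applied recursively until convergence) can be sketched as follows. The `NEIGHBORS` map is a toy stand-in for real RedMed embedding nearest-neighbor lookups, and the 0.8 threshold is an assumed value, not the tool's actual setting.

```python
from difflib import SequenceMatcher

# Toy stand-in for embedding nearest-neighbor lookups (illustrative only).
NEIGHBORS = {
    "chloroquine": ["cloroquine", "chloroquin", "quinine"],
    "cloroquine": ["chloroquine", "clorquine"],
    "chloroquin": ["chloroquine"],
    "clorquine": ["cloroquine"],
    "quinine": ["chloroquine"],
}

def lexical_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def expand_misspellings(seed: str, threshold: float = 0.8) -> set:
    """Recursively collect semantic neighbors of `seed` that are also
    lexically close to it; stops when no new terms are found."""
    found, frontier = set(), [seed]
    while frontier:
        term = frontier.pop()
        for cand in NEIGHBORS.get(term, []):
            if cand != seed and cand not in found \
               and lexical_similarity(seed, cand) >= threshold:
                found.add(cand)
                frontier.append(cand)
    return found
```

Here `quinine` is a semantic neighbor but fails the lexical filter, which is exactly the step that discards terms like "heroes" for the keyword "heroin".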
The second methodology uses keyboard layout distance to generate misspellings. For each term, each letter is replaced with the closest letters on the QWERTY keyboard. The process is recursive and ceases once it has looped through every letter in the term. For example, the term cocaine has 68 misspelled variants, ranging from xocaine to cocaint. After eliminating common vocabulary and duplicates, a total of 15,754 terms were identified as misspelled terms for this methodology. We text-tagged the clean dataset using the misspelled terms from these methodologies.

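A minimal sketch of this keyboard layout method follows; the `QWERTY_NEIGHBORS` adjacency map below is illustrative and deliberately incomplete, so it generates fewer variants than the 68 reported for cocaine.

```python
# Partial QWERTY adjacency map (illustrative, not exhaustive).
QWERTY_NEIGHBORS = {
    "c": "xv", "o": "ip", "a": "qsz", "i": "uo",
    "n": "bm", "e": "wr", "x": "zc", "t": "ry",
}

def keyboard_misspellings(term: str) -> set:
    """Replace each letter, one position at a time, with its
    adjacent keys on the QWERTY layout."""
    variants = set()
    for pos, ch in enumerate(term):
        for repl in QWERTY_NEIGHBORS.get(ch, ""):
            variants.add(term[:pos] + repl + term[pos + 1:])
    return variants
```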
The third methodology employs a spelling correction module called Symspell (Garbe, 2019), which corrects spelling errors at the text level before text tagging. It is built on the Symmetric Delete spelling correction algorithm, which reduces the complexity of edit candidate generation and dictionary lookup for a given Damerau-Levenshtein distance (Damerau, 1964; Levenshtein, 1966). The method generates terms within a given edit distance (deletes only) from each dictionary term, adds them together with the original term to the dictionary, and then searches that dictionary. This has to be done only once, during a pre-calculation step. For a word of length n, an alphabet size of a, and an edit distance of 1, there will be just n deletions, for a total of n terms at search time. The Symmetric Delete algorithm thus reduces the complexity of edit candidate generation and dictionary lookup by using deletes only, instead of deletes + transposes + replaces + inserts (Norvig, 2007). This methodology requires a frequency dictionary of common terms. Since we extract only English drug-related tweets, we used the default English frequency dictionary provided by Symspell and added a drug frequency dictionary generated from ~60 million clean tweets. Additionally, we manually included the top COVID-19 keywords (Shah, 2020) with their frequencies, since these terms occur in large numbers. The Symspell algorithm corrects each term in a tweet text to the closest matching term in the dictionary; the corrected tweet is then sent to the SMMT spacy utility for tagging. A total of 17,686 misspelled drug terms were generated for the top 200 drug terms using the QMisSpell and keyboard layout methods.

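The symmetric delete idea can be sketched as follows. This is a simplified illustration of the algorithm Symspell implements, not the library's actual API; real Symspell additionally ranks candidates by edit distance and term frequency.

```python
from itertools import combinations

def deletes(term: str, max_edits: int = 1) -> set:
    """All variants of `term` with up to `max_edits` characters removed."""
    out = set()
    for k in range(1, max_edits + 1):
        for keep in combinations(range(len(term)), len(term) - k):
            out.add("".join(term[i] for i in keep))
    return out

def build_index(dictionary) -> dict:
    """One-time pre-calculation: map each delete variant back to
    the dictionary terms that could have produced it."""
    index = {}
    for word in dictionary:
        for variant in deletes(word) | {word}:
            index.setdefault(variant, set()).add(word)
    return index

def lookup(query: str, index: dict, max_edits: int = 1) -> set:
    """Candidate corrections: intersect the query's own delete
    variants with the precomputed index."""
    candidates = set()
    for variant in deletes(query, max_edits) | {query}:
        candidates |= index.get(variant, set())
    return candidates
```

Because both dictionary terms and queries are reduced to delete-only variants, a substitution like "remdesivar" still meets "remdesivir" at their shared variant "remdesivr", without ever enumerating replaces, transposes, or inserts.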
The fourth methodology uses TextBlob (TextBlob), a Python library for processing textual data. It provides a consistent API for common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, spelling correction, and more. TextBlob's spelling correction feature employs Peter Norvig's spelling correction algorithm (Norvig, 2007), which generates a candidate model from simple deletions (remove one letter), transpositions (swap two adjacent letters), replacements (change one letter to another), and insertions (add a letter). This method was applied to each tweet text before tagging. Unfortunately, this implementation is highly inefficient, taking around 60 hours of execution time for every 600,000 tweets. We were only able to run it on 20 days of data, and we only present its results in the computational evaluation table.

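Norvig's one-edit candidate generation, the core of TextBlob's spelling correction, can be sketched as:

```python
import string

def edits1(word: str) -> set:
    """All candidates one edit away from `word`: deletes,
    transposes, replaces, and inserts (Norvig-style)."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)
```

For a word of length n this produces roughly 54n + 25 candidates, which explains why correcting every token of every tweet is so much more expensive than the delete-only Symspell scheme.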
All experiments in this work are presented in the following section and focus on the first three methods outlined.

## 3 Results

Table 1 summarizes the text tagging results for the three experimental setups we tested. The overlap between the misspellings generated by QMisSpell and by the keyboard layout generator is 4.9%, meaning only 33 terms were common between the two misspelling dictionaries. The Spacy + RxNorm Dictionary method tagged a total of 1,483,691 terms. The Spacy on Symspell-corrected text + RxNorm dictionary method tagged an additional 149,260 terms (delta) that the previous method could not. These come mostly from the newly spell-corrected tweet text; note that this difference is not indicative of the whole dataset, as not all of it was spell-corrected.

<table><tr><td>Method</td><td>Total number of terms in dictionary</td><td>Total tagged terms in dataset</td></tr><tr><td>Spacy + RxNorm Dictionary</td><td>19,643</td><td>1,483,691</td></tr><tr><td>Spacy + QMisSpell dictionary (only)</td><td>1,932</td><td>192,619</td></tr><tr><td>Spacy + Keyboard layout generated misspellings dictionary (only)</td><td>15,754</td><td>135,981</td></tr><tr><td>Spacy on Symspell-corrected text + RxNorm Dictionary</td><td>19,643</td><td>1,632,951 **</td></tr></table>

Table 1: Text tagging results. ** Note that we only spell-corrected tweet text for 20 days of the dataset, not the complete set.

Unsurprisingly, hydroxychloroquine was the most tweeted drug found in the dataset, with chloroquine coming in a close second, as shown in Table 2. A total of 1,483,691 terms were tagged from 723,129 tweets. Though there are 19,643 unique drug terms in the dictionary, we show only the 10 most frequently tagged drug terms in Table 2. Interestingly, hydroxychloroquine was not the most misspelled word; chloroquine was, as we can see in Table 3.

<table><tr><td>Drug Name</td><td>Frequency</td><td>Percentage</td></tr><tr><td>hydroxychloroquine</td><td>161,385</td><td>10.88%</td></tr><tr><td>chloroquine</td><td>106,377</td><td>7.17%</td></tr><tr><td>remdesivir</td><td>52,152</td><td>3.52%</td></tr><tr><td>agar</td><td>34,505</td><td>2.33%</td></tr><tr><td>oxygen</td><td>28,906</td><td>1.95%</td></tr><tr><td>zinc</td><td>22,632</td><td>1.53%</td></tr><tr><td>azithromycin</td><td>21,183</td><td>1.43%</td></tr><tr><td>vitamin d</td><td>19,067</td><td>1.29%</td></tr><tr><td>ibuprofen</td><td>14,765</td><td>1.00%</td></tr><tr><td>dettol</td><td>11,660</td><td>0.79%</td></tr></table>

Table 2: Top 10 most frequent drugs found using the drug dictionary, without including misspellings.

In our exploration of misspellings, we had 23 potential different spellings for hydroxychloroquine and 78 possible misspellings of chloroquine. While not all possible misspellings are found in the actual Twitter data, a good portion of them (41%) are present, indicating how tricky it is to work with Twitter text data. Terms such as agar and coconut oil acquired high counts, as listed in Table 3. This is due to a few irrelevant terms prevailing in the dictionary. The drug dictionary curated from RxNorm consists of terms from different semantic types (Ingredient, Semantic Clinical Drug Component, Semantic Branded Drug Component, and Semantic Branded Drug). The Ingredient semantic type includes terms that are technically not drugs but are used in creating drugs (e.g., gelatin, agar, coconut oil).

Such terms were still included in the dictionary due to the granularity they bring to the research, particularly zinc, for which a study found that the addition of zinc sulfate to hydroxychloroquine and azithromycin increased the frequency of discharge of COVID-19 patients, and hence zinc may play a role in therapeutic management for COVID-19 (Carlucci et al., 2020). In order to evaluate the Symspell method, we counted the tagged term occurrences beyond those of the Spacy + RxNorm experiment, which gave us the number of 'misspellings' fixed that were now identified using the same dictionary. In other words, Table 3 shows the additional number of drug terms found when using the misspelling dictionaries (keyboard layout and QMisSpell methods) and identified during the tagging of the spelling-corrected tweets. Out of the top 200 drug terms we generated misspellings for, over 150 drug term misspelling variants were tagged, with Table 3 showing only the most frequent ones. At a glance, it is surprising that after spell-correcting the tweets only ~200K additional terms are tagged, but this is intuitive, as only drug-related terms are being tagged. With a general-purpose dictionary, we would expect this number to be considerably larger, given that Twitter data is quite noisy and constantly misspelled. The total added term counts can be found in Figure 1. As we can see in both Figure 1 and Table 1, by considering the possibility of misspellings we find around 15% more data. When dealing with limited text availability, particularly drug terms on Twitter, it is vital to use these types of methods to recover as much signal as we can. For this figure we manually removed the non-COVID-19-related terms and left only the more prevalent and relevant (in literature and news) drug terms, to simplify our representation and discussion.

<table><tr><td colspan="2">Keyboard Layout Method</td><td colspan="2">QMisSpell Method</td><td colspan="2">Symspell Method</td></tr><tr><td>Drug Name</td><td>Freq.</td><td>Drug Name</td><td>Freq.</td><td>Drug Name</td><td>Freq.</td></tr><tr><td>cloroquine</td><td>34,522</td><td>hydroxychloroquine</td><td>13,500</td><td>cloroquine</td><td>14,175</td></tr><tr><td>vitamin a</td><td>21,285</td><td>meted</td><td>10,424</td><td>hydroxychloroquine</td><td>11,580</td></tr><tr><td>remdesivir</td><td>16,981</td><td>allegra</td><td>10,339</td><td>azithromycin</td><td>10,528</td></tr><tr><td>agar</td><td>13,935</td><td>ibuprofen</td><td>8,640</td><td>nicotine</td><td>7,054</td></tr><tr><td>hydroxychloroquine</td><td>6,216</td><td>propane</td><td>6,984</td><td>doral</td><td>6,566</td></tr><tr><td>doral</td><td>3,496</td><td>coconut oil</td><td>4,812</td><td>cloroquine</td><td>4,620</td></tr><tr><td>nicotine</td><td>3,081</td><td>agar</td><td>2,518</td><td>aleve</td><td>2,004</td></tr><tr><td>cocaine</td><td>1,890</td><td>azithromycin</td><td>1,965</td><td>septa</td><td>1,914</td></tr><tr><td>zinc</td><td>1,424</td><td>aluminum</td><td>1,633</td><td>remdesivir</td><td>1,912</td></tr><tr><td>septa</td><td>1,006</td><td>acetaminophen</td><td>1,263</td><td>vitamin a</td><td>1,437</td></tr></table>

Table 3: Most frequent drugs found using the misspelled dictionaries.

![01963de4-4f94-73d0-831d-63163ddb17b9_4_209_276_1229_503_0.jpg](images/01963de4-4f94-73d0-831d-63163ddb17b9_4_209_276_1229_503_0.jpg)

Figure 1: Drug term occurrences in our dataset (frequency vs. drug name).

In order to evaluate the cost-benefit of using the misspelling methods to obtain additional data, we timed the execution of each method's components, from the generation of dictionaries (where applicable) to the total and average time taken for text tagging.

<table><tr><td>Method</td><td>Generation time (ms)</td><td>Total tagging time (min)</td><td>Avg tagging time (min, per 600,000 tweets)</td></tr><tr><td>Base (RxNorm Dictionary)</td><td>100</td><td>6,930</td><td>45</td></tr><tr><td>Keyboard Layout Method</td><td>2</td><td>6,314</td><td>41</td></tr><tr><td>QMisSpell Method</td><td>200</td><td>924</td><td>6</td></tr><tr><td>Symspell Method</td><td>NA</td><td>166,320</td><td>1,080</td></tr><tr><td>TextBlob</td><td>NA</td><td>72,000</td><td>3,600</td></tr></table>

Table 4: Misspelling method execution time analysis.

The first item to note is that for both the keyboard and QMisSpell methods, these times must be added to the baseline times, since they are executed independently, in addition to the base text tagging. Even so, they are nowhere near the computational expense of the Symspell or TextBlob spell correction, which are quite costly. For TextBlob, we only processed 20 days of data before aborting, as this package took on average 60 hours of execution time per 600,000 tweets.

In order to show the overlaps of terms tagged with each of the methods explored, Figure 2 shows the total overlap, in percentage points, between the newly tagged terms. There is a 35% overlap between the three methods, since many generated terms are produced by all of them; these are usually the more frequent or typical ones. This suggests that a brute-force approach (the keyboard-based method) might not be ideal for generating misspellings over large sets of terms, as it would add considerable computational expense.

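As a rough illustration of the overlap computation behind Figure 2, pairwise overlap between two methods can be expressed as the share of newly tagged terms common to both; the intersection-over-union formula and the example term sets below are assumptions for illustration, not the paper's exact definition or data.

```python
def overlap_pct(a: set, b: set) -> float:
    """Terms tagged by both methods, as a percentage of all terms
    tagged by either method."""
    union = a | b
    if not union:
        return 0.0
    return 100.0 * len(a & b) / len(union)

# Hypothetical newly tagged terms per method.
keyboard = {"cloroquine", "remdesivir", "doral", "nicotine"}
qmisspell = {"hydroxychloroquine", "ibuprofen", "cloroquine", "remdesivir"}
```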
![01963de4-4f94-73d0-831d-63163ddb17b9_4_851_1424_587_554_0.jpg](images/01963de4-4f94-73d0-831d-63163ddb17b9_4_851_1424_587_554_0.jpg)

Figure 2: Tagged misspelled term overlaps between the different methods. Note that this is in percentage points.

Finally, in Table 5 we quantify the individual and combined gains from each of the methods when it comes to recovering additional data points. Note that this table covers all 200 most frequent terms we generated misspellings for, not just the most common COVID-19 drug terms. The 14.99% constitutes the additional number of terms that can be identified when using the outlined misspelling identification techniques, on top of the 1,483,691 terms originally identified.

<table><tr><td>Method</td><td>Additional Terms Identified</td><td>Percentage Increase</td></tr><tr><td>Keyboard Layout Method</td><td>132,083</td><td>8.90%</td></tr><tr><td>QMisSpell Method</td><td>75,788</td><td>5.11%</td></tr><tr><td>Symspell Method</td><td>89,592</td><td>6.04%</td></tr><tr><td>Total</td><td>222,418</td><td>14.99%</td></tr></table>

Table 5: Additional terms identified.

## 4 Conclusions

In this work, we focused on extracting discourse related to the potential drug treatments available for COVID-19 patients. While seemingly a trivial task, we show that careful consideration is needed to properly deal with the constant misspellings found in Twitter data. We have shown that with a combination of methods we can identify around 15% additional terms, which would otherwise have been lost. With data being quite limited and not easily available, it is important to apply the proper techniques to identify the largest subset of data possible. We provide a quantifiable evaluation by generating misspellings with two different methods and by automated spell checking using Symspell. While the keyboard layout method is a fully automated way to generate misspellings, the QMisSpell method generates misspellings based on language models. We have shown that this work is indeed necessary for a thorough evaluation of the discourse regarding potential drug treatments for COVID-19 patients.

## 5 Future Work

Beyond the scope of this work, we theorize that using sentiment analysis and stance detection we could also identify how users respond to these drugs, and identify the drugs that help with certain symptoms. With careful analysis, this data can be used to monitor the perception of theorized treatment options. Unfortunately, not many pre-trained embedding models related to drug research are readily available. The language model used in this research was able to generate misspellings for 80% of the top 200 terms. It is immensely difficult to obtain drug-related data to create a language model that generates misspellings for all terms, due to the unavailability of the data. In the future, we would like to explore the use of deep learning models such as BioBERT or BERT for error correction and misspelling generation.

## Acknowledgments

Part of this research was developed during the COVID-19 Biohackathon April 5-11, 2020. We thank the organizers for coordinating the virtual hackathon during the COVID-19 crisis. We would like to thank Stephen Fleischman and HP Labs for providing us with server access to perform our experiments during our research server downtime.

## References

Firoj Alam, Ferda Ofli, Muhammad Imran, and Michael Aupetit. 2018. A Twitter Tale of Three Hurricanes: Harvey, Irma, and Maria. May.

Juan M. Banda, Ramya Tekumalla, Guanyu Wang, Jingyuan Yu, Tuo Liu, Yuning Ding, and Gerardo Chowell. 2020. A large-scale COVID-19 Twitter chatter dataset for open scientific research -- an international collaboration. April.

John H. Beigel, Kay M. Tomashek, Lori E. Dodd, Aneesh K. Mehta, Barry S. Zingman, Andre C. Kalil, Elizabeth Hohmann, Helen Y. Chu, Annie Luetkemeyer, Susan Kline, Diego Lopez de Castilla, Robert W. Finberg, Kerry Dierberg, Victor Tapson, Lanny Hsieh, Thomas F. Patterson, Roger Paredes, Daniel A. Sweeney, William R. Short, et al. 2020. Remdesivir for the Treatment of Covid-19 - Preliminary Report. The New England Journal of Medicine, May.

NLM Technical Bulletin. 2020. Investigational Drugs in RxNorm. April.

Philip Carlucci, Tania Ahuja, Christopher M. Petrilli, Harish Rajagopalan, Simon Jones, and Joseph Rahimian. 2020. Hydroxychloroquine and azithromycin plus zinc vs hydroxychloroquine and azithromycin alone: outcomes in hospitalized COVID-19 patients. medRxiv.

Fred J. Damerau. 1964. A technique for computer detection and correction of spelling errors. Communications of the ACM, 7(3):171-176, March.

Paul Earle. 2010. Earthquake Twitter. Nature Geoscience, 3(4):221-222, April.

Jianjun Gao, Zhenxue Tian, and Xu Yang. 2020. Breakthrough: Chloroquine phosphate has shown apparent efficacy in treatment of COVID-19 associated pneumonia in clinical studies. Bioscience Trends, 14(1):72-73, March.

Wolf Garbe. 2019. Symspell.

Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of Biomedical Informatics, 55:73-81, June.

Adam Lavertu and Russ B. Altman. 2019. RedMed: Extending drug lexicons for social media applications. Journal of Biomedical Informatics, 99:103307, November.

Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, volume 10, pages 707-710.

Hongzhou Lu. 2020. Drug treatment options for the 2019-new coronavirus (2019-nCoV). Bioscience Trends, 14(1):69-71, March.

National Library of Medicine. 2008. RxNorm [Internet]. October.

Peter Norvig. 2007. How to write a spelling corrector.

James M. Sanders, Marguerite L. Monogue, Tomasz Z. Jodlowski, and James B. Cutrell. 2020. Pharmacologic Treatments for Coronavirus Disease 2019 (COVID-19): A Review. JAMA: The Journal of the American Medical Association, April.

Abeed Sarker and Graciela Gonzalez-Hernandez. 2018. An unsupervised and customizable misspelling generator for mining noisy health-related text sources. Journal of Biomedical Informatics, 88:98-107, December.

Nigam Shah. 2020. Profiling presenting symptoms of patients screened for SARS-CoV-2. April.

Ramya Tekumalla, Javad Rafiei Asl, and Juan M. Banda. 2020. Mining Archive.org's Twitter Stream Grab for Pharmacovigilance Research Gold. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 909-917.

Ramya Tekumalla and Juan Banda. 2020a. A large-scale Twitter dataset for drug safety applications mined from publicly existing resources. January.

Ramya Tekumalla and Juan M. Banda. 2020b. Social Media Mining Toolkit (SMMT). Genomics & Informatics, 18(2):e16, June.

TextBlob. TextBlob.

World Health Organization. WHO characterizes COVID-19 as a pandemic.

Lei Zou, Nina S. N. Lam, Heng Cai, and Yi Qiang. 2018. Mining Twitter Data for Improved Understanding of Disaster Resilience. Annals of the Association of American Geographers, 108(5):1422-1441, September.